Namee for LLMWare


How I Combined Small Language Models to Automate Workflow like Financial Research

More and more people are recognizing that small language models can deliver results on par with much larger models when used in specific, well-defined workflows.

Clem Delangue, CEO of Hugging Face, even suggested in a recent VentureBeat article that up to 99% of use cases could be addressed using SLMs.


The next wave of Gen AI will be automating workflows

While chatbots are one of the great use cases for large language models, the next truly groundbreaking use case for Gen AI in 2024 will be automating workflows.

Of course, if you are an AI expert reading this, the idea of chaining together various AI agent workflows with prompts is not new. However, what is new, and still largely unexplored, is the concept of multi-model agentic workflows built with small language models (SLMs).


What are Small Language Models?

The definition of an SLM varies depending on who you ask.

In my opinion, an SLM is a model that can run fairly well without a GPU. That may sound simplistic, but it is my general rule, and it means that models of 7 billion parameters and under fit the definition today.

As of today, models larger than 7B tend to run excruciatingly slowly without a GPU, even when quantized.
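
To make that concrete, here is a minimal sketch of loading one of the quantized SLMs used later in this article entirely on CPU, using llmware's ModelCatalog (the same calls the full example below relies on):

from llmware.models import ModelCatalog

# load a small 3B-parameter model packaged to run without a GPU
model = ModelCatalog().load_model("bling-stablelm-3b-tool", temperature=0.0, sample=False)

# ask a simple question against a short passage of context
response = model.inference("What were total revenues?",
                           add_context="Third quarter revenues were slightly up at $12.4 billion.")
print(response["llm_response"])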


Why use Small Language Models?

There are many reasons to use SLMs:

1) the most obvious - you don't need GPUs;

2) they can easily be run on-prem, in a private cloud, on a laptop, or on edge devices;

3) they are more targeted and focused in the scope of their training and are much easier to fine-tune; and

4) they are easier to keep track of and audit.

I could honestly add many more, but I will stop here because these are probably the most important ones.

(I will add the caveat here that I am the founder of LLMWare, an open-source project providing an LLM-based application platform and over 60 SLMs on Hugging Face, and that the example that follows uses LLMWare.)


Here is a full end-to-end example of stacking 3 popular small language models (SLIM Extract, SLIM Summary and BLING StableLM 3B) along with 2 popular web services (Yahoo Finance and Wikipedia) to complete a financial research assignment with 30 different information keys, all on your laptop.


The Example Use Case: Combining 3 Different Models and 2 Web Services for Complex Financial Research

The workflow consists of three steps:

  1. Extracting key information from the source text using models.
  2. Performing secondary lookups on the extracted information using web services: YFinance for stock data and Wikipedia for company background information.
  3. Summarizing and structuring the extracted information into a comprehensive dictionary.

Models Used:

  • slim-extract-tool
  • slim-summary-tool
  • bling-stablelm-3b-tool

Web Services Used:

  • YFinance for stock ticker information
  • Wikipedia for company background information

Setup and Imports

First, let's import the necessary libraries and modules for our analysis.

from llmware.util import YFinance
from llmware.models import ModelCatalog
from llmware.parsers import WikiParser
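
(If llmware is not already installed, it is available on PyPI, so a pip install llmware should be enough to run the snippets below; check the project's repository for the latest setup notes.)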

Input Data

Our input for this example is an excerpt from a NIKE, Inc. earnings press release. We will extract and analyze information from this text.

text = ("_BEAVERTON, Ore.--(BUSINESS WIRE)--NIKE, Inc. (NYSE:NKE) today reported fiscal 2024 financial results for its "
        "third quarter ended February 29, 2024.) “We are making the necessary adjustments to drive NIKE’s next chapter "
        "of growth Post this Third quarter revenues were slightly up on both a reported and currency-neutral basis* "
        "at $12.4 billion NIKE Direct revenues were $5.4 billion, slightly up on a reported and currency-neutral basis "
        "NIKE Brand Digital sales decreased 3 percent on a reported basis and 4 percent on a currency-neutral basis "
        "Wholesale revenues were $6.6 billion, up 3 percent on a reported and currency-neutral basis Gross margin "
        "increased 150 basis points to 44.8 percent, including a detriment of 50 basis points due to restructuring charges "
        "Selling and administrative expense increased 7 percent to $4.2 billion, including $340 million of restructuring "
        "charges Diluted earnings per share was $0.77, including $0.21 of restructuring charges. Excluding these "
        "charges, Diluted earnings per share would have been $0.98* “We are making the necessary adjustments to "
        "drive NIKE’s next chapter of growth,” said John Donahoe, President & CEO, NIKE, Inc. “We’re encouraged by "
        "the progress we’ve seen, as we build a multiyear cycle of new innovation, sharpen our brand storytelling and "
        "work with our wholesale partners to elevate and grow the marketplace_.")

Step 1: Extract Information from Source Text

We begin by loading the models and extracting key information from the source text. The keys we are interested in include the stock ticker, company name, total revenues, and more.

Load models

model = ModelCatalog().load_model("slim-extract-tool", temperature=0.0, sample=False)
model2 = ModelCatalog().load_model("slim-summary-tool", sample=False, temperature=0.0, max_output=200)
model3 = ModelCatalog().load_model("bling-stablelm-3b-tool", sample=False, temperature=0.0)

research_summary = {}

# Extract information
extract_keys = ["stock ticker", "company name", "total revenues", "restructuring charges", "digital growth", "ceo comment", "quarter end date"]

for key in extract_keys:
    response = model.function_call(text, params=[key])
    dict_key = key.replace(" ", "_")
    if dict_key in response["llm_response"]:
        value = response["llm_response"][dict_key][0]
        research_summary.update({dict_key: value})
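
For reference, the loop above depends on the shape of the function call output: slim-extract returns a dictionary whose "llm_response" maps each requested key to a list of extracted values. A small illustration of that shape (the value shown is what we would expect from the NIKE text, not actual output):

# illustrative shape only - the real response may carry additional fields
example_response = {"llm_response": {"stock_ticker": ["NYSE:NKE"]}}
value = example_response["llm_response"]["stock_ticker"][0]   # -> "NYSE:NKE"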

Step 2: Secondary Lookups Using Extracted Information

With the extracted information, we perform secondary lookups using the YFinance web service to enrich our data with stock information, financial summaries, and company details.

if "stock_ticker" in research_summary:
    ticker = research_summary["stock_ticker"]
    ticker_core = ticker.split(":")[-1]  # strip the exchange prefix if present, e.g. "NYSE:NKE" -> "NKE"

    # Fetch stock summary information from YFinance
    yf = YFinance().get_stock_summary(ticker=ticker_core)
    print("yahoo finance stock info: ", yf)

    # Update research summary with financial data from YFinance
    financial_keys = ["current_stock_price", "high_ltm", "low_ltm", "trailing_pe", "forward_pe", "volume"]
    for key in financial_keys:
        research_summary.update({key: yf[key]})

    # Fetch detailed financial summary
    yf2 = YFinance().get_financial_summary(ticker=ticker_core)
    print("yahoo finance financial info - ", yf2)
    for key in ["market_cap", "price_to_sales", "revenue_growth", "ebitda", "gross_margin", "currency"]:
        research_summary.update({key: yf2[key]})
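
One small design note: the loops above assume every requested key is present in the YFinance response. If you want to be a little more defensive, for example when a ticker is missing a field, a simple variant is to skip absent keys:

# optional, more defensive variant: only copy fields the web service actually returned
for key in financial_keys:
    if key in yf:
        research_summary.update({key: yf[key]})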

Step 3: Use Extracted Company Name for Wikipedia Lookup

Next, we use the extracted company name to fetch background information from Wikipedia. This includes a company overview, founding date, and other relevant details.

if "company_name" in research_summary:
    company_name = research_summary["company_name"]
    wiki_output = WikiParser().add_wiki_topic(company_name, target_results=1)

    # Extract and summarize company overview from Wikipedia
    company_overview = "".join(block["text"] for block in wiki_output["blocks"][:3])
    summary = model2.function_call(company_overview, params=["company history (5)"])
    research_summary.update({"summary": summary["llm_response"]})

    # Extract founding date and company description
    founding_date_response = model.function_call(company_overview, params=["founding date"])
    company_description_response = model.function_call(company_overview, params=["company description"])
    research_summary.update({
        "founding_date": founding_date_response["llm_response"]["founding_date"][0],
        "company_description": company_description_response["llm_response"]["company_description"][0]
    })

    # Direct questions to the model about the company's business and products
    business_overview_response = model3.inference("What is an overview of company's business?", add_context=company_overview)
    origin_of_name_response = model3.inference("What is the origin of the company's name?", add_context=company_overview)
    products_response = model3.inference("What are the product names", add_context=company_overview)
    research_summary.update({
        "business_overview": business_overview_response["llm_response"],
        "origin_of_name": origin_of_name_response["llm_response"],
        "products": products_response["llm_response"]
    })
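
A quick note on the slice above: only the first three parsed Wikipedia blocks are joined into company_overview, presumably to keep the context passage short enough for the small models to work with comfortably. If you want the summary and Q&A steps to draw on more of the article, you can widen the slice (e.g., wiki_output["blocks"][:5]).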

Step 4: Completed Research - Summary Output

Finally, we display the structured research summary, which includes all the extracted and enriched information.

print("Completed Research - Summary Output\n")
item_counter = 1
for key, value in research_summary.items():
    if isinstance(value, str):
        value = value.replace("\n", "").replace("\r", "").replace("\t", "")
    print(f"\t\t -- {item_counter} - \t - {key.ljust(25)} - {str(value).ljust(40)}")
    item_counter += 1
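
If you want to keep the results around, the finished dictionary is straightforward to persist; here is a minimal sketch writing it out as JSON (the file name is arbitrary):

import json

# save the assembled research dictionary for later use
with open("research_summary.json", "w") as f:
    json.dump(research_summary, f, indent=2, default=str)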

Here is a video tutorial if you are more of a visual learner:

⭐️ Star LLMWare ⭐️

Please be sure to visit our website llmware.ai for more information and updates.
