Danilo Poccia for AWS

Open Source Frameworks for Building Generative AI Applications

There are many amazing tools that help build generative AI applications. But getting started with a new tool takes time to learn and practice.

For this reason, I created a repository with examples of popular open source frameworks for building generative AI applications.

The examples also show how to use these frameworks with Amazon Bedrock.

You can find the repository here:

https://github.com/danilop/oss-for-generative-ai

In the rest of this article, I'll describe the frameworks I selected, what's in the sample code in the repository, and how they can be used in practice.

Frameworks Included

  • LangChain: A framework for developing applications powered by language models, featuring examples of:

    • Basic model invocation
    • Chaining prompts
    • Building an API
    • Creating a client
    • Implementing a chatbot
    • Using Bedrock Agents
  • LangGraph: An extension of LangChain for building stateful, multi-actor applications with large language models (LLMs)

  • Haystack: An end-to-end framework for building search systems and language model applications

  • LlamaIndex: A data framework for LLM-based applications, with examples of:

    • RAG (Retrieval-Augmented Generation)
    • Building an agent
  • DSPy: A framework for solving AI tasks using large language models

  • RAGAS: A framework for evaluating Retrieval Augmented Generation (RAG) pipelines

  • LiteLLM: A library to standardize the use of LLMs from different providers

Frameworks Overview

LangChain

A framework for developing applications powered by language models.

Key Features:

  • Modular components for LLM-powered applications
  • Chains and agents for complex LLM workflows
  • Memory systems for contextual interactions
  • Integration with various data sources and APIs

Primary Use Cases:

  • Building conversational AI systems
  • Creating domain-specific question-answering systems
  • Developing AI-powered automation tools
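
To give a flavor of what the LangChain examples in the repository look like, here's a minimal sketch of invoking a model on Amazon Bedrock and chaining it with a prompt template. The model ID, region, and prompt are illustrative assumptions, not the exact code from the repository.

```python
# Minimal sketch: invoke a Bedrock model via LangChain and chain it with a prompt.
# The model ID and region are illustrative; adjust them to your account.
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
)

prompt = ChatPromptTemplate.from_template(
    "Explain {topic} in one short paragraph."
)

# Chain: prompt -> model -> plain string output
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"topic": "retrieval-augmented generation"}))
```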

LangGraph

An extension of LangChain for building stateful, multi-actor applications with LLMs.

Key Features:

  • Graph-based workflow management
  • State management for complex agent interactions
  • Tools for designing and implementing multi-agent systems
  • Cyclic workflows and feedback loops

Primary Use Cases:

  • Creating collaborative AI agent systems
  • Implementing complex, stateful AI workflows
  • Developing AI-powered simulations and games
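
As a rough sketch of what a LangGraph workflow looks like, the snippet below wires a single node into a stateful graph. In a real application the node would call an LLM (for example, through ChatBedrock); here it just echoes the question, and the state schema is an assumption for illustration.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    answer: str

def answer_node(state: State) -> dict:
    # A real node would call an LLM here (e.g. via ChatBedrock)
    return {"answer": f"You asked: {state['question']}"}

graph = StateGraph(State)
graph.add_node("answer", answer_node)
graph.set_entry_point("answer")
graph.add_edge("answer", END)

app = graph.compile()
print(app.invoke({"question": "What does LangGraph add on top of LangChain?"}))
```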

Haystack

An open-source framework for building production-ready LLM applications.

Key Features:

  • Composable AI systems with flexible pipelines
  • Multi-modal AI support (text, image, audio)
  • Production-ready with serializable pipelines and monitoring

Primary Use Cases:

  • Building RAG pipelines and search systems
  • Developing conversational AI and chatbots
  • Content generation and summarization
  • Creating agentic pipelines with complex workflows
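
Below is a small sketch of a Haystack 2.x pipeline that connects a prompt builder to a Bedrock generator. It assumes the amazon-bedrock-haystack integration package is installed; the template and model ID are illustrative.

```python
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
# Requires the amazon-bedrock-haystack integration package
from haystack_integrations.components.generators.amazon_bedrock import AmazonBedrockGenerator

pipeline = Pipeline()
pipeline.add_component("prompt_builder", PromptBuilder(
    template="Summarize the following text in one sentence: {{ text }}"
))
pipeline.add_component("generator", AmazonBedrockGenerator(
    model="anthropic.claude-3-sonnet-20240229-v1:0"
))
pipeline.connect("prompt_builder", "generator")

result = pipeline.run({
    "prompt_builder": {"text": "Haystack pipelines compose components into LLM applications."}
})
print(result["generator"]["replies"][0])
```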

LlamaIndex

A data framework for building LLM-powered applications.

Key Features:

  • Advanced data ingestion and indexing
  • Query processing and response synthesis
  • Support for various data connectors
  • Customizable retrieval and ranking algorithms

Primary Use Cases:

  • Creating knowledge bases and question-answering systems
  • Implementing semantic search over large datasets
  • Building context-aware AI assistants
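
As a sketch of the RAG pattern LlamaIndex is built around, the snippet below indexes a local folder of documents and queries it, using Bedrock for both generation and embeddings. The model IDs, the ./data path, and the question are assumptions for illustration.

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.llms.bedrock import Bedrock
from llama_index.embeddings.bedrock import BedrockEmbedding

# Route both generation and embeddings through Amazon Bedrock (model IDs are illustrative)
Settings.llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")
Settings.embed_model = BedrockEmbedding(model_name="amazon.titan-embed-text-v2:0")

# Load local documents, build a vector index, and ask a question over it
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("What are the main topics covered in these documents?"))
```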

DSPy

A framework for solving AI tasks through declarative and optimizable language model programs.

Key Features:

  • Declarative programming model for LLM interactions
  • Automatic optimization of LLM prompts and parameters
  • Signature-based type system for LLM inputs/outputs
  • Teleprompter (now optimizer) for automatic prompt improvement

Primary Use Cases:

  • Developing robust and optimized NLP pipelines
  • Creating self-improving AI systems
  • Implementing complex reasoning tasks with LLMs
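
To illustrate the declarative style, here's a minimal sketch assuming a recent DSPy release where dspy.LM accepts LiteLLM-style model identifiers (including the bedrock/ prefix); the model ID and question are illustrative.

```python
import dspy

# Assumes a recent DSPy version where dspy.LM takes LiteLLM-style model IDs
lm = dspy.LM("bedrock/anthropic.claude-3-sonnet-20240229-v1:0")
dspy.configure(lm=lm)

# A declarative signature: you state inputs and outputs, DSPy builds the prompt
qa = dspy.Predict("question -> answer")
result = qa(question="What is retrieval-augmented generation?")
print(result.answer)
```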

RAGAS

An evaluation framework for Retrieval Augmented Generation (RAG) systems.

Key Features:

  • Automated evaluation of RAG pipelines
  • Multiple evaluation metrics (faithfulness, context relevancy, answer relevancy)
  • Support for different types of questions and datasets
  • Integration with popular RAG frameworks

Primary Use Cases:

  • Benchmarking RAG system performance
  • Identifying areas for improvement in RAG pipelines
  • Comparing different RAG implementations
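
Here's a minimal sketch of how a RAGAS evaluation might look on a toy sample, using the classic question/answer/contexts dataset format. By default RAGAS needs an evaluator LLM and embeddings, which can also be pointed at Bedrock; the single-row dataset below is only for illustration.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# A single toy sample; real evaluations use the full output of your RAG pipeline
data = Dataset.from_dict({
    "question": ["What is Amazon Bedrock?"],
    "answer": ["Amazon Bedrock is a managed service that offers foundation models via an API."],
    "contexts": [[
        "Amazon Bedrock is a fully managed service that makes foundation models "
        "from several providers available through a single API."
    ]],
})

# Pass llm=... and embeddings=... to route the evaluation itself through Bedrock
scores = evaluate(data, metrics=[faithfulness, answer_relevancy])
print(scores)
```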

LiteLLM

A unified interface for multiple LLM providers.

Key Features:

  • Standardized API for 100+ LLMs
  • Automatic fallback and load balancing
  • Caching and retry mechanisms
  • Usage tracking and budget management

Primary Use Cases:

  • Simplifying multi-LLM application development
  • Implementing model redundancy and fallback strategies
  • Managing LLM usage across different providers
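
As a sketch of the unified interface, the call below uses LiteLLM's OpenAI-style completion function with a Bedrock model; switching providers only changes the model string. The model ID and prompt are illustrative.

```python
from litellm import completion

# The same call shape works for Bedrock, OpenAI, Anthropic, and other providers;
# only the model string changes (this Bedrock model ID is illustrative).
response = completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Give me one tip for writing good prompts."}],
)
print(response.choices[0].message.content)
```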

Conclusion

Let me know if you used any of these tools. Did I miss something you'd like to share with others? Feel free to contribute back to the repository!

Top comments (2)

LH8PPL

Hi, thanks for the explanation. I want to get started with LLMs, but with one of the open-source LLMs on Hugging Face rather than Bedrock. Do you have any advice?

Danilo Poccia

You can use a Llama model; its weights are publicly available.