Ben Perlmutter for MongoDB

Build a Production-Ready, Intelligent Chatbot With the MongoDB Chatbot Framework

Chatbots powered by large language models (LLMs) like ChatGPT have become a central focus of the tech world over the last several months.

Tools and templates abound for getting started with this exciting new technology. It's quick, easy, and even fun to build a prototype. In fact, MongoDB already has a tutorial on how you can build a chatbot over your data using Atlas Vector Search and LangChain.

However, taking a chatbot prototype to production is where things stop being so easy. There are standard application development questions around how to persist data and secure your app. And now, there's a new crop of generative AI-related concerns like prompt engineering, retrieval-augmented generation (RAG), and qualitative evaluation of responses.

To help you take your intelligent chatbot from prototype to production, we've created the MongoDB Chatbot Framework.

What is the MongoDB Chatbot Framework?

The MongoDB Chatbot Framework is a set of libraries that you can use to build a production-ready, full-stack chatbot application using TypeScript, Express.js, and React. The framework provides first-class support for retrieval-augmented generation using Atlas Vector Search. It includes modules for using the OpenAI ChatGPT and Embeddings APIs, as well as a LangChain integration that lets you easily pull in other model providers such as Amazon Bedrock or Google Vertex AI. The framework is also extensible, so you can bring your own AI models.

The MongoDB Chatbot Framework is built entirely in TypeScript, distinguishing it from many Python libraries in the LLM application development ecosystem. Using TypeScript presents a number of advantages for the framework:

  • You can develop your entire application stack using a single language.
  • TypeScript's robust typing system gives the framework extensible interfaces that you can use to safely implement custom application logic and catch bugs early.
  • TypeScript's compatibility with JavaScript ecosystems allows seamless integration with a wide range of existing libraries and tools.
  • Server-side logic compiles to JavaScript and runs on Node.js, which is performant and battle-tested for asynchronous data processing.

Read on to learn more about the MongoDB Chatbot Framework and why you should use it. You can also explore the framework in its documentation.

Why Use the MongoDB Chatbot Framework?

The MongoDB Chatbot Framework lets you efficiently move from prototype to production. The framework provides interfaces that you can use to build intelligent chatbots with the latest and most effective chatbot development techniques.

We built the framework informed by our own experience building the MongoDB Docs Chatbot, which lets you talk to the MongoDB documentation. We will continue adding features to the MongoDB Chatbot Framework as we adopt them in the MongoDB Docs Chatbot.

The MongoDB Chatbot Framework takes care of general application boilerplate like:

  • Persisting conversations in MongoDB Atlas (see the API sketch after this list).
  • Error and edge case handling.
  • Streaming responses.
  • Providing extensible interfaces for customizing and securing application logic.
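
Once the server is running, clients talk to it through the conversation endpoints the framework sets up. Here's a minimal sketch of creating a conversation and then posting a user message to it. It assumes the default /api/v1 route prefix described in the server documentation, so treat the exact paths and response shapes as assumptions to verify against the API reference.

// A minimal client-side sketch, assuming the server's default /api/v1
// route prefix. Verify the exact paths and response shapes against the
// framework's API reference.
const BASE_URL = "http://localhost:3000/api/v1";

async function askChatbot(message: string) {
  // Create a new conversation and get its ID.
  const conversationRes = await fetch(`${BASE_URL}/conversations`, {
    method: "POST",
  });
  const conversation = await conversationRes.json();

  // Add a user message to the conversation; the server responds
  // with the assistant's message.
  const messageRes = await fetch(
    `${BASE_URL}/conversations/${conversation._id}/messages`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message }),
    }
  );
  return await messageRes.json();
}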

With the application boilerplate simplified, you can focus on building the best generative AI experience for your users.

In our experience building the Docs Chatbot, one of the most difficult aspects of the intelligent chatbot development process was knowing what to focus on. Given the novelty and rapid evolution of the generative AI space, it can feel like a full-time job just keeping up to date with the latest techniques, let alone incorporating them into the chatbot.

By using the MongoDB Chatbot Framework, you can make keeping up with intelligent chatbot development our full-time job, so it doesn't have to be yours.

The framework includes modules and interfaces for the most important aspects of building an intelligent chatbot, including:

  • Prompt engineering.
  • Using RAG to ground chatbot answers in external knowledge.
  • Context window management.
  • Efficiently ingesting data to use with RAG.

Learn more about the MongoDB Chatbot Framework in the documentation.

Framework Core Modules

The MongoDB Chatbot Framework includes the following core modules:

  • Chatbot server: TypeScript/Express.js server that serves and persists conversations between a user and an LLM
  • Chatbot UI: React components that you can use to build a chatbot front end that communicates with the Chatbot server
  • Ingest CLI: CLI application that you can use to ingest data from your data sources to use in RAG

In the remainder of this section, we'll take a tour through these components of the MongoDB Chatbot Framework and look at how you can use them to build an intelligent chatbot.

Chatbot Server

The Chatbot Server is a set of TypeScript and Express.js modules that orchestrates communication between users and an LLM like GPT-3.5 and persists conversation data in MongoDB Atlas.

The Chatbot Server comes with an implementation that supports OpenAI's LLMs, and you can extend the interface to support other LLMs as well. You can quickly get your server up and running using an OpenAI model like GPT-3.5 or GPT-4, and then substitute models as you develop the application, as sketched below.
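
For example, the framework's documentation describes a makeLangchainChatLlm helper for wrapping a LangChain chat model. The sketch below uses it with LangChain's Amazon Bedrock integration; the exact package names, model ID, and options here are assumptions to verify against the respective docs.

import { makeLangchainChatLlm } from "mongodb-chatbot-server";
import { BedrockChat } from "@langchain/community/chat_models/bedrock";

// Swap the default OpenAI LLM for a model served through Amazon Bedrock.
// The model ID and region are illustrative.
const llm = makeLangchainChatLlm({
  chatModel: new BedrockChat({
    model: "anthropic.claude-v2",
    region: "us-east-1",
  }),
});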

Conversations are stored in MongoDB Atlas. The Atlas developer data platform comes with robust security, scalability, and data management capabilities. For example, you can perform analytic queries using the Aggregation Framework or visualize conversation data using Atlas Charts.
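
For instance, here's a sketch of an aggregation that counts stored messages per day. The collection name and document shape (a messages array with createdAt timestamps) are assumptions about how conversations are stored, so adjust the names to match your data.

import { MongoClient } from "mongodb";

// Sketch: daily message volume across all stored conversations.
// The collection name and document shape (a `messages` array with
// `createdAt` dates) are assumptions about the conversation schema.
async function countMessagesPerDay(connectionUri: string, dbName: string) {
  const client = new MongoClient(connectionUri);
  try {
    return await client
      .db(dbName)
      .collection("conversations")
      .aggregate([
        { $unwind: "$messages" },
        {
          $group: {
            _id: {
              $dateToString: {
                format: "%Y-%m-%d",
                date: "$messages.createdAt",
              },
            },
            count: { $sum: 1 },
          },
        },
        { $sort: { _id: 1 } },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}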

The Chatbot Server offers first-class support for RAG using Atlas Vector Search. This allows the chatbot to pull relevant information from external data sources, enriching its responses with accurate and up-to-date knowledge. The built-in RAG module is pluggable, so you can add custom retrieval logic. You can use Atlas Vector Search on a collection in the same MongoDB database as your conversation data, giving you a single platform to manage all application data.
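
For retrieval to work, the embedded content collection needs an Atlas Vector Search index on the embedding field. A definition along these lines is typical; the 1536 dimensions assume OpenAI's text-embedding-ada-002 model, so set numDimensions to match whichever embedding model you use.

{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}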

The server also provides the flexibility to incorporate custom application logic, including specific LLM prompts, RAG configurations, authentication mechanisms, and deployment strategies.

Here's the code for an example implementation of the chatbot server that performs RAG using Atlas Vector Search:



import {
  MongoClient,
  makeMongoDbEmbeddedContentStore,
  makeOpenAiEmbedder,
  makeMongoDbConversationsService,
  AppConfig,
  makeOpenAiChatLlm,
  OpenAiChatMessage,
  SystemPrompt,
  makeDefaultFindContent,
  logger,
  makeApp,
  GenerateUserPromptFunc,
  makeRagGenerateUserPrompt,
  MakeUserMessageFunc,
} from "mongodb-chatbot-server";
import { OpenAIClient, OpenAIKeyCredential } from "@azure/openai";

const {
  MONGODB_CONNECTION_URI,
  MONGODB_DATABASE_NAME,
  VECTOR_SEARCH_INDEX_NAME,
  OPENAI_API_KEY,
  OPENAI_EMBEDDING_MODEL,
  OPENAI_CHAT_COMPLETION_MODEL,
} = process.env;

// Create the OpenAI client
// for interacting with the OpenAI API (ChatGPT API and Embedding API)
const openAiClient = new OpenAIClient(new OpenAIKeyCredential(OPENAI_API_KEY));

// Chatbot LLM for responding to the user's query.
const llm = makeOpenAiChatLlm({
  openAiClient,
  deployment: OPENAI_CHAT_COMPLETION_MODEL,
  openAiLmmConfigOptions: {
    temperature: 0,
    maxTokens: 500,
  },
});

// MongoDB data source for the content used in RAG.
// Generated with the Ingest CLI.
const embeddedContentStore = makeMongoDbEmbeddedContentStore({
  connectionUri: MONGODB_CONNECTION_URI,
  databaseName: MONGODB_DATABASE_NAME,
});

// Creates vector embeddings for user queries to find matching content
// in the embeddedContentStore using Atlas Vector Search.
const embedder = makeOpenAiEmbedder({
  openAiClient,
  deployment: OPENAI_EMBEDDING_MODEL,
  backoffOptions: {
    numOfAttempts: 3,
    maxDelay: 5000,
  },
});

// Find content in the embeddedContentStore using the vector embeddings
// generated by the embedder.
const findContent = makeDefaultFindContent({
  embedder,
  store: embeddedContentStore,
  findNearestNeighborsOptions: {
    k: 5,
    path: "embedding",
    indexName: VECTOR_SEARCH_INDEX_NAME,
    minScore: 0.9,
  },
});

// Constructs the user message sent to the LLM from the initial user message
// and the content found by the findContent function.
const makeUserMessage: MakeUserMessageFunc = async function ({
  content,
  originalUserMessage,
}): Promise<OpenAiChatMessage & { role: "user" }> {
  const chunkSeparator = "~~~~~~";
  const context = content.map((c) => c.text).join(`\n${chunkSeparator}\n`);
  const contentForLlm = `Using the following information, answer the user query.
Different pieces of information are separated by "${chunkSeparator}".

Information:
${context}


User query: ${originalUserMessage}`;
  return { role: "user", content: contentForLlm };
};

// Generates the user prompt for the chatbot using RAG
const generateUserPrompt: GenerateUserPromptFunc = makeRagGenerateUserPrompt({
  findContent,
  makeUserMessage,
});

// System prompt for chatbot
const systemPrompt: SystemPrompt = {
  role: "system",
  content: `You are an assistant to users of the MongoDB Chatbot Framework.
Answer their questions about the framework in a friendly conversational tone.
Format your answers in Markdown.
Be concise in your answers.
If you do not know the answer to the question based on the information provided,
respond: "I'm sorry, I don't know the answer to that question. Please try to rephrase it. Refer to the below information to see if it helps."`,
};

// Create MongoDB collection and service for storing user conversations
// with the chatbot.
const mongodb = new MongoClient(MONGODB_CONNECTION_URI);
const conversations = makeMongoDbConversationsService(
  mongodb.db(MONGODB_DATABASE_NAME),
  systemPrompt
);

// Create the MongoDB Chatbot Server Express.js app configuration
const config: AppConfig = {
  conversationsRouterConfig: {
    llm,
    conversations,
    generateUserPrompt,
  },
  maxRequestTimeoutMs: 30000,
  serveStaticSite: true,
};

// Start the server and clean up resources on SIGINT.
const PORT = process.env.PORT || 3000;
const startServer = async () => {
  logger.info("Starting server...");
  const app = await makeApp(config);
  const server = app.listen(PORT, () => {
    logger.info(`Server listening on port: ${PORT}`);
  });

  process.on("SIGINT", async () => {
    logger.info("SIGINT signal received");
    await mongodb.close();
    await embeddedContentStore.close();
    await new Promise<void>((resolve, reject) => {
      server.close((error: any) => {
        error ? reject(error) : resolve();
      });
    });
    process.exit(1);
  });
};

// startServer is async, so attach a rejection handler; a synchronous
// try/catch would not catch errors from the returned promise.
startServer().catch((e) => {
  logger.error(`Fatal error: ${e}`);
  process.exit(1);
});



To learn more about the MongoDB Chatbot Server, visit the framework's server documentation.

Chatbot UI

The Chatbot UI provides a set of React UI elements that you can use to add a chatbot front end to your website. The components are customizable so you can tailor them to your use case.

Here's a basic code example of how you can add the chatbot UI components to your React application:



import Chatbot, {
  FloatingActionButtonTrigger,
  InputBarTrigger,
  ModalView,
} from "mongodb-chatbot-ui";

function MyApp() {
  const suggestedPrompts = [
    "How do I create a new MongoDB Atlas cluster?",
    "Can MongoDB store lists of data?",
    "How does vector search work?",
  ];
  return (
    <div>
      <Chatbot>
        <InputBarTrigger suggestedPrompts={suggestedPrompts} />
        <FloatingActionButtonTrigger text="My MongoDB AI" />
        <ModalView
          initialMessageText="Welcome to MongoDB AI Assistant. What can I help you with?"
          initialMessageSuggestedPrompts={suggestedPrompts}
        />
      </Chatbot>
    </div>
  );
}



This creates a UI like the following:

[Image: MongoDB Chatbot Framework UI demo]

To learn more about the MongoDB chatbot UI, visit the framework's UI documentation.

Ingest CLI

The Ingest CLI module is a tool that you can use to ingest the data that your chatbot uses for RAG.

The Ingest CLI performs the following:

  • Pulls data from data sources
  • Breaks the documents into chunks to use in RAG
  • Creates vector embeddings for the chunked content
  • Stores the chunks and their embeddings in a MongoDB collection that's indexed with Atlas Vector Search (a sketch of the resulting document shape follows this list)
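
The stored documents pair each chunk with its embedding. This is an illustrative TypeScript shape, not the framework's authoritative schema; verify the actual field names against the mongodb-rag-core documentation.

// Illustrative shape of a document in the embedded content collection.
// Field names are assumptions; consult the framework docs for the real schema.
interface EmbeddedContentDoc {
  sourceName: string; // which DataSource the chunk came from
  url: string; // page the chunk was extracted from
  text: string; // the chunk text used as RAG context
  tokenCount: number; // chunk size in tokens, for context window budgeting
  embedding: number[]; // vector indexed by Atlas Vector Search
  updated: Date; // when the chunk was last (re)ingested
}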

The Ingest CLI handles complex logic surrounding performant data ingestion, such as only ingesting data when the underlying source data has been updated or your configuration changes.

The Ingest CLI lets you focus more on the application-specific aspects of data ingestion. It comes with a number of interfaces and configurations that you can plug into to manage ingesting data for RAG.

The DataSource interface lets you define how you ingest external data. Use the DataSource interface to build your own data fetchers from scratch or wrap other popular data fetchers, like LangChain.js document loaders or Unstructured. You can define DataSource implementations that ingest diverse forms of data like Markdown, PDFs, HTML, JSON, and CSV. Once you define how you fetch data, running the Ingest CLI programmatically updates the chunked data used in RAG.
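
As a concrete illustration, here's a hedged sketch of a DataSource that ingests local Markdown files. It follows the shape shown in the configuration example below (a name plus a fetchPages function returning Page objects); the specific Page fields are assumptions to verify against the mongodb-rag-ingest documentation.

import fs from "fs/promises";
import path from "path";
import { type DataSource } from "mongodb-rag-ingest/sources";
import { type Page } from "mongodb-rag-core";

// Sketch: a DataSource that ingests every Markdown file in a directory.
// The Page fields below are assumptions; check the framework docs for
// the authoritative type.
export function makeLocalMarkdownDataSource(dir: string): DataSource {
  return {
    name: "local-markdown",
    async fetchPages(): Promise<Page[]> {
      const fileNames = (await fs.readdir(dir)).filter((name) =>
        name.endsWith(".md")
      );
      return Promise.all(
        fileNames.map(async (name) => ({
          url: `file://${path.join(dir, name)}`, // unique identifier for the page
          title: name,
          body: await fs.readFile(path.join(dir, name), "utf-8"),
          format: "md",
          sourceName: "local-markdown",
        }))
      );
    },
  };
}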

The Ingest CLI also includes modules that you can use to optimize ingestion for your application needs based on emerging RAG best practices. For example, you can choose which embedding model the CLI uses, how data is transformed before generating embeddings, and what the chunking strategy is.

You define what you ingest and how you ingest it in a configuration file that resembles the following:



// ingest.config.ts
import { makeIngestMetaStore, type Config } from "mongodb-rag-ingest";
import {
  makeOpenAiEmbedder,
  OpenAIClient,
  AzureKeyCredential,
  makeMongoDbEmbeddedContentStore,
  makeMongoDbPageStore,
  type Page,
} from "mongodb-rag-core";
import { standardChunkFrontMatterUpdater } from "mongodb-rag-ingest/embed";
import { type DataSource } from "mongodb-rag-ingest/sources";

const {
  MONGODB_CONNECTION_URI,
  MONGODB_DATABASE_NAME,
  OPENAI_ENDPOINT,
  OPENAI_API_KEY,
  OPENAI_EMBEDDING_DEPLOYMENT,
} = process.env;

export default {
  // Embedding model used to create vector embeddings for chunked content.
  embedder: () =>
    makeOpenAiEmbedder({
      openAiClient: new OpenAIClient(
        OPENAI_ENDPOINT,
        new AzureKeyCredential(OPENAI_API_KEY)
      ),
      deployment: OPENAI_EMBEDDING_DEPLOYMENT,
      backoffOptions: {
        numOfAttempts: 25,
        startingDelay: 1000,
      },
    }),
  // Where chunked content and embeddings are stored.
  embeddedContentStore: () =>
    makeMongoDbEmbeddedContentStore({
      connectionUri: MONGODB_CONNECTION_URI,
      databaseName: MONGODB_DATABASE_NAME,
    }),
  // Where ingested pages are stored before chunking.
  pageStore: () =>
    makeMongoDbPageStore({
      connectionUri: MONGODB_CONNECTION_URI,
      databaseName: MONGODB_DATABASE_NAME,
    }),
  // Tracks ingestion metadata so only changed data is re-ingested.
  ingestMetaStore: () =>
    makeIngestMetaStore({
      connectionUri: MONGODB_CONNECTION_URI,
      databaseName: MONGODB_DATABASE_NAME,
      entryId: "all",
    }),
  chunkOptions: () => ({
    transform: standardChunkFrontMatterUpdater,
  }),
  dataSources: () => [
    // Add your own data sources here.
    {
      name: "my-data-source",
      async fetchPages() {
        // getPagesFromSomeSource is a placeholder for your own fetching logic.
        const pages: Page[] = await getPagesFromSomeSource();
        return pages;
      },
    },
  ],
} satisfies Config;



Compile the TypeScript and run data ingestion with the Ingest CLI:



tsc --outDir build && ingest all --config build/ingest.config.js



To learn more about the MongoDB Chatbot Ingest CLI, visit the framework's ingest documentation.

Start Building With the MongoDB Chatbot Framework

If you'd like to get started using the MongoDB Chatbot Framework, check out the Quick Start guide in the documentation. You can use the Quick Start to set up a full-stack application using the MongoDB Chatbot Framework, and then customize it to your application needs.

Top comments (1)

Ranjan Dailata

Great blog post. Pleased to see it coming from the MongoDB team.

Suggestion: chatbots come in many variations, including human-in-the-loop designs. It would be good to mention that this blog post specifically targets knowledge-based or question-answering systems built on private data.

I noticed the Azure OpenAI usage. However, I did not see the blog post mention rate limits, quotas, etc. It's really important to design a production-ready system that can handle load with concurrency in mind. Any production system needs to be load balanced and designed with great care.

Any usage of an LLM needs to be designed with care, especially when it comes to fallback mechanisms that deal with multiple LLM systems. This is where a production chatbot needs a clean, SOLID architecture to handle the various needs of a production system.