Austin Vance for Focused

Unlock the Power of LangChain: Deploying to Production Made Easy

In this tutorial, Austin Vance, CEO and co-founder of Focused Labs, will guide you through deploying a PDF RAG with LangChain to production!

In this video, we walk step by step through deploying the RAG to production, turning a local prototype into an application running on the DigitalOcean App Platform. Don't forget to subscribe for more tutorials like this.


To recap the series so far:

In Part One, you learned how to:

  • Create a new app using LangChain's LangServe
  • Ingest PDFs with Unstructured.io
  • Chunk documents with LangChain's SemanticChunker
  • Embed the chunks using OpenAI's embeddings API
  • Store the embedded chunks in PGVector, a Postgres-backed vector store
  • Build an LCEL chain for LangServe that uses PGVector as a retriever (sketched below)
  • Use the LangServe playground to test the RAG
  • Stream output, including document sources, to a future front end
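To make those Part One pieces concrete, here is a minimal sketch of that backend: a LangServe app whose LCEL chain retrieves chunks from PGVector and returns the answer together with its source documents. The connection string, collection name, model, and prompt wording are illustrative assumptions, not the exact code from the video or the repo.

```python
import os

from fastapi import FastAPI
from langchain_community.vectorstores.pgvector import PGVector
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langserve import add_routes

# Vector store backed by Postgres + pgvector; the ingestion step stored the
# embedded PDF chunks in this collection.
vector_store = PGVector(
    connection_string=os.environ["DATABASE_URL"],  # assumed env var name
    collection_name="pdf_rag",                     # assumed collection name
    embedding_function=OpenAIEmbeddings(),
)
retriever = vector_store.as_retriever(search_kwargs={"k": 4})

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# The answering sub-chain formats the already-retrieved documents into the prompt.
answer_chain = (
    RunnablePassthrough.assign(context=lambda x: format_docs(x["context"]))
    | prompt
    | ChatOpenAI(model="gpt-4o-mini", temperature=0)  # assumed model
    | StrOutputParser()
)

# Return both the retrieved documents and the answer so the front end can show sources.
chain = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
).assign(answer=answer_chain)

app = FastAPI(title="PDF RAG")
# Exposes /rag/invoke, /rag/stream, and /rag/playground.
add_routes(app, chain, path="/rag")
```

The /rag/playground route is what the video uses to sanity-check retrieval before any front end exists.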

In Part 2 we will focus on:

  • Creating a front end with TypeScript, React, and Tailwind
  • Displaying the sources of information alongside the LLM output
  • Streaming to the front end with Server-Sent Events (the stream format is sketched below)
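The front end itself is written in TypeScript, but the wire format it consumes is easy to see from any SSE client. Here is a minimal Python sketch of reading the same stream, assuming the chain above is mounted at /rag on localhost:8000 and following LangServe's /stream request convention.

```python
import httpx

# Stream the chain's output as Server-Sent Events, the same events the React
# front end listens to; print each "data:" payload (JSON-encoded chunks of the
# chain output) as it arrives.
with httpx.stream(
    "POST",
    "http://localhost:8000/rag/stream",                      # assumed local URL
    json={"input": "What does the PDF say about pricing?"},  # example question
    timeout=None,
) as response:
    for line in response.iter_lines():
        if line.startswith("data:"):
            print(line[len("data:"):].strip())
```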

In Part 3 we will focus on:

  • Deploying the backend application to DigitalOcean
  • Deploying the front end to the DigitalOcean App Platform
  • Using a managed PostgreSQL database (wiring sketched below)
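At the code level, the main Part 3 changes are pointing the app at the managed database and binding to the port App Platform expects. A minimal sketch under those assumptions; the environment variable names and module path are illustrative, and the rest of the deployment lives in the App Platform app spec.

```python
import os

import uvicorn

def pgvector_connection_string() -> str:
    """Adapt the managed database URL for the SQLAlchemy-based PGVector store."""
    # DigitalOcean managed Postgres hands out URLs like
    # postgresql://user:pass@host:25060/db?sslmode=require
    url = os.environ["DATABASE_URL"]  # assumed to be injected via the app spec
    if url.startswith("postgresql://"):
        url = url.replace("postgresql://", "postgresql+psycopg2://", 1)
    return url

if __name__ == "__main__":
    # App Platform routes HTTP traffic to the port in $PORT (8080 by default).
    uvicorn.run(
        "app.server:app",  # assumed module path for the FastAPI app
        host="0.0.0.0",
        port=int(os.environ.get("PORT", 8080)),
    )
```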

In Part 4 we will focus on:

  • Adding memory to the LangChain chain with PostgreSQL
  • Adding multi-query retrieval to the chain for better breadth of search
  • Adding sessions to the chat history (all three sketched below)
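As a preview, here is a minimal sketch of how those three pieces could fit together on top of the Part One retriever: MultiQueryRetriever rewrites the question into several variants for broader recall, and RunnableWithMessageHistory persists per-session chat history in Postgres. The prompt wording, model, and key names are assumptions.

```python
import os

from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.chat_message_histories import PostgresChatMessageHistory
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnablePassthrough
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # assumed model

# Rewrite the question into several variants and merge the retrieved chunks for
# better breadth of search; `retriever` is the PGVector retriever from the
# earlier sketch.
multi_query_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only this context:\n{context}"),
    MessagesPlaceholder("history"),  # prior turns for this session
    ("human", "{question}"),
])

chain = (
    RunnablePassthrough.assign(
        context=lambda x: "\n\n".join(
            doc.page_content for doc in multi_query_retriever.invoke(x["question"])
        )
    )
    | prompt
    | llm
    | StrOutputParser()
)

def get_session_history(session_id: str) -> PostgresChatMessageHistory:
    # One persisted message history per session_id, stored in Postgres.
    return PostgresChatMessageHistory(
        connection_string=os.environ["DATABASE_URL"],
        session_id=session_id,
    )

chain_with_memory = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="question",
    history_messages_key="history",
)

# Each session keeps its own chat history:
# chain_with_memory.invoke(
#     {"question": "And what about refunds?"},
#     config={"configurable": {"session_id": "user-123"}},
# )
```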

GitHub repo

https://github.com/focused-labs/pdf_rag
