Connie Leung

Build a RAG application to learn Angular using LangChain.js, NestJS, HTMX, and Gemma 2

In this blog post, I describe how to use LangChain, NestJS, and Gemma 2 to build a RAG application that answers questions about an Angular book in PDF format. HTMX and the Handlebars template engine render the responses in a list. The application uses LangChain's built-in PDF loader to load the book and split the document into chunks. LangChain then uses the Gemini text embedding model to embed the chunks as vectors and persists the vectors in a vector database. The vector store retriever supplies the relevant chunks as context so that the large language model (LLM) can generate correct responses.

Set up environment variables

Create a .env file at the root of the project and add the following variables:

PORT=3001
GROQ_API_KEY=<GROQ API KEY>
GROQ_MODEL=gemma2-9b-it
GEMINI_API_KEY=<GEMINI API KEY>
GEMINI_TEXT_EMBEDDING_MODEL=text-embedding-004
HUGGINGFACE_API_KEY=<Huggingface API KEY>
HUGGINGFACE_EMBEDDING_MODEL=BAAI/bge-small-en-v1.5
QDRANT_URL=<Qdrant URL>
QDRANT_API_KEY=<Qdrant API KEY>

Navigate to https://aistudio.google.com/app/apikey and sign in to create a new API key. Set GEMINI_API_KEY to this key.

Navigate to Groq Cloud, https://console.groq.com/, sign up, and create a new API key. Set GROQ_API_KEY to this key.

Navigate to Hugging Face, https://huggingface.co/join, sign up, and create a new access token. Set HUGGINGFACE_API_KEY to this token.

Navigate to Qdrant, https://cloud.qdrant.io/, sign up, and create a Qdrant space. Set QDRANT_URL to the cluster URL and QDRANT_API_KEY to the API key.

Install the dependencies

npm i --save-exact @google/generative-ai @huggingface/inference \
  @langchain/community @langchain/core @langchain/google-genai \
  @langchain/groq @langchain/qdrant @nestjs/config @nestjs/swagger \
  @nestjs/throttler class-transformer class-validator compression hbs \
  langchain pdf-parse
npm i --save-exact --save-dev @commitlint/cli \
  @commitlint/config-conventional husky lint-staged

Define the configuration in the application

Create a src/configs folder and add a configuration.ts file to it:

export default () => ({
 port: parseInt(process.env.PORT, 10) || 3000,
 groq: {
   apiKey: process.env.GROQ_API_KEY || '',
   model: process.env.GROQ_MODEL || 'gemma2-9b-it',
 },
 gemini: {
   apiKey: process.env.GEMINI_API_KEY || '',
   embeddingModel: process.env.GEMINI_TEXT_EMBEDDING_MODEL || 'text-embedding-004',
 },
 huggingface: {
   apiKey: process.env.HUGGINGFACE_API_KEY || '',
   embeddingModel: process.env.HUGGINGFACE_EMBEDDING_MODEL || 'BAAI/bge-small-en-v1.5',
 },
 qdrant: {
   url: process.env.QDRANT_URL || 'http://localhost:6333',
   apiKey: process.env.QDRANT_API_KEY || '',
 },
});
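To make these values available through ConfigService, register the configuration in the root module. Here is a minimal sketch, assuming the standard AppModule generated by the Nest CLI:

// app.module.ts (a sketch; the rest of the module is abbreviated)
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import configuration from '~configs/configuration';

@Module({
 imports: [
   ConfigModule.forRoot({
     isGlobal: true, // make ConfigService injectable in every module
     load: [configuration],
   }),
 ],
})
export class AppModule {}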

Create a Groq Module

Generate a Groq module, a service, and a controller.

nest g mo groq
nest g s groq/application/groq --flat
nest g co groq/presenters/http/groq --flat 

Add a chat model

Define a Groq configuration type, application/types/groq-config.type.ts, in the module. The ConfigService casts the configuration values to this custom type.

export type GroqConfig = {
 model: string;
 apiKey: string;
};

Add a custom provider that provides an instance of ChatGroq. Create a groq.constant.ts file under the application/constants folder.

// application/constants/groq.constant.ts

export const GROQ_CHAT_MODEL = 'GROQ_CHAT_MODEL';
// application/providers/groq-chat-model.provider.ts

import { ChatGroq } from '@langchain/groq';
import { Provider } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { GROQ_CHAT_MODEL } from '~groq/application/constants/groq.constant';
import { GroqConfig } from '~groq/application/types/groq-config.type';

export const GroqChatModelProvider: Provider<ChatGroq> = {
 provide: GROQ_CHAT_MODEL,
 useFactory: (configService: ConfigService) => {
   const { apiKey, model } = configService.get<GroqConfig>('groq');
   return new ChatGroq({
     apiKey,
     model,
     temperature: 0.1,
     maxTokens: 2048,
     streaming: false,
   });
 },
 inject: [ConfigService],
};

Test the Groq chat model in a service and a controller

// application/groq.service.ts

import { MessageContent } from '@langchain/core/messages';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { ChatGroq } from '@langchain/groq';
import { Inject, Injectable } from '@nestjs/common';
import { GROQ_CHAT_MODEL } from './constants/groq.constant';

@Injectable()
export class GroqService {
 constructor(@Inject(GROQ_CHAT_MODEL) private model: ChatGroq) {}

 async generateText(input: string): Promise<MessageContent> {
   const prompt = ChatPromptTemplate.fromMessages([
     ['system', 'You are a helpful assistant'],
     ['human', '{input}'],
   ]);

   const chain = prompt.pipe(this.model);
   const response = await chain.invoke({
     input,
   });

   return response.content;
 }
}

The GroqService has a generateText method that takes a query and asks the model to generate a text response.

// presenters/http/groq.controller.ts

import { MessageContent } from '@langchain/core/messages';
import { Controller, Get } from '@nestjs/common';
import { GroqService } from '~groq/application/groq.service';

@Controller('groq')
export class GroqController {
 constructor(private service: GroqService) {}

 @Get()
 testChain(): Promise<MessageContent> {
   return this.service.generateText('What is Agentic RAG?');
 }
}
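With the application running (PORT=3001 from the .env file above), a quick smoke test of the endpoint:

curl http://localhost:3001/groq

The response should be Gemma 2's answer to "What is Agentic RAG?".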

Export the chat model from the module

import { Module } from '@nestjs/common';
import { GroqChatModelProvider } from './application/providers/groq-chat-model.provider';
import { GroqService } from './application/groq.service';
import { GroqController } from './presenters/http/groq.controller';

@Module({
 providers: [GroqChatModelProvider, GroqService],
 controllers: [GroqController],
 exports: [GroqChatModelProvider],
})
export class GroqModule {}

Create a vector store module

nest g mo vectorStore
nest g s vectorStore/application/vectorStore --flat

Add configuration types

Define configuration types under application/types folder.

This is the configuration type for the embedding model. The application supports both the Gemini text embedding model and the Hugging Face Inference embedding model.

// application/types/embedding-model-config.type.ts

export type EmbeddingModelConfig = {
 apiKey: string;
 embeddingModel: string;
};

The application supports an in-memory vector store and the Qdrant vector store; therefore, it also needs a Qdrant configuration.

// application/types/qdrant-database-config.type.ts

export type QdrantDatabaseConfig = {
 apiKey: string;
 url: string;
};

This configuration stores the split documents, vector database type and embedding model.

// application/types/vector-databases.type.ts

export type VectorDatabasesType = 'MEMORY' | 'QDRANT';
// application/types/vector-store-config.type.ts

import { Document } from '@langchain/core/documents';
import { Embeddings } from '@langchain/core/embeddings';
import { VectorDatabasesType } from './vector-databases.type';

export type VectorDatabaseFactoryConfig = {
 docs: Document<Record<string, any>>[];
 type: VectorDatabasesType;
 embeddings: Embeddings;
};

export type DatabaseConfig = Omit<VectorDatabaseFactoryConfig, 'type'>;

Create a configurable embedding model

// application/types/embedding-models.type.ts

export type EmbeddingModels = 'GEMINI_AI' | 'HUGGINGFACE_INFERENCE';
// application/embeddings/create-embedding-model.ts

import { TaskType } from '@google/generative-ai';
import { HuggingFaceInferenceEmbeddings } from '@langchain/community/embeddings/hf';
import { Embeddings } from '@langchain/core/embeddings';
import { GoogleGenerativeAIEmbeddings } from '@langchain/google-genai';
import { InternalServerErrorException } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { EmbeddingModelConfig } from '../types/embedding-model-config.type';
import { EmbeddingModels } from '../types/embedding-models.type';

function createGeminiTextEmbeddingModel(configService: ConfigService) {
 const { apiKey, embeddingModel: model } = configService.get<EmbeddingModelConfig>('gemini');
 return new GoogleGenerativeAIEmbeddings({
   apiKey,
   model,
   taskType: TaskType.RETRIEVAL_DOCUMENT,
   title: 'Angular Book',
 });
}

function createHuggingfaceInferenceEmbeddingModel(configService: ConfigService) {
 const { apiKey, embeddingModel: model } = configService.get<EmbeddingModelConfig>('huggingface');
 return new HuggingFaceInferenceEmbeddings({
   apiKey,
   model,
 });
}

export function createTextEmbeddingModel(configService: ConfigService, embeddingModel: EmbeddingModels): Embeddings {
 if (embeddingModel === 'GEMINI_AI') {
   return createGeminiTextEmbeddingModel(configService);
 } else if (embeddingModel === 'HUGGINGFACE_INFERENCE') {
   return createHuggingfaceInferenceEmbeddingModel(configService);
 } else {
   throw new InternalServerErrorException('Invalid type of embedding model.');
 }
}

The createGeminiTextEmbeddingModel function instantiates and returns a Gemini text embedding model. Similarly, createHuggingfaceInferenceEmbeddingModel instantiates and returns a Hugging Face Inference embedding model. Finally, createTextEmbeddingModel is a factory function that creates the embedding model based on the embeddingModel flag.
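As a quick sanity check, the returned Embeddings instance can embed a test string. A sketch, assuming an injected ConfigService and any async context:

const embeddings = createTextEmbeddingModel(configService, 'GEMINI_AI');
// embedQuery returns a numeric vector; its length is the embedding dimension
const vector = await embeddings.embedQuery('What is Angular?');
console.log(vector.length); // 768 for text-embedding-004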

Create a configurable vector store retriever

Define a contract for the vector database services.

// application/interfaces/vector-database.interface.ts

import { VectorStore, VectorStoreRetriever } from '@langchain/core/vectorstores';
import { DatabaseConfig } from '../types/vector-store-config.type';

export interface VectorDatabase {
 init(config: DatabaseConfig): Promise<void>;
 asRetriever(): VectorStoreRetriever<VectorStore>;
}
// application/vector-databases/memory-vector-db.service.ts

import { VectorStore, VectorStoreRetriever } from '@langchain/core/vectorstores';
import { Injectable, Logger } from '@nestjs/common';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import { VectorDatabase } from '../interfaces/vector-database.interface';
import { DatabaseConfig } from '../types/vector-store-config.type';

@Injectable()
export class MemoryVectorDBService implements VectorDatabase {
 private readonly logger = new Logger(MemoryVectorDBService.name);
 private vectorStore: VectorStore;

 async init({ docs, embeddings }: DatabaseConfig): Promise<void> {
   this.logger.log('MemoryVectorStoreService init called');
   this.vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);
 }

 asRetriever(): VectorStoreRetriever<VectorStore> {
   return this.vectorStore.asRetriever();
 }
}

The MemoryVectorDBService implements the interface, persists the vectors into the memory store, and returns the vector store retriever.

// application/vector-databases/qdrant-vector-db.service.ts

import { VectorStore, VectorStoreRetriever } from '@langchain/core/vectorstores';
import { QdrantVectorStore } from '@langchain/qdrant';
import { Injectable, InternalServerErrorException, Logger } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { QdrantClient } from '@qdrant/js-client-rest';
import { VectorDatabase } from '../interfaces/vector-database.interface';
import { QdrantDatabaseConfig } from '../types/qdrant-database-config.type';
import { DatabaseConfig } from '../types/vector-store-config.type';

const COLLECTION_NAME = 'angular_evolution_collection';

@Injectable()
export class QdrantVectorDBService implements VectorDatabase {
 private readonly logger = new Logger(QdrantVectorDBService.name);
 private vectorStore: VectorStore;

 constructor(private configService: ConfigService) {}

 async init({ docs, embeddings }: DatabaseConfig): Promise<void> {
   this.logger.log('QdrantVectorStoreService init called');
   const { url, apiKey } = this.configService.get<QdrantDatabaseConfig>('qdrant');
   const client = new QdrantClient({ url, apiKey });
   const { exists: isCollectionExists } = await client.collectionExists(COLLECTION_NAME);
   if (isCollectionExists) {
     const isDeleted = await client.deleteCollection(COLLECTION_NAME);
     if (!isDeleted) {
       throw new InternalServerErrorException(`Unable to delete ${COLLECTION_NAME}`);
     }
     this.logger.log(`QdrantVectorStoreService deletes ${COLLECTION_NAME}. Result -> ${isDeleted}`);
   }

   const size = (await embeddings.embedQuery('test')).length;
   const isSuccess = await client.createCollection(COLLECTION_NAME, {
     vectors: { size, distance: 'Cosine' },
   });

   if (!isSuccess) {
     throw new InternalServerErrorException(`Unable to create collection ${COLLECTION_NAME}`);
   }

   this.vectorStore = await QdrantVectorStore.fromDocuments(docs, embeddings, {
     client,
     collectionName: COLLECTION_NAME,
   });
 }

 asRetriever(): VectorStoreRetriever<VectorStore> {
   return this.vectorStore.asRetriever();
 }
}

The QdrantVectorDBService implements the interface, persists the vectors into the Qdrant vector database, and returns the vector store retriever.

// application/vector-databases/create-vector-database.ts

import { InternalServerErrorException } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { VectorDatabasesType } from '../types/vector-databases.type';
import { MemoryVectorDBService } from './memory-vector-db.service';
import { QdrantVectorDBService } from './qdrant-vector-db.service';

export function createVectorDatabase(type: VectorDatabasesType, configService: ConfigService) {
 if (type === 'MEMORY') {
   return new MemoryVectorDBService();
 } else if (type === 'QDRANT') {
   return new QdrantVectorDBService(configService);
 }
 throw new InternalServerErrorException(`Invalid vector store type: ${type}`);
}

The function instantiates the database service based on the database type.
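The module file later imports these services from './application/vector-databases', which implies a barrel file. A minimal sketch of the assumed index.ts:

// application/vector-databases/index.ts (assumed barrel file)
export * from './create-vector-database';
export * from './memory-vector-db.service';
export * from './qdrant-vector-db.service';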

Create document chunks from an Angular PDF book

Copy the book to the assets folder

// application/loaders/pdf-loader.ts

import { PDFLoader } from '@langchain/community/document_loaders/fs/pdf';
import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';

const splitter = new RecursiveCharacterTextSplitter({
 chunkSize: 1000,
 chunkOverlap: 100,
});

export async function loadPdf(path: string) {
 const loader = new PDFLoader(path);

 const docs = await loader.load();
 const splitDocs = await splitter.splitDocuments(docs);
 return splitDocs;
}

The loadPdf function uses the PDF loader to load the PDF file and splits the document into many chunks.
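Each chunk is a LangChain Document. Roughly, a split chunk looks like the following; the exact metadata fields come from PDFLoader and the splitter, so treat this shape as illustrative:

{
  pageContent: 'Signals are a reactive primitive introduced in Angular...',
  metadata: {
    source: '/path/to/book.pdf',
    loc: { pageNumber: 42 },
  },
}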

// application/vector-store.service.ts

import { Embeddings } from '@langchain/core/embeddings';
import { VectorStore, VectorStoreRetriever } from '@langchain/core/vectorstores';
import { Inject, Injectable, Logger } from '@nestjs/common';
import path from 'path';
import { appConfig } from '~configs/root-path.config';
import { ANGULAR_EVOLUTION_BOOK, TEXT_EMBEDDING_MODEL, VECTOR_DATABASE } from './constants/rag.constant';
import { VectorDatabase } from './interfaces/vector-database.interface';
import { loadPdf } from './loaders/pdf-loader';

@Injectable()
export class VectorStoreService {
 private readonly logger = new Logger(VectorStoreService.name);

 constructor(
   @Inject(TEXT_EMBEDDING_MODEL) embeddings: Embeddings,
   @Inject(VECTOR_DATABASE) private dbService: VectorDatabase,
 ) {
   this.createDatabase(embeddings, this.dbService);
 }

 private async createDatabase(embeddings: Embeddings, dbService: VectorDatabase) {
   const docs = await this.loadDocuments();
   await dbService.init({ docs, embeddings });
 }

 private async loadDocuments() {
   const bookFullPath = path.join(appConfig.rootPath, ANGULAR_EVOLUTION_BOOK);
   const docs = await loadPdf(bookFullPath);
   this.logger.log(`number of docs -> ${docs.length}`);
   return docs;
 }

 asRetriever(): VectorStoreRetriever<VectorStore> {
   this.logger.log(`return vector retriever`);
   return this.dbService.asRetriever();
 }
}

The VectorStoreService stores the PDF book into the vector database, and returns the vector store retriever.
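The constants imported above live in application/constants/rag.constant.ts, which the post does not show. A sketch, where only the injection token names are grounded in the imports and the PDF filename is an assumption:

// application/constants/rag.constant.ts (sketch; the PDF path is an assumption)
export const ANGULAR_EVOLUTION_BOOK = 'assets/angular-evolution.pdf';
export const TEXT_EMBEDDING_MODEL = 'TEXT_EMBEDDING_MODEL';
export const VECTOR_DATABASE = 'VECTOR_DATABASE';
export const VECTOR_STORE_TYPE = 'VECTOR_STORE_TYPE';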

Make the module a dynamic module

import { DynamicModule, Module } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { TEXT_EMBEDDING_MODEL, VECTOR_DATABASE, VECTOR_STORE_TYPE } from './application/constants/rag.constant';
import { createTextEmbeddingModel } from './application/embeddings/create-embedding-model';
import { EmbeddingModels } from './application/types/embedding-models.type';
import { VectorDatabasesType } from './application/types/vector-databases.type';
import { createVectorDatabase, MemoryVectorDBService, QdrantVectorDBService } from './application/vector-databases';
import { VectorStoreTestService } from './application/vector-store-test.service';
import { VectorStoreService } from './application/vector-store.service';
import { VectorStoreController } from './presenters/http/vector-store.controller';

@Module({
 providers: [VectorStoreService, VectorStoreTestService, MemoryVectorDBService, QdrantVectorDBService],
 controllers: [VectorStoreController],
 exports: [VectorStoreService],
})
export class VectorStoreModule {
 static register(embeddingModel: EmbeddingModels, vectorStoreType: VectorDatabasesType): DynamicModule {
   return {
     module: VectorStoreModule,
     providers: [
       {
         provide: TEXT_EMBEDDING_MODEL,
         useFactory: (configService: ConfigService) => createTextEmbeddingModel(configService, embeddingModel),
         inject: [ConfigService],
       },
       {
         provide: VECTOR_STORE_TYPE,
         useValue: vectorStoreType,
       },
       {
         provide: VECTOR_DATABASE,
         useFactory: (type: VectorDatabasesType, configService: ConfigService) =>
           createVectorDatabase(type, configService),
         inject: [VECTOR_STORE_TYPE, ConfigService],
       },
     ],
   };
 }
}

The VectorStoreModule is a dynamic module; the embedding model and vector database are configurable. The static register method creates the text embedding model and the vector database based on its arguments.
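Switching implementations is then a one-line change wherever the module is imported. For example, to embed with Hugging Face and persist to Qdrant, the imports array of the consuming module (shown in full later) becomes:

// fragment of the importing module, e.g. RagTechBookModule
imports: [GroqModule, VectorStoreModule.register('HUGGINGFACE_INFERENCE', 'QDRANT')],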

Create a RAG module

The RAG module is responsible for creating a LangChain chain that asks the model to generate responses.

nest g mo ragTechBook
nest g s ragTechBook/application/rag --flat
nest g co ragTechBook/presenters/http/rag --flat

Create the RAG Service

// application/constants/prompts.constant.ts

import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';

const qaSystemPrompt = `You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.

{context}`;

export const qaPrompt = ChatPromptTemplate.fromMessages([
 ['system', qaSystemPrompt],
 new MessagesPlaceholder('chat_history'),
 ['human', '{question}'],
]);

const contextualizeQSystemPrompt = `Given a chat history and the latest user question
which might reference context in the chat history, formulate a standalone question
which can be understood without the chat history. Do NOT answer the question,
just reformulate it if needed and otherwise return it as is.`;

export const contextualizeQPrompt = ChatPromptTemplate.fromMessages([
 ['system', contextualizeQSystemPrompt],
 new MessagesPlaceholder('chat_history'),
 ['human', '{question}'],
]);

This constants file stores the prompts for the LangChain chains.

// application/chain-with-history/create-contextual-chain.ts

import { StringOutputParser } from '@langchain/core/output_parsers';
import { ChatGroq } from '@langchain/groq';
import { contextualizeQPrompt } from '../constants/prompts.constant';

export function createContextualizedQuestion(llm: ChatGroq) {
 const contextualizeQChain = contextualizeQPrompt.pipe(llm).pipe(new StringOutputParser());

 return (input: Record<string, unknown>) => {
   if ('chat_history' in input) {
     return contextualizeQChain;
   }
   return input.question;
 };
}

This function returns a chain that reformulates the latest question into a standalone question that can be understood without the chat history. For example, if the history is about Angular signals and the user asks "How do I test them?", the chain rewrites it to something like "How do I test Angular signals?".

// application/rag.service.ts

import { BaseMessage } from '@langchain/core/messages';
import { Runnable, RunnablePassthrough, RunnableSequence } from '@langchain/core/runnables';
import { ChatGroq } from '@langchain/groq';
import { Inject, Injectable } from '@nestjs/common';
import { formatDocumentsAsString } from 'langchain/util/document';
import { GROQ_CHAT_MODEL } from '~groq/application/constants/groq.constant';
import { VectorStoreService } from '~vector-store/application/vector-store.service';
import { createContextualizedQuestion } from './chain-with-history/create-contextual-chain';
import { qaPrompt } from './constants/prompts.constant';
import { ConversationContent } from './types/conversation-content.type';

@Injectable()
export class RagService {
 private chat_history: BaseMessage[] = [];

 constructor(
   @Inject(GROQ_CHAT_MODEL) private model: ChatGroq,
   private vectorStoreService: VectorStoreService,
 ) {}

 async ask(question: string): Promise<ConversationContent[]> {
   const contextualizedQuestion = createContextualizedQuestion(this.model);
   const retriever = this.vectorStoreService.asRetriever();

   try {
     const ragChain = RunnableSequence.from([
       RunnablePassthrough.assign({
         context: (input: Record<string, unknown>) => {
           if ('chat_history' in input) {
             const chain = contextualizedQuestion(input);
             return (chain as Runnable).pipe(retriever).pipe(formatDocumentsAsString);
           }
           return '';
         },
       }),
       qaPrompt,
       this.model,
     ]);

     const aiMessage = await ragChain.invoke({ question, chat_history: this.chat_history });
     this.chat_history = this.chat_history.concat(aiMessage);
     if (this.chat_history.length > 10) {
       this.chat_history.shift();
     }

     return [
       {
         role: 'Human',
         content: question,
       },
       {
         role: 'Assistant',
         content: (aiMessage.content as string) || '',
       },
     ];
   } catch (ex) {
     console.error(ex);
     throw ex;
   }
 }
}

The RagService is straightforward. The ask method submits the question to the chain and awaits the response. It extracts the content from the response, keeps the Human and AI messages in an in-memory chat history (capped at ten messages), and returns the conversation for the template engine to render.
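The ConversationContent type is not shown in the post; judging from how ask builds its return value, it is presumably:

// application/types/conversation-content.type.ts (inferred from usage)
export type ConversationContent = {
 role: 'Human' | 'Assistant';
 content: string;
};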

Add RAG Controller

import { IsNotEmpty, IsString } from 'class-validator';

export class AskDto {
 @IsString()
 @IsNotEmpty()
 query: string;
}
// presenters/http/rag.controller.ts

import { Body, Controller, Post } from '@nestjs/common';
import { RagService } from '~rag-tech-book/application/rag.service';
// the AskDto and toDivRow import paths are not shown in the post; adjust to your layout
import { AskDto } from './dto/ask.dto';
import { toDivRow } from './utils/to-div-row';

@Controller('rag')
export class RagController {
 constructor(private service: RagService) {}

 @Post()
 async ask(@Body() dto: AskDto): Promise<string> {
   const conversation = await this.service.ask(dto.query);
   return toDivRow(conversation);
 }
}

The RAG controller submits the query to the chain, gets the results, and sends HTML markup back to HTMX, which swaps it into the chat list.
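The toDivRow helper is not shown in the post. A minimal sketch that matches the chat-list markup in the template later in this post:

// presenters/http/utils/to-div-row.ts (assumed location)
import { ConversationContent } from '~rag-tech-book/application/types/conversation-content.type';

// Render each conversation entry as a flex row matching the #chat-list header
export function toDivRow(conversation: ConversationContent[]): string {
 return conversation
   .map(
     ({ role, content }) => `
 <div class="flex text-[#464646]">
   <span class="w-1/5 p-1 border border-solid border-[#464646]">${role}</span>
   <span class="w-4/5 p-1 border border-solid border-[#464646]">${content}</span>
 </div>`,
   )
   .join('');
}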

Import modules into RAG module

import { Module } from '@nestjs/common';
import { GroqModule } from '~groq/groq.module';
import { VectorStoreModule } from '~vector-store/vector-store.module';
import { RagService } from './application/rag.service';
import { RagController } from './presenters/http/rag.controller';

@Module({
 imports: [GroqModule, VectorStoreModule.register('GEMINI_AI', 'MEMORY')],
 providers: [RagService],
 controllers: [RagController],
})
export class RagTechBookModule {}

Import RagTechBookModule into AppModule

import { RagTechBookModule } from '~rag-tech-book/rag-tech-book.module';

@Module({
 imports: [
    // ... other imports
   RagTechBookModule,
 ],
 controllers: [AppController],
})
export class AppModule {}

Modify the App controller to render the Handlebars template

import { Controller, Get, Render } from '@nestjs/common';

@Controller()
export class AppController {
 @Get()
 @Render('index')
 getHello(): Record<string, string> {
   return {
     title: 'Angular Tech Book RAG',
   };
 }
}

The App controller tells the Handlebars template engine to render the index.hbs file.
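The bootstrap code that wires up Handlebars is not shown in the post. A minimal main.ts sketch, assuming the conventional views and public folders at the project root:

// main.ts (a sketch; folder locations are assumptions)
import { ValidationPipe } from '@nestjs/common';
import { ConfigService } from '@nestjs/config';
import { NestFactory } from '@nestjs/core';
import { NestExpressApplication } from '@nestjs/platform-express';
import * as compression from 'compression';
import * as hbs from 'hbs';
import { join } from 'path';
import { AppModule } from './app.module';

async function bootstrap() {
 const app = await NestFactory.create<NestExpressApplication>(AppModule);
 app.use(compression());
 app.useGlobalPipes(new ValidationPipe({ whitelist: true })); // validates AskDto
 app.useStaticAssets(join(__dirname, '..', 'public'));        // serves /images/spinner.gif
 app.setBaseViewsDir(join(__dirname, '..', 'views'));
 hbs.registerPartials(join(__dirname, '..', 'views', 'partials')); // {{> header }} / {{> footer }}
 app.setViewEngine('hbs');
 const configService = app.get(ConfigService);
 await app.listen(configService.get<number>('port'));
}
bootstrap();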

HTMX and the Handlebars template engine

This is a simple user interface that displays the conversation.

default.hbs
<!DOCTYPE html>
<html lang="en">
 <head>
   <meta charset="utf-8" />
   <meta name="description" content="Angular tech book RAG powed by gemma 2 LLM." />
   <meta name="author" content="Connie Leung" />
   <meta name="viewport" content="width=device-width, initial-scale=1.0" />
   <title>{{{ title }}}</title>
   <style>
     *, *::before, *::after {
         padding: 0;
         margin: 0;
         box-sizing: border-box;
     }
   </style>
   <script src="https://cdn.tailwindcss.com?plugins=forms,typography"></script>
 </head>
 <body class="p-4 w-screen h-screen min-h-full">
   <script src="https://unpkg.com/htmx.org@2.0.1" integrity="sha384-QWGpdj554B4ETpJJC9z+ZHJcA/i59TyjxEPXiiUgN2WmTyV5OEZWCD6gQhgkdpB/" crossorigin="anonymous"></script>
   <div class="h-full grid grid-rows-[auto_1fr_40px] grid-cols-[1fr]">
     {{> header }}
     {{{ body }}}
     {{> footer }}
   </div>
 </body>
</html>

The above is the default layout with a header, footer, and body. The body eventually displays the conversation between the AI and the human. The head section imports Tailwind CSS to style the HTML elements, and the body loads HTMX to interact with the server.

<div>
   <div class="mb-2 p-1 border border-solid border-[#464646] rounded-lg">
       <p class="text-[1.25rem] mb-2 text-[#464646] underline">Architecture</p>
       <ul>
           <li class="text-[1rem]">Chat Model: Groq</li>
           <li class="text-[1rem]">LLM: Gemma 2</li>
           <li class="text-[1rem]">Embeddings: Gemini AI Embedding / HuggingFace Embedding</li>
           <li class="text-[1rem]">Vector Store: Memory Vector Store / Qdrant Vector Store</li>
           <li class="text-[1rem]">Retriever: Vector Store Retriever</li>
       </ul>
   </div>
   <div id="chat-list" class="mb-4 h-[300px] overflow-y-auto overflow-x-auto">
       <div class="flex text-[#464646] text-[1.25rem] italic underline">
           <span class="w-1/5 p-1 border border-solid border-[#464646]">Role</span>
           <span class="w-4/5 p-1 border border-solid border-[#464646]">Result</span>
       </div>
   </div>
   <form id="rag-form" hx-post="/rag" hx-target="#chat-list" hx-swap="beforeend swap:1s">
       <div>
           <label>
               <span class="text-[1rem] mr-1 w-1/5 mb-2 text-[#464646]">Question: </span>
               <input type="text" name="query" class="mb-4 w-4/5 rounded-md p-2"
                   placeholder="Ask me something"
                   aria-placeholder="Placeholder to ask question to RAG"></input>
           </label>
       </div>
       <button type="submit" class="bg-blue-500 hover:bg-blue-700 text-white p-2 text-[1rem] flex justify-center items-center rounded-lg">
           <span class="mr-1">Send</span><img class="w-4 h-4 htmx-indicator" src="/images/spinner.gif">
       </button>
   </form>
</div>

A user can type a question in the text field and click the Send button. Submitting the form makes a POST request to /rag and appends the new conversation rows to the list.

This is the end of my first LangChain RAG application that uses the Gemma 2 model to generate responses.


Top comments (4)

Sakar:
Thank you for your time in creating this article. It helps me learn how to use RAG in Angular.

Connie Leung:
Thanks. This application does not use Angular. It uses HTMX to build simple user interfaces. I use the technology to learn Angular from a RAG application that consumes an Angular book.

Winzod AI:
Amazing!! Also folks, I came across this post and thought it might be helpful for you all! Reranker Rag.

Oli Guo:
Nice detailed tutorial on RAG tech, I read it from Google Dev Wechat and come back here for more understanding 😂.