In my previous post, Use case for RAG and LLM, my sample code only used basic string manipulation to build the prompt. In this post I will show how to use the LangChain Expression Language (LCEL) instead.
LCEL offers a more effective alternative to string manipulation. Here is the step-by-step conversion:
- Instead of using Python string interpolation:
prompt = f"I need help on {context}"
use the same string, without the f-prefix, inside a chat prompt template:
prompt = ChatPromptTemplate.from_template("I need help on {context}")
- We can directly use the vector store as a retriever within a sub-chain, simplifying the search and integration process.
retriever = vector_store.as_retriever(search_type='similarity')
context_subchain = itemgetter('user_query') | retriever
- Finally, combine the prompt, retriever, and output parsing into a chain. RunnablePassthrough is used for user_query, which is supplied when the chain is invoked; itemgetter is used for llm_personality, which is substituted from a dictionary passed at invocation. The sketch after this list shows how each piece behaves when invoked on its own.
chain = (
    {
        'context': context_subchain,
        'user_query': RunnablePassthrough(),
        'llm_personality': itemgetter('llm_personality')
    }
    | prompt
    | model
    | StrOutputParser()
)
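To make the data flow concrete, here is a small sketch of how each piece behaves when invoked on its own. The query and personality strings are made up for illustration and are not part of the original post.

# The chat prompt template defers substitution until it is invoked:
prompt.invoke({'context': 'vector databases'})
# -> a ChatPromptValue whose message reads "I need help on vector databases"

# The retriever sub-chain extracts the query and returns the matching documents:
docs = context_subchain.invoke({'user_query': 'What is LCEL?'})
# -> a list of Document objects from the vector store

# The full chain takes a single dictionary and returns the parsed model output:
answer = chain.invoke({
    'user_query': 'What is LCEL?',
    'llm_personality': 'a concise technical mentor'
})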
Here is the complete sample code written using LCEL:
# imports (module paths may vary slightly between LangChain versions)
from operator import itemgetter
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain_core.runnables import RunnablePassthrough

template_system = """
Use the following information to answer the user's query:
{context}
"""
template_user = """
User's query:
{user_query}
"""
prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(template_system),
    HumanMessagePromptTemplate.from_template(template_user)
])
retriever = vector_store.as_retriever(search_type='similarity')
context_subchain = itemgetter('user_query') | retriever
chain = (
    {
        'context': context_subchain,
        'user_query': RunnablePassthrough(),
        'llm_personality': itemgetter('llm_personality')
    }
    | prompt
    | model
    | StrOutputParser()
)
response = chain.invoke({'user_query': user_query, **prompt_placeholders})
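The sample assumes that vector_store, model, user_query, and prompt_placeholders are already defined. Here is one possible minimal setup; the chat model, embeddings, and placeholder values are assumptions for illustration, not part of the original code.

# Illustrative setup only; package names and models may differ in your project.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_chroma import Chroma

model = ChatOpenAI(model='gpt-4o-mini')
# vector_store would be populated with your own documents elsewhere
vector_store = Chroma(embedding_function=OpenAIEmbeddings())

user_query = 'How do I configure retries for the API client?'
prompt_placeholders = {'llm_personality': 'a patient senior engineer'}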
You can see a more complete commit diff of the change from the old string manipulation to LCEL.