Large language models are powerful tools with extensive capabilities; nonetheless, they grapple with a distinct limitation known as the context win...
The maintainers of the Langchain documentation should link to your useful explanation.
Thanks!
I totally agree! The LangChain documentation just sucks.
Thanks for your kind words 🥰 Who knows they might.
What about the chunk_overlap param?
The chunk_overlap parameter determines how much the chunks overlap with each other. For example, let's split your comment into three chunks:
What about
the chunk_
overlap param?
Let's overlap each chunk with 5 characters:
What about the
about the chunk_
chunk_overlap param?
If we didn't use chunk overlapping, your comment would have lost its meaning when split.
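The overlap behavior above can be sketched in a few lines of plain Python. This is a simplified character-level illustration of the idea, not LangChain's actual splitter implementation:

```python
def split_with_overlap(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    # Each new chunk starts (chunk_size - chunk_overlap) characters
    # after the previous one, so consecutive chunks share
    # chunk_overlap characters at their boundary.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(split_with_overlap("abcdefghij", chunk_size=4, chunk_overlap=2))
# → ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

With chunk_overlap=0 the chunks would be disjoint, and any word that happened to fall on a boundary would be cut in two.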
Thanks! That makes sense, but what value should I use if, for instance, I need to save the texts in a vector DB later for retrieval-augmented generation (RAG)?
Does it matter? If this is significant I'd add this information to the article.
Thanks again.
It all depends on your data and what you are trying to achieve. Augmenting LLMs with external knowledge is still in its infancy, so experiment with different parameters and see how your LLM performs during RAG.
Something doesn't quite work right: after splitting, some words throughout my text are broken apart with a space, turning each of them into two non-words. There are quite a few characters in between, so it isn't frequent, but in a large body of text these add up. I am concerned about the detrimental impact on the vector embeddings and retrieval.
Splitting is far from perfect. Hopefully more efficient techniques will be developed.
Could I just make the chunk_size really big, like 1000 or even 2000? Are there downsides to doing that?
Yes, you could. If your LLM has a very large context window, it won't be a problem.
Thank you for your reply. How do I find out how large my LLM's context window is? How can I increase it?
great explanation!
Thanks a lot
You should write more posts like this!
For someone new to LangChain and text split, this post really went deep on the subject.
Thanks!
Thank you for your nice words ☺️
Thanks a lot for the detailed explanation. I wonder why this isn't linked from or published on the LangChain blog.
I am wondering why '.' is not part of the default separators. It seems to me that it would be effective for separating sentences.
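One plausible reason (my guess, not something stated in the LangChain docs): a bare '.' also appears inside decimals and abbreviations, so splitting on it naively cuts through non-sentence boundaries:

```python
text = "The price is 3.14. See e.g. the docs."
print(text.split("."))
# → ['The price is 3', '14', ' See e', 'g', ' the docs', '']
```

If your data is clean prose, RecursiveCharacterTextSplitter does accept a custom separators list, so you can add '. ' (with the trailing space) yourself.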
very good, thank you
This was very helpful. Thanks for the detailed explanation!
It's really a valuable post.
Good write-up, with insightful knowledge on the implementation side.
Great explanation, many thanks!
Great and direct explanation!