PrivateGPT is a Python script for interrogating local files using GPT4All, an open-source large language model. It is pretty straightforward to set up:
- Clone the repo.
- Download the LLM - about 10GB - and place it in a new folder called `models`.
- Place the documents you want to interrogate into the `source_documents` folder - by default, there's a text of the last US State of the Union in there.
- Run the `ingest.py` script - this can take a long time, though on this MacBook M1 it took 2 minutes.
- Run the `privateGPT.py` script and you get a prompt in your Terminal:
I asked it what the US president had to say about the situation in Ukraine, and it gave me a synopsis along with where in the document the information came from.
Neat!
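Under the hood, the query step is a retrieval-augmented chain. Here is a minimal sketch of that flow, assuming the LangChain wrappers the project is built on; the model filename, embedding model, and database directory below are assumptions and vary by version:

```python
# Minimal sketch of privateGPT's query flow, built on LangChain.
# Model filename, embedding model, and persist directory are assumptions.
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import GPT4All
from langchain.vectorstores import Chroma

# Reopen the vector store that ingest.py populated
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
db = Chroma(persist_directory="db", embedding_function=embeddings)

# Load the local GPT4All model from the models folder
llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin", backend="gptj")

# "stuff" pastes the retrieved chunks straight into the prompt
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=db.as_retriever(),
    return_source_documents=True,
)

res = qa("What did the president say about Ukraine?")
print(res["result"])
for doc in res["source_documents"]:
    print(doc.metadata["source"])
```

Because the chain returns the retrieved chunks alongside the answer, it can point you at the exact passages the response was drawn from.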
Top comments (14)
Can I use this to run my own ChatGPT on a webpage?
You could, but there are easier implementations, like this one: github.com/mlc-ai/web-llm
I know it's probably easy for most people, but I went into the repository and didn't understand it. If you can help me more, I will be very grateful :c a lot of text in English confuses me.
I can't see why not. What would prevent that? It's a Python script. Nothing stops you from running it server-side in response to requests and delivering responses containing the results.
I'll try, I'm a bit of a newbie to this
Same, just learning new things.
I want to develop a web-based AI chatbot at my university that can help any student who needs information about university procedures.
Use Python with FastAPI.
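A minimal sketch of that idea, assuming FastAPI and pydantic are installed; the `answer_query` helper is a placeholder for whatever query logic you lift out of `privateGPT.py`, not something the repo provides:

```python
# Minimal FastAPI wrapper sketch. answer_query() is a hypothetical
# placeholder for privateGPT's query logic; it is not part of the repo.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Question(BaseModel):
    prompt: str

def answer_query(prompt: str) -> str:
    # Hypothetical helper: call the RetrievalQA chain here.
    raise NotImplementedError

@app.post("/ask")
def ask(question: Question) -> dict:
    return {"answer": answer_query(question.prompt)}
```

Assuming the file is named `main.py`, you'd run it with `uvicorn main:app` and POST JSON like `{"prompt": "..."}` to `/ask`.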
Hi! Great article.
I'm using this to make visit reports of customer meetings from audio, emails, and documents to give the full picture.
Is there a way to create a Python pipeline with pre-made prompts and get the responses saved to a txt file? Basically an automation of PrivateGPT.
Thanks!
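That should be possible with a plain loop around the same chain. A minimal sketch, assuming the LangChain components privateGPT is built on; the prompts, model filename, and output filename are placeholders:

```python
# Batch pipeline sketch: run pre-made prompts through a RetrievalQA chain
# (built as in the sketch in the article above) and save the answers to a
# text file. Prompts, model filename, and output filename are placeholders.
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import GPT4All
from langchain.vectorstores import Chroma

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
db = Chroma(persist_directory="db", embedding_function=embeddings)
llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin", backend="gptj")
qa = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff", retriever=db.as_retriever()
)

PROMPTS = [
    "Summarize the customer meeting.",
    "List all action items mentioned in the emails.",
]

with open("responses.txt", "w", encoding="utf-8") as out:
    for prompt in PROMPTS:
        out.write(f"Q: {prompt}\nA: {qa.run(prompt)}\n\n")
```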
This is very interesting, I will give it a try and post the results.
This is pretty fun to play around with, thanks for sharing!
Can you make it search your Google Drive?
This code relies on local files in one of a set of understood formats (text, markdown, etc). You could extend it to pull down your Google Drive content, or sync it to a local directory first.
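For the sync-first route, here is a minimal sketch using rclone, assuming a remote named `gdrive` has already been set up via `rclone config`; the Drive folder name is a placeholder:

```python
# Sketch: mirror a Google Drive folder into source_documents with rclone,
# then re-run ingest.py to index the new files. Assumes a remote named
# "gdrive" was configured via `rclone config`; "Documents" is a placeholder.
import subprocess

subprocess.run(
    ["rclone", "sync", "gdrive:Documents", "source_documents"],
    check=True,
)
subprocess.run(["python", "ingest.py"], check=True)
```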
I've read somewhere that it only works on CPU because of llama.cpp. Can't we run it in a GPU Linux environment?