We are thrilled to team up with Timescale to bring the community our newest challenge. We think you'll like this one.
Running through November 10,...
This looks awesome! I love hackathons!
Good luck everyone!
I don't think my 5-year-old laptop with 512 MB of GPU RAM will be able to run a 7B LLM 😂
So many questions since I'm not familiar with LLMs at all:
No need to write in Python. You can create a Postgres function and invoke it as a SQL statement from any PG client.
This means the chat response is generated by pgai and you get it back through Postgres.
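For example, a minimal sketch along these lines (assuming the pgai extension is installed as `ai`, exposes an `ai.ollama_generate()` function, and can reach a local Ollama instance; the `tinyllama` model name is just a placeholder):

```sql
-- Assumes pgai is installed and Ollama is running locally (both are assumptions here).
CREATE EXTENSION IF NOT EXISTS ai CASCADE;

-- Wrap the pgai call in an ordinary Postgres function...
CREATE OR REPLACE FUNCTION ask_llm(question text)
RETURNS text
LANGUAGE sql
AS $$
    -- ai.ollama_generate() is expected to return the model's JSON reply;
    -- ->> 'response' pulls out the generated text.
    SELECT ai.ollama_generate('tinyllama', question)->>'response';
$$;

-- ...and invoke it as a plain SQL statement from any PG client:
SELECT ask_llm('Summarize what the pgai extension does in one sentence.');
```

So the whole round trip stays inside Postgres; the client only issues SQL.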
Is this compatible with the Open Source AI Definition just ratified by the OSI, please? I don't see Ollama on the list of endorsements there.
That depends on the model you are using. Ollama is basically just a wrapper for llama.cpp, and llama.cpp serves as the host for inference with an LLM. Both Ollama and llama.cpp are published under open source licenses, but there are models which are not open source.
Yes, so the answer is (I think) that this contest is not compatible with the OSI Open Source AI Definition. Thanks.
If we want to use an Ollama model, does that mean we have to host the model ourselves and open access to the public?
Did you get an answer to your question? I think we have to deploy the model, otherwise how would the judges try them out?
YAY!!!
hell yeah
I built a CLI that needs to connect to TimescaleDB and OpenAI. How can I share the credentials with the judges? Otherwise, they can't run the app, and I would rather not publish them to my repo.
@jess Can you help me please?
This is great, all the best for all participants.
Hey guys, please fix the URL for dev.to/timescale
Thanks, fixed!
Pre-built UI components to help you create stunning websites in no time with bardui.com.
👍👍sir
The Discord invite says it's invalid?
nvm it worked now.
❤️
wow