# llamacpp
Jan v0.5.15: More control over llama.cpp settings, advanced hardware control, and more
Emre Can Kartal · Feb 18 · 2 min read
#ai #llm #opensource #llamacpp
Run a LLM Locally on an Intel Mac with an eGPU
Ankit Babber · Feb 11 · 9 min read
#llm #llamacpp #egpu #ai
🎯 Run Qwen2-VL on CPU Using GGUF model & llama.cpp
Mai Chi Bao · Jan 13 · 3 min read
#mrzaizai2k #ai #cpu #llamacpp
10 reactions · 1 comment
Build your own VS Code extension
Simon Pfeiffer for Codesphere Inc. · Jun 10 '24 · 15 min read
#tutorial #vscode #extension #llamacpp
23 reactions · 2 comments