DEV Community

# localllm

Posts

- [Day 2] I Trained an AI on 22 Photos of My Cat — Now It Draws Her in Any Scene (6 min read)
- Devstral 2: Run Mistral's Open Coding Agent Locally (9 min read)
- [Day 1] DGX Spark Came Home — I Made It Draw a Cat (6 min read)
- My AI Experience in Russia as a European 🤯 (4 min read)
- How to Run Qwen 3.6 Locally - 27B Dense, 35B MoE, and Coding Variants Setup Guide (3 min read)
- Run Gemma 3 Locally on Windows: The VRAM Guide Nobody Gave You [2026] (8 min read)
- Local LLM for Log Analysis: Privacy-First Debugging with Ollama (7 min read)
- Locally Uncensored v2.4.0 — Settings Polish, Linux Drag Fix, and Configurable HuggingFace Path (3 min read)
- GPU Prices Up 48% in Two Months. I Run LLMs in My Garage. (3 min read)
- Local LLM on NVIDIA GPU vs Cloud API: A Real Cost Analysis (5 min read)
- How I Stopped GGUF Models From Crashing My GPU: A Pre-flight VRAM Check (4 min read)
- Q4 KV Cache Fit 32K Context into 8GB VRAM — Only Math Broke (8 min read)
- GitHub Copilot with Ollama: Agentic AI Models Running Locally in Your IDE (11 min read)
- Portable LLM on a USB Stick: I Built Offline AI That Runs Anywhere [2026 Guide] (7 min read)
- Run a Local LLM on Android: What RAM Tier You Need and Which Models Actually Work (2 min read)