DEV Community

# llm

Posts

I thought I had a bug

Comments
9 min read
Aria: Building an AI Customer Support Agent with Persistent Memory

Comments
8 min read
🐘 The Pink Elephant Problem in AI: Why “Don’t Do This” Makes LLMs Do Exactly That

Comments
3 min read
Fixing blind spots in code reviews with Hindsight memory

Comments
2 min read
AI Is Bad at Disagreeing. I Spent Weeks Trying to Fix That.

Comments
5 min read
Multi-Agent Memory in 2026: 5 Recent Posts, One Pattern, One Spec

Comments
5 min read
I Wrote a Python Interpreter in Python. What I Learned Has Nothing to Do with Python

Comments
8 min read
Five habits that separate the operator from the vibe-coder

Comments
6 min read
Claude's default teaching shape has no return: the 5-node loop that fixes it

Comments
6 min read
Gate Zero: stop unfalsifiable prompts before they canonicalize as specs

Comments
5 min read
Eval-driven development for a local-LLM agent: how I shipped Lore 0.2.0 with confidence

Comments 1
6 min read
Your Claude Code rules are a liability you'll never audit

Comments
6 min read
Qwen 3.6 Ollama Release, Consumer GPU Benchmarks, GGUF Quantization Fixes

Comments
4 min read
Claude Opus 4.7 and the Beginning of the End of AI Abundance

Comments
8 min read
Traditional Quantization vs 1.58-Bit Ternary Models: A Practical Comparison

Comments 1
5 min read