Remember when serverless was going to revolutionize everything? Well, LLMs just delivered the killing blow.
Here's the thing: In an AI-assisted coding world, proprietary serverless platforms are dead weight. Why? Because LLMs understand Docker like they understand breathing, but they choke on your special snowflake Lambda configuration.
Let me explain why serverless was already a scam and how LLMs just made it ten times worse.
The Original Sin: Serverless Was Always Broken
Before we get to the LLM angle, let's recap why serverless was already a bad idea:
The Promise:
- No servers to manage!
- Infinite scale!
- Pay only for what you use!
The Reality:
- 15-minute execution limits
- Cold starts that make your app feel broken
- Surprise $10,000 bills
- Vendor lock-in so tight it hurts
- Debugging that makes you question your career choices
You know what doesn't have these problems? A container.
Enter LLMs: The Final Nail in the Coffin
Here's where it gets spicy.
When you're coding with Claude, ChatGPT, or Cursor, what works better?
Option A: "Deploy this to Docker"
```bash
docker build -t my-app .
docker run -p 3000:3000 my-app
```
Option B: "Deploy this to AWS Lambda with API Gateway, configure the execution role, set up the VPC endpoints, create a deployment package with the right runtime, configure the event source mappings..."
The LLM's response to Option B: confused screaming
Why LLMs Love Docker (And Hate Your Serverless Platform)
1. Documentation Density
Docker has been around since 2013. That's over a decade of:
- Stack Overflow answers
- GitHub examples
- Blog posts
- Official docs
- YouTube tutorials
AWS Lambda? Sure, there's documentation. But it's:
- Constantly changing
- Platform-specific
- Full of edge cases
- Buried in AWS's labyrinth of services
When an LLM trains on the internet, it sees 1000x more Docker examples than CloudFormation YAML nightmares.
2. Universal Patterns vs. Proprietary Nonsense
Docker is just Linux containers. The patterns are universal:
- Environment variables work the same everywhere
- Volumes are just mounted directories
- Networking is standard TCP/IP
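To make that concrete, here's a single docker run invocation that exercises all three patterns - an environment variable, a volume mount, and a port mapping. The image name, connection string, and paths are placeholders, not anything from a real project:

```bash
# Config via environment, a host directory mounted as a volume,
# and a published TCP port - the same three knobs on every platform.
docker run \
  -e DATABASE_URL="postgres://user:pass@db:5432/app" \
  -v "$(pwd)/data:/app/data" \
  -p 3000:3000 \
  my-app
```

Those flags mean the same thing on a laptop, a VPS, or a Kubernetes node - exactly the kind of consistency an LLM can pattern-match on.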
Serverless? Every platform invents its own:
- Event formats
- Configuration syntax
- Deployment procedures
- Debugging tools
- Billing models
LLMs can't keep up with this Tower of Babel.
3. Local Development = Better LLM Assistance
Watch this:
Me: "Help me debug why my container isn't connecting to Redis"
LLM: "Let's check your docker-compose.yml, ensure the services are on the same network, verify the connection string..."
vs.
Me: "Help me debug why my Lambda can't connect to ElastiCache"
LLM: "First, check your VPC configuration, then the security groups, subnet associations, NAT gateway, execution role permissions, and... wait, are you using VPC endpoints? What about the Lambda ENI lifecycle? Did you enable DNS resolution in your VPC?"
head explodes
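For perspective, the docker-compose.yml that whole debugging session revolves around is about this big - a minimal sketch with made-up image names, relying on the default network Compose creates so the app can reach Redis by its service name:

```yaml
# docker-compose.yml - minimal sketch; the app image and port are assumptions.
# Both services join Compose's default network, so "redis" resolves as a hostname.
services:
  app:
    image: my-app
    ports:
      - "3000:3000"
    environment:
      REDIS_URL: redis://redis:6379
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```

Two services, one connection string, and the hostname is just the service name. No VPC, no ENI lifecycle, no security groups in sight.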
"But Serverless Scales!"
So does Kubernetes. So does Docker Swarm. So does literally any container orchestrator.
But here's the thing: with containers + LLMs, you can actually implement that scaling:
Me: "Add horizontal autoscaling to my Docker Compose setup"
LLM: "Here's a complete docker-compose.yml with scaling configuration, health checks, and load balancing..."
vs.
Me: "Add autoscaling to my Lambda"
LLM: "First, create an Application Auto Scaling target, then define a scaling policy using CloudWatch metrics, but make sure your concurrent execution limits don't interfere with account limits, and don't forget about reserved concurrency vs provisioned concurrency..."
Which one are you actually going to implement correctly?
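To be fair to the container side, here's roughly what that first answer looks like - a sketch, assuming a stateless web service behind an nginx proxy (service and image names invented; the /health endpoint is an assumption):

```yaml
# docker-compose.yml - scaling sketch, not a production config.
services:
  web:
    image: my-app
    expose:
      - "3000"
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 10s
      retries: 3
  proxy:
    image: nginx:alpine   # still needs a small config proxying to web:3000
    ports:
      - "80:80"
    depends_on:
      - web
```

Scale it with one flag: `docker compose up -d --scale web=3`. True *auto*scaling needs an orchestrator (Kubernetes, Swarm) on top, but the point stands: an LLM can generate every line of this in one shot.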
Breaking Free: The Container + LLM Combo
Here's your escape plan:
- Pick boring technology: Docker, PostgreSQL, Redis
- Use standard patterns: REST APIs, background workers, cron jobs
- Deploy anywhere: VPS, Kubernetes, even Sliplane (yes, shameless plug)
- Let LLMs actually help: They understand these tools
Your AI assistant becomes a force multiplier instead of a confused intern.
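If you want a concrete starting point, the whole "boring" setup begins with a Dockerfile an LLM will get right on the first try - a minimal sketch assuming a Node.js app with a server.js entrypoint (swap in your own runtime):

```dockerfile
# Dockerfile - minimal sketch; base image, port, and entrypoint are
# assumptions for a typical Node.js app.
FROM node:20-alpine
WORKDIR /app
# Copy manifests first so the dependency layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

That same image runs unchanged on a VPS, in Kubernetes, or on Sliplane - no per-platform deployment package, no runtime matrix to keep in sync.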
The Future Is Boring (And That's Beautiful)
We're entering an era where AI can write most of our code. But it can only write code for platforms it understands.
Docker is boring. PostgreSQL is boring. Redis is boring.
You know what? Boring means:
- Documented
- Predictable
- LLM-friendly
- Actually works
Serverless is "exciting": excitingly broken, excitingly expensive, excitingly impossible to debug.
TL;DR
Serverless was already a questionable choice. Now that we code with LLMs, it's practically sabotage.
Your AI assistant can spin up a complete containerized application in seconds. But ask it to debug your Lambda cold start issues? Good luck.
The writing's on the wall: In an LLM-powered development world, proprietary platforms are dead weight. Stick to technologies with deep documentation, wide adoption, and standard patterns.
Or keep fighting with CloudFormation while your competitors ship features. Your choice.
Cheers,
Jonas, Co-Founder of sliplane.io
Top comments (20)
Nice satire.
Oh, you mean it?
Replacing cynicism with experience:

"15-minute execution limits"
That's the point of one-shot calls.

"Cold starts that make your app feel broken"
With Go they're below 50ms, and there's SnapStart for Java:
tecracer.com/blog/2023/07/custom-r...

"Surprise $10,000 bills"
With scaled instances it would cost about the same; just limit concurrent executions, the same way you'd cap scaling anywhere else.

"Vendor lock-in so tight it hurts"
There are ways around it if your code is solid. Or use tecracer.com/blog/2025/06/the-arch...

"Debugging that makes you question your career choices"
With unit tests, and now debugging inside running Lambdas, that's no problem either:
docs.aws.amazon.com/toolkit-for-vs...

Me: "Add autoscaling to my Lambda"
LLM: "It's built into the service; you don't need to do anything."
docs.aws.amazon.com/lambda/latest/...
This must be a joke.
What part? :D
Cloud platforms are much, much more than just running Docker services.
I think the big take-away here is really "Run your 'serverless' service collections in Docker to simplify your life." Because what is an LLM in this context?
It's a glorified search agent. You're asking it to Google the answer for you.
The other thing here is to ask yourself: "Do I really need to structure this in a 'serverless' way? Or is it cheaper to do it with more control and fewer limitations?"
So many things wrong here:
I understand that people tried to shove everything into AWS Lambda, which was a huge mistake, but that doesn't mean we should throw it all away. There are legitimate use-cases for serverless.
But I understand, this isn't a real article; it's your way to push Sliplane, your own Docker-based service. I'm sure once you add some serverless functionality you'll start singing its praises.
Given the quality of this piece, I wonder if it will help your company, or hurt it.
Dev.to is about more than getting a backlink or trying to make a buck. Be better!
I think serverless has its niche, but running things in Lambdas is not at all trivial
The way you formulated it might be a bit extreme (sure to provoke clicks and comments, lol) but that doesn't mean it's not true - I think it is ...
At some point I also jumped on the "serverless"/lambda bandwagon, but I stopped being a "believer" a while ago - for sure there are some good use cases, but it's no longer my "religion", and in most cases there are probably better ways, as you pointed out ...
It's as with many hypes and overhyped things - NoSQL, GraphQL, Blockchain, Serverless - all of those haven't been the revolution they were made out to be - useful in specific "niche" scenarios, but none of them truly going "mainstream" :)
If you're hating on ANY tech because your reliance on LLMs to properly use it is making you upset, then the tech isn't the problem — you're just a bad developer! This is exactly what the industry is starting to catch up with and being careful about with new hires.
Getting opinionated due to LLM work is career suicide right now. For anyone reading this, do the opposite of what this person is saying.
Serverless is actually still very much alive and cost-saving if you actually know how & when to use it. This is the first time I saw a ridiculous rant like this one — but it makes sense considering the shameless plug.
LLMs thrive on open standards and universal tooling, exactly why we built haveto.com to run AI workloads directly on-chain using containers, not proprietary setups.
No cold starts. No vendor lock-in. No YAML puzzles.
Just real AI compute, fully transparent, and deployable like any standard container.
If you’re tired of fighting your stack instead of shipping features, you'll feel right at home here.
Let’s build smarter, not more complex.
To write an AWS Lambda (or a Supabase edge function, or a Netlify edge function, or whatever), you just need to know how to write a handler. To use Docker, you need to be a Linux expert, just like with bare-metal servers. Lambdas are for small teams; Docker is for teams with a dedicated DevOps guy onboard, I guess.
There's no relation between the rise of LLMs and the end of serverless. LLMs can easily create Terraform and/or CDK scripts that deploy straight to AWS, or to any cloud provider for that matter. Abstractions exist, and LLMs will learn on those abstractions. Docker is an abstraction in itself, and so are the tools that deploy to the cloud.