Jonas Scholz

Posted on • Originally published at sliplane.io

LLMs are the End of Serverless

Remember when serverless was going to revolutionize everything? Well, LLMs just delivered the killing blow.

Here's the thing: In an AI-assisted coding world, proprietary serverless platforms are dead weight. Why? Because LLMs understand Docker like they understand breathing, but they choke on your special snowflake Lambda configuration.

Let me explain why serverless was already a scam and how LLMs just made it ten times worse.


The Original Sin: Serverless Was Always Broken

Before we get to the LLM angle, let's recap why serverless was already a bad idea:

The Promise:

  • No servers to manage!
  • Infinite scale!
  • Pay only for what you use!

The Reality:

  • 15-minute execution limits
  • Cold starts that make your app feel broken
  • Surprise $10,000 bills
  • Vendor lock-in so tight it hurts
  • Debugging that makes you question your career choices

You know what doesn't have these problems? A container.


Enter LLMs: The Final Nail in the Coffin

Here's where it gets spicy.

When you're coding with Claude, ChatGPT, or Cursor, what works better?

Option A: "Deploy this to Docker"

docker build -t my-app .
docker run -p 3000:3000 my-app
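And the Dockerfile behind those two commands is just as mundane. A minimal sketch, assuming a Node.js app with a start script that listens on port 3000 (matching the `-p 3000:3000` above):

```dockerfile
# Minimal sketch: assumes a Node.js app with "npm start" defined,
# listening on port 3000. Any LLM has seen thousands of these.
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```

Eight lines, no IAM roles, no deployment packages.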

Option B: "Deploy this to AWS Lambda with API Gateway, configure the execution role, set up the VPC endpoints, create a deployment package with the right runtime, configure the event source mappings..."

The LLM's response to Option B: confused screaming


Why LLMs Love Docker (And Hate Your Serverless Platform)

1. Documentation Density

Docker has been around since 2013. That's over a decade of:

  • Stack Overflow answers
  • GitHub examples
  • Blog posts
  • Official docs
  • YouTube tutorials

AWS Lambda? Sure, there's documentation. But it's:

  • Constantly changing
  • Platform-specific
  • Full of edge cases
  • Buried in AWS's labyrinth of services

When an LLM trains on the internet, it sees 1000x more Docker examples than CloudFormation YAML nightmares.

2. Universal Patterns vs. Proprietary Nonsense

Docker is just Linux containers. The patterns are universal:

  • Environment variables work the same everywhere
  • Volumes are just mounted directories
  • Networking is standard TCP/IP

Serverless? Every platform invents its own:

  • Event formats
  • Configuration syntax
  • Deployment procedures
  • Debugging tools
  • Billing models

LLMs can't keep up with this Tower of Babel.

3. Local Development = Better LLM Assistance

Watch this:

Me: "Help me debug why my container isn't connecting to Redis"

LLM: "Let's check your docker-compose.yml, ensure the services are on the same network, verify the connection string..."

vs.

Me: "Help me debug why my Lambda can't connect to ElastiCache"

LLM: "First, check your VPC configuration, then the security groups, subnet associations, NAT gateway, execution role permissions, and... wait, are you using VPC endpoints? What about the Lambda ENI lifecycle? Did you enable DNS resolution in your VPC?"

head explodes

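The container-side answer really is that short. A minimal Compose sketch (service names are hypothetical) where the app reaches Redis by service name on the default Compose network:

```yaml
# Minimal sketch: both services join the default Compose network,
# so the app resolves Redis simply by its service name "redis".
services:
  app:
    build: .
    environment:
      REDIS_URL: redis://redis:6379   # hostname = service name
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```

If the connection fails, the debugging surface is exactly what you see in this file: one network, one hostname, one URL.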


"But Serverless Scales!"

So does Kubernetes. So does Docker Swarm. So does literally any container orchestrator.

But here's the thing: with containers + LLMs, you can actually implement that scaling:

Me: "Add horizontal autoscaling to my Docker Compose setup"

LLM: "Here's a complete docker-compose.yml with scaling configuration, health checks, and load balancing..."

vs.

Me: "Add autoscaling to my Lambda"

LLM: "First, create an Application Auto Scaling target, then define a scaling policy using CloudWatch metrics, but make sure your concurrent execution limits don't interfere with account limits, and don't forget about reserved concurrency vs provisioned concurrency..."

Which one are you actually going to implement correctly?
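For the container side, "scaling configuration" fits in a few lines. A hedged sketch using Compose's `deploy.replicas` (supported by `docker compose up` in Compose v2; the health check assumes `wget` exists in the image and a `/health` endpoint, both illustrative):

```yaml
# Minimal sketch: three replicas of the app behind a reverse proxy
# (proxy not shown). Healthcheck assumes wget and a /health route.
services:
  app:
    build: .
    deploy:
      replicas: 3
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 10s
      timeout: 3s
      retries: 3
```

Or skip the file entirely: `docker compose up -d --scale app=3` does the same thing from the command line.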


Breaking Free: The Container + LLM Combo

Here's your escape plan:

  1. Pick boring technology: Docker, PostgreSQL, Redis
  2. Use standard patterns: REST APIs, background workers, cron jobs
  3. Deploy anywhere: VPS, Kubernetes, even Sliplane (yes, shameless plug)
  4. Let LLMs actually help: They understand these tools

Your AI assistant becomes a force multiplier instead of a confused intern.
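Steps 1 and 2 fit in a single file. A hedged sketch of the whole "boring stack" as one Compose file (service names, credentials, and the `worker.js` entrypoint are all illustrative):

```yaml
# Minimal sketch of the boring stack: API, background worker,
# PostgreSQL, Redis. Credentials are placeholders, not real config.
services:
  api:
    build: .
    ports: ["3000:3000"]
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
      REDIS_URL: redis://cache:6379
  worker:
    build: .
    command: ["node", "worker.js"]   # hypothetical worker entrypoint
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
  cache:
    image: redis:7-alpine
```

Every line of this is something an LLM has seen ten thousand times, which is precisely the point.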


The Future Is Boring (And That's Beautiful)

We're entering an era where AI can write most of our code. But it can only write code for platforms it understands.

Docker is boring. PostgreSQL is boring. Redis is boring.

You know what? Boring means:

  • Documented
  • Predictable
  • LLM-friendly
  • Actually works

Serverless is "exciting": excitingly broken, excitingly expensive, excitingly impossible to debug.


TL;DR

Serverless was already a questionable choice. Now that we code with LLMs, it's practically sabotage.

Your AI assistant can spin up a complete containerized application in seconds. But ask it to debug your Lambda cold start issues? Good luck.

The writing's on the wall: In an LLM-powered development world, proprietary platforms are dead weight. Stick to technologies with deep documentation, wide adoption, and standard patterns.

Or keep fighting with CloudFormation while your competitors ship features. Your choice.

Cheers,

Jonas, Co-Founder of sliplane.io

Top comments (20)

Gernot Glawe

Nice satire,
Oh you mean it?
Replacing cynicism with experience:

Lars Rye Jeppesen

This must be a joke

Jonas Scholz

What part? :D

Lars Rye Jeppesen

Cloud platforms are much much more than just running Docker services.

STrRedWolf

I think the big take-away here is really "Run your 'serverless' service collections in Docker to simplify your life." Because what is an LLM in this context?

It's a glorified search agent. You're asking it to Google the answer for you.

The other thing here is to ask yourself "Do I really need to structure this in a 'serverless' way? Or is it cheaper to do it with more control and less limitations?"

James Luterek

So many things wrong here:

  • There is a universal cloud event format - cloudevents.io/
  • Building applications are more than just APIs and Websites.
  • LLMs can write serverless functions extremely well.
  • Serverless is more than just Lambda at this point.

I understand that people tried to shove everything into AWS Lambda, which was a huge mistake, but that doesn't mean we should throw it all away. There are legitimate use-cases for serverless.

But I understand, this isn't a real article, it's your way to push sliplane, your own service based on Docker. I'm sure once you add some serverless functionality you will start singing its praises.

Given the quality of this piece, I wonder if it will help your company, or hurt it.

Dev.to is about more than getting a backlink or trying to make a buck. Be better!

Lukas Mauser

I think serverless has its niche, but running things in Lambdas is not at all trivial

Collapse
 
leob profile image
leob • Edited

The way you formulated it might be a bit extreme (sure to provoke clicks and comments, lol) but that doesn't mean it's not true - I think it is ...

At some point I also jumped on the "serverless"/lambda bandwagon, but I stopped being a "believer" a while ago - for sure there are some good use cases, but it's no longer my "religion", and in most cases there are probably better ways, as you pointed out ...

It's as with many hypes and overhyped things - NoSQL, GraphQL, Blockchain, Serverless - all of those haven't been the revolution they were made out to be - useful in specific "niche" scenarios, but none of them truly going "mainstream" :)

Dario D'Aversa

If you’re hating on ANY tech because your reliance on LLMs to properly use it is making you upset, then the tech isn’t the problem — you’re just a bad developer! This is exactly what the industry is starting to catch up with and being careful with new hires.

Getting opinionated because of LLM work is career suicide right now. For anyone reading this, do the opposite of what this person is saying.

Serverless is actually still very much alive and cost-saving if you actually know how & when to use it. This is the first time I saw a ridiculous rant like this one — but it makes sense considering the shameless plug.

Umang Suthar

LLMs thrive on open standards and universal tooling, exactly why we built haveto.com to run AI workloads directly on-chain using containers, not proprietary setups.

No cold starts. No vendor lock-in. No YAML puzzles.

Just real AI compute, fully transparent, and deployable like any standard container.

If you’re tired of fighting your stack instead of shipping features, you'll feel right at home here.

Let’s build smarter, not more complex.

Igor Romanov

To make AWS lambda (or Supabase edge function, or Netlify edge function or whatever) you just need to know how to write a handler. To use Docker you need to be a Linux expert just like with bare metal servers. Lambdas are for small teams, Docker is for teams with dedicated devops guy onboard I guess.

Ryan Dsouza

There's no relation between the rise of LLMs and the end of serverless. LLMs can easily create Terraform and/or CDK scripts that deploy straight to AWS, or any cloud provider for that matter. Abstractions exist and LLMs will learn those abstractions. Docker is an abstraction in itself, and so are the tools that deploy to the cloud.
