
Disagree

I tried to use Lambda. Cold startup really is awful. You have to deal with running DB migrations in Step Functions or find other solutions, and Aurora Serverless doesn't scale to zero. Once you get traffic you overload RDS and need to pay for and set up an RDS Proxy, and don't get me started on the pointless endeavor of trying to keep your Lambdas warm, which sort of defeats the point. Serverless isn't actually serverless, and it ends up costing more for less performance and more complexity.

It's way simpler and cheaper to start with a single VPS as a single point of failure, then over time graduate to running Docker Compose or a single-node k3s cluster on that VPS. And then eventually scale out to more nodes…
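As a sketch of the kind of single-VPS starting point described here, a minimal Compose file might look like this; the image name, credentials, and ports are all hypothetical placeholders:

```yaml
# Hypothetical single-VPS docker-compose.yml: one app container plus its
# database, both restarted automatically if the host reboots.
services:
  app:
    image: registry.example.com/myapp:latest   # placeholder image
    restart: unless-stopped
    ports:
      - "80:8080"
    environment:
      DATABASE_URL: postgres://myapp:secret@db:5432/myapp
    depends_on:
      - db
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

The upgrade path is the point: the same service definitions map fairly directly onto k3s manifests later, so nothing about the single-node start is throwaway work.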



Without more details on how you tried to set up Lambda...

> Cold startup really is awful

It has gotten significantly better over time, particularly for VPC-connected functions, since AWS no longer creates an ENI per function instance but reuses a shared ENI. And if most of your UI code is served from elsewhere (CDN, mobile app), then you're not hitting the Lambda endpoint for initial UI draws.

> db migrations in step functions or find other solutions

Fargate? Especially as DB migrations might exceed the 15-minute maximum runtime for Lambda.

> aurora serverless also does not scale to zero

Serverless v2 does auto-pause, but yeah, I agree that AWS's serverless SQL portfolio is lacking compared to PlanetScale, Neon, and other similar new entries, which you can run without RDS Proxy or a VPC.
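For reference, auto-pause on Aurora Serverless v2 is expressed by letting capacity scale down to 0 ACUs. A CloudFormation-style sketch, with the resource name and engine purely illustrative (and `MinCapacity: 0` only valid on engine versions that support auto-pause):

```yaml
# Hypothetical CloudFormation fragment: Aurora Serverless v2 cluster
# allowed to pause by scaling to 0 ACUs when idle.
Resources:
  AppDbCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-postgresql
      ServerlessV2ScalingConfiguration:
        MinCapacity: 0   # 0 ACUs enables auto-pause on supported versions
        MaxCapacity: 4
```

This only shows the scaling bounds; a real stack would also need an associated `AWS::RDS::DBInstance` with the `db.serverless` instance class.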

I'll agree that projects for which response latency needs to be lower than what cold starts will reasonably permit should pick a different architecture, but I don't think most greenfield product projects are so latency-sensitive. That sounds to me like premature optimization.

> way simpler and cheaper to start with a single VPS single point of failure, then over time graduate to running docker compose or a single node k3 cluster on that VPS. And then eventually scale out to more nodes…

Cheaper in raw early cloud infrastructure costs, sure. But cheaper in total cost of ownership, particularly as the service starts to scale? That has to account for overprovisioning waste and engineering time spent on concerns unrelated to product value, so for any project whose scaling behavior would be bursty, or at best unknown, I beg to differ.


Serverless does not necessarily mean lambda. It could be just about anything that runs containers for you. AWS ECS has an offering called Fargate that I've been happy with for our hosting. You are right though that the compute costs are typically more than renting a traditional VPS. There is definitely a tradeoff between labor and compute costs.


The point of serverless isn't to run 0 servers in your downtime; it's to abstract away everything related to running hardware. I have an app that's built on a runtime (JDK, Node, whatever), and I shouldn't have to deal with anything below that layer.



