Hacker News

Cost aside, I wonder how far you can get with something like a managed newsql database (Spanner, CockroachDB, Vitess, etc.) and serverless.

Most providers at this point offer ephemeral containers or serverless functions.

Does a product-focused, non-infra startup even need k8s? In my honest opinion people should be using Cloud Run. It’s by far Google’s best cloud product.

Anyway, going back to the article - k8s is hard if you’re doing hard things. It’s pretty trivial to do easy things using k8s, which only leads to the question - why not use the cloud equivalents of all the “easy” things? Monitoring, logging, pub/sub, etc.; basically all of these things have cloud equivalents as services.

The question is, cost aside, why use k8s? Of course, if you are cost constrained you might do bare metal, a cheaper colocation, or maybe even a cheap cloud like DigitalOcean. Regardless, you will bear the cost one way or another.

If it were really so easy to use k8s to productionize services to then offer as a SaaS, everyone would do it. Therefore I assert: unless those things are your service, you should use the cloud services. Don’t use cloud VMs, use cloud services, and preserve your sanity. After all, if you’re not willing to pay someone else to be on call, that implies the arbitrage isn’t really there enough to drive the cost down to where you’d pay, which might imply it isn’t worth your time either (infra companies aside).



Cloud services are shit unless I can run them locally when developing and testing.


Google's Firebase emulator is really, really good. Fantastic first party emulator suite.

LocalStack for AWS also lets you emulate a huge chunk of AWS locally. I use it to emulate SNS/SQS locally.


That may be so, which is why I recommend Cloud Run or an equivalent.


why on earth do you need a functional implementation of a consumed service for testing?


Iteration speed and blazing fast automated tests. When I discovered minio, I suddenly got much more confident coding against s3.
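One common pattern (a minimal sketch; the `S3_ENDPOINT_URL` variable name is my own, not anything standard): make the endpoint configurable, so the same code talks to a local MinIO in tests and real S3 in prod:

```python
import os


def s3_client_kwargs(env=os.environ) -> dict:
    """Build kwargs for boto3.client("s3").

    Assumes a (hypothetical) S3_ENDPOINT_URL variable pointing at a
    local MinIO, e.g. http://localhost:9000; unset means real S3.
    """
    kwargs = {}
    endpoint = env.get("S3_ENDPOINT_URL")
    if endpoint:
        kwargs["endpoint_url"] = endpoint
    return kwargs


# In application code (the boto3 call is illustrative, not run here):
#   import boto3
#   s3 = boto3.client("s3", **s3_client_kwargs())
```

Your automated tests then never touch the network or the real bucket, which is where the iteration speed comes from.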


> Iteration speed and blazing fast automated tests.

Wholeheartedly agreed!

It is also nice to have that additional assurance of being able to self-host things (if ever necessary) and not being locked into a singular implementation. For example, that's why managed database offerings generally aren't that risky to use, given that they're built on already established projects (e.g. compatible with MySQL/MariaDB/PostgreSQL).

> When I discovered minio, I suddenly got much more confident coding against s3.

MinIO is pretty good, but licensing-wise it could become problematic if you don't work on something open source but ever want to run it in prod. Not really what this discussion is about, but the AGPL is worth mentioning: https://github.com/minio/minio/blob/master/LICENSE

That said, thankfully S3 is so common that we have alternatives even to MinIO, like Zenko https://www.zenko.io/ which is good both for local development and for hosting in whatever environments necessary. I was actually about to mention Garage as well, which seems better because it's a single executable, but they also switched to the AGPL; probably not an issue for local testing, though: https://garagehq.deuxfleurs.fr/


Or just app engine honestly.

Works with docker containers so you can run the same simple stack locally as in prod. No need for more exotic serverless architectures.

Generous free tier, too!

Have only good things to say about it for quickly firing up a product.
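For reference, running your own container on App Engine needs very little config; a minimal sketch of an `app.yaml` for the flexible environment (the service name is a placeholder), which builds and runs the Dockerfile sitting next to it:

```yaml
# app.yaml - App Engine flexible environment, custom (Docker) runtime
runtime: custom
env: flex
service: my-api  # placeholder service name
```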


> Does a product-focused, non-infra startup even need k8s? In my honest opinion people should be using Cloud Run. It’s by far Google’s best cloud product.

> Or just app engine honestly.

As a former App Engine PM and PM of Cloud Run, this warms my heart to hear--I'm glad folks love to use it :)

It's been a few years since I've worked on these products, but they were designed to be composed and used by each other. Cloud Run provides an autoscaling container runtime with Knative compatible API; Cloud Build (and Container Registry) + Cloud Run = App Engine, App Engine + CloudEvents = Cloud Functions.

With a Knative compatible API, you can theoretically take those primitives and move them to your own K8s cluster, managed (GKE/EKS/AKS) or self-managed, giving folks a tremendous amount of flexibility down the line if/when they need it (hint: the majority of customers didn't, and the fully managed services were great).
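That Knative-shaped API is concrete: a sketch of a manifest (image name is a placeholder) that Cloud Run accepts and that a Knative-enabled K8s cluster would also accept:

```yaml
# Hypothetical Knative Service manifest; Cloud Run speaks the same
# serving.knative.dev/v1 shape, which is what makes the later move
# to your own cluster possible.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello  # placeholder image
```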


Oh I see, that's cool, thanks!

I had the idea that cloud run was like a (possibly auto-scaling?) abstraction of a container runtime or something so it makes sense that app engine uses it.

I don't think I have been aware of that fact though when using app engine in the past -- which is good, by the way, I don't want to know or care about this, just run my containers somehow :-)


Similar with fly.io: I have an application running on there and was pleasantly surprised that they don't even charge if the amount is under $5/month. I've been very happy with how easy it is to deploy and with the hosted Postgres. I'm using the JVM and it works well; I originally played around with Elixir and was especially impressed with their feature set for it.


I think you could push that setup far. I'm not familiar with GCP or Cloud Run, but it probably integrates nicely with other services GCP offers (for debugging, etc.).

I'd be curious to read if anybody has that setup and what scale they have.


Regarding the second part, I totally agree, either use cloud or don't. For some reason, most companies want to be cloud-agnostic and so they stay away from things that are too difficult to migrate between cloud providers.


> In my honest opinion people should be using Cloud Run. It’s by far Google’s best cloud product.

Is this the same thing as running containers in Azure App Services?


It’s similar, though I’d say that one is more like App Engine. The Azure equivalent is probably Azure Container Instances.


Azure Container Apps is the equivalent.

AWS also has App Runner but it doesn't scale to 0.


This. Greenfield products should be serverless by default. By the time you have sustained traffic to the point where you can run the numbers and think that you could save money by switching off serverless, that's a Good Problem To Have, one for which you'll have investors giving you money to hire DevOps to take care of the servers.


Disagree

I tried to use Lambda. Cold startup really is awful. You have to deal with running DB migrations in Step Functions or find other solutions. Aurora Serverless also does not scale to zero. Once you get traffic you overload RDS and need to pay for and set up an RDS proxy, and don't get me started on the pointless endeavor of trying to keep your Lambdas warm, which sort of defeats the point. Serverless is not actually serverless, and it ends up costing more for less performance and more complexity.

It's way simpler and cheaper to start with a single VPS as a single point of failure, then over time graduate to running Docker Compose or a single-node k3s cluster on that VPS, and eventually scale out to more nodes…
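The first step of that graduation path can be as small as this (a sketch; image names and the password are placeholders):

```yaml
# docker-compose.yml for a single VPS: one app container, one database
services:
  app:
    image: registry.example.com/my-app:latest  # placeholder image
    restart: unless-stopped
    ports:
      - "80:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: change-me  # placeholder secret
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

The same compose file runs on a laptop and on the VPS, which is a big part of the appeal.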


Without more details on how you tried to set up lambda...

> Cold startup really is awful

It has gotten significantly better over time, particularly for VPC-connected functions, as AWS no longer creates an ENI per function but re-uses an ENI. If most of your UI code is elsewhere (CDN, mobile app) then you're not hitting the lambda endpoint for initial UI draws.

> db migrations in step functions or find other solutions

Fargate? Especially as DB migrations might exceed the 15-minute maximum runtime for Lambda.

> aurora serverless also does not scale to zero

Serverless V2 does auto-pause, but yeah, I agree that AWS's serverless SQL portfolio is lacking compared to Planetscale, Neon, other similar new entries. Which you can run without RDS proxy or a VPC.

I'll agree that projects for which response latency needs to be lower than what cold starts will reasonably permit should pick a different architecture, but I don't think most greenfield product projects are so latency-sensitive. That sounds to me like premature optimization.

> It's way simpler and cheaper to start with a single VPS as a single point of failure, then over time graduate to running Docker Compose or a single-node k3s cluster on that VPS, and eventually scale out to more nodes…

Cheaper in raw early cloud infrastructure costs, sure. But cheaper in total cost of ownership, particularly as the service starts to scale, once you count overprovisioning waste and engineering time spent on concerns unrelated to product value? For any project whose autoscaling behavior would be bursty, or at best unknown, I beg to differ.


Serverless does not necessarily mean lambda. It could be just about anything that runs containers for you. AWS ECS has an offering called Fargate that I've been happy with for our hosting. You are right though that the compute costs are typically more than renting a traditional VPS. There is definitely a tradeoff between labor and compute costs.


The point of serverless isn't to run 0 servers in your downtime, it's to abstract away everything related to running hardware. I have an app that is built on a runtime (JDK, Node, whatever) and I shouldn't have to deal with anything below that layer.



