> If you're building a cluster from scratch, on your own hardware, setting up the control plane yourself etc. it's very very hard.

I'd like to make a distinction:

If you deploy clusters from scratch, Kubernetes actually gets easier, because you get good at it. What it also gets is more time-consuming.

If you aim to replicate all of AWS, it gets unrealistically time-consuming.

The art is knowing when to stop. For a lot of people that's even before Kubernetes.

For others, it means running all your apps in the cluster, except persistent storage.



At the end of the day, getting a stable production environment is a tradeoff: adding complexity where you need it to make the infrastructure do what you want, and removing complexity everywhere else, because every extra component is a potential failure point.

K8s is nice and all, but if everything you really need can be solved by two VMs and a way to announce an anycast address (or a load balancer if that's not an option), why would I add all that complexity?
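
To make that concrete, here's a minimal sketch of what announcing the anycast address from each VM could look like with BIRD 2. All ASNs, addresses, and the upstream neighbor are placeholders I'm assuming for illustration, not details from this thread:

    # /etc/bird/bird.conf on each VM (minimal sketch; every value is a placeholder)
    router id 192.0.2.10;

    protocol device {
    }

    protocol static announce_anycast {
      ipv4;
      # the shared anycast service address, also bound to lo on both VMs
      route 203.0.113.1/32 via "lo";
    }

    protocol bgp upstream {
      local as 64512;               # private ASN, placeholder
      neighbor 192.0.2.1 as 64511;  # upstream router, placeholder
      ipv4 {
        import none;
        export where source = RTS_STATIC;
      };
    }

Both VMs announce the same /32 from loopback, so the network delivers traffic to whichever machine is still up. No control plane required.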


From experience, all I ever want from a system is that it’s reproducible.

I had a vanity website running on k8s in a managed cloud. I thought I was covered by my provider’s backups and my original Ansible deployment, which was of course developed iteratively.

I originally did this mostly as a practice deployment, to learn the workflow.

A few years later, it went down and I didn’t notice for a few weeks. It was too unimportant to invest in monitoring, and not worth redoing from scratch. Redeploying gave me lots of confusing errors (note: I also do this stuff for work).

Frankly, I was surprised that the provider’s updates would make my years-stable site fall over. I haven’t tried that one again for funsies, yet. It’s the vendor specificity that was my d’oh!


I totally agree.

I've gotten away with docker-compose for container orchestration for the last 16 months.
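
At that scale, something like this minimal compose file is usually all it takes. This is a hypothetical sketch; the images, port, and password are illustrative placeholders, not details from my actual setup:

    # docker-compose.yml: minimal sketch, all values are placeholders
    services:
      web:
        image: nginx:1.27
        ports:
          - "80:80"
        restart: unless-stopped   # the Docker daemon keeps it running, no orchestrator needed
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example   # placeholder secret
        volumes:
          - db-data:/var/lib/postgresql/data
        restart: unless-stopped
    volumes:
      db-data:

With restart policies set, the daemon brings everything back after crashes and reboots, which covers most of what a single-host deployment actually needs.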



