
OMFG so much this.

I worked at a large company that deployed its own Kubernetes stack on a VERY large number of physical hosts. The theory was that K8s would simplify our devops story enough that we could iterate quickly and scale linearly.

In reality, the K8s team ended up being literally 10x larger than the team building the application we were deploying on it. On top of that, K8s introduced entirely new categories of failure mode (ahem: CNI updates/restarts/bedsh*tting, operator and custom-resource failures, and tons of other ego-driven footguns).

The worst part? The application itself ran fine on a single dev workstation, but also on any random assortment of VMs. Just pass the consul details as environment variables. I am not saying everybody on K8S is in the same boat, but I think that far more people are planning on becoming a unicorn cloud service than have any hope of becoming a unicorn cloud service.
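To make the "just pass the consul details" point concrete, here is a minimal sketch of what that deployment story looks like. The variable names are the standard ones the Consul client libraries read; the app binary name is a placeholder, not anything from the original comment:

```shell
# Minimal sketch: run the app on any host (dev workstation, random VM)
# by handing it Consul connection details through the environment.
# CONSUL_HTTP_ADDR / CONSUL_HTTP_TOKEN are the standard Consul env vars.
export CONSUL_HTTP_ADDR="consul.internal:8500"   # where the Consul agent/server lives
export CONSUL_HTTP_TOKEN="example-token"         # only needed if ACLs are enabled

# ./app-server    # hypothetical binary; reads the vars above at startup
```

No scheduler, no CNI, no operators: the same two exports work on a laptop or on a fleet of VMs behind a load balancer.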

TL;DR: If your hosting solution requires more maintenance than the application itself, you made a boo-boo.



Using k8s is not remotely the same thing as maintaining your own k8s stack. One is easy (you can get your feet wet in an afternoon); the other is hard (maybe after a couple of months of full-time study you can pull it off).

The vast majority of teams that have enough crap to run to warrant using k8s should not be maintaining their own k8s stack.

In fact, it’s entirely possible that running k8s is so hard that the only players that can do it reliably are the big cloud companies.



