> Thinking of Kubernetes as a runtime for declarative infrastructure instead of a mere orchestrator results in very practical approaches to operate your cluster.

Unpopular opinion, but most of the problems I've seen with infrastructures using Kubernetes came from exactly this kind of approach.

Problems usually arise when we use tools to solve things they weren't made for. That is why - in my opinion - it is super important to treat a container orchestrator as a container orchestrator.

Kubernetes is explicitly designed to do what the article describes. In that respect the article is just describing what you can find in the standard Kubernetes docs.

> it is super important to treat a container orchestrator as a container orchestrator.

Which products do you think are only “container orchestrators”? Even Docker Compose is designed to achieve a desired state from a declarative infrastructure definition.
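For example, here is a minimal Compose sketch (the service name and image are picked arbitrarily for illustration): it declares a desired state, and `docker compose up` converges the running containers toward it whenever the file changes.

    services:
      web:
        image: nginx:1.27          # desired image and version
        ports:
          - "8080:80"              # desired published port
        restart: unless-stopped    # desired restart behaviour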


> Which products do you think are only “container orchestrators”? Even Docker Compose is designed to achieve a desired state from a declarative infrastructure definition.

How something describes the desired state (declaratively, for example) has nothing to do with whether it is a container orchestrator or not.

If you open the Kubernetes website, do you know what the first thing you will see is? "Production-Grade Container Orchestration". Even according to its own docs, Kubernetes is a container orchestrator.


I feel like the author has a good grasp of the Kubernetes design... What about the approach is problematic? And why don't you think that is how Kubernetes was designed to be used?

I wrote some personal stories below in this thread as a response to another user.

But then you need two different provisioning tools, one for infra in k8s, and one for infra outside k8s. Or you end up using non-native tools or wrappers.

> But then you need two different provisioning tools, one for infra in k8s, and one for infra outside k8s.

Yes, and 99% of companies do this. It is quite common to use Terraform/AWS CDK/Pulumi/etc. to provision the infrastructure, and ArgoCD/Helm/etc. to manage the resources on Kubernetes. There is nothing wrong with that.
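For example, the Kubernetes half of that split is often nothing more than an Argo CD Application pointing at a Git repo, while Terraform/CDK/Pulumi owns everything outside the cluster. A minimal sketch (the app name and repo URL below are made up):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app                  # hypothetical app name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://example.com/org/deploy-manifests.git   # hypothetical repo
        targetRevision: main
        path: apps/my-app
      destination:
        server: https://kubernetes.default.svc
        namespace: my-app
      syncPolicy:
        automated: {}               # Argo CD keeps reconciling the cluster toward the repo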


It would have helped if you told us why you don't like this approach.

It's right there:

> most of the problems I've seen with infrastructures using Kubernetes came from exactly this kind of approach

But here are some more concrete stories:

Once, while I was on call, I got paged because a Kubernetes node was running out of disk space. The root cause was the logging pipeline. Normally, debugging a "no space left on device" issue in a logging pipeline is fairly straightforward, if the tools are used as intended. This time, they weren't.

The entire pipeline was managed by a custom-built logging operator, designed to let teams describe logging pipelines declaratively. The problem? The resource definitions alone were around 20,000 lines of YAML. In the middle of the night, I had to reverse-engineer how the operator translated that declarative configuration into an actual pipeline. It took three days and multiple SREs to fully understand and fix the issue. Without that kind of declarative magic, an issue like this usually takes about an hour to solve.

Another example: external-dns. It's commonly used to manage DNS declaratively in Kubernetes. We had multiple clusters using Route 53 in the same AWS account. Route 53 has a global API request limit per account. When two or more clusters tried to reconcile DNS records at the same time, one would hit the quota. The others would partially fail, drift out of sync, and trigger retries - creating one of the messiest cross-cluster race conditions I've ever dealt with.
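For context, the declarative side of that setup looks roughly like this: each cluster's external-dns controller watches Services/Ingresses for hostname annotations and reconciles them into Route 53 records. A sketch (the service name and hostname below are made up):

    apiVersion: v1
    kind: Service
    metadata:
      name: api                     # hypothetical service
      annotations:
        external-dns.alpha.kubernetes.io/hostname: api.example.com   # record external-dns manages in Route 53
    spec:
      type: LoadBalancer
      selector:
        app: api
      ports:
        - port: 443
          targetPort: 8443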

And I have plenty more stories like these.


You mention a questionably designed custom operator and an add-on from a SIG. This is like blaming Linux for the UI in Gimp.

> a questionably designed custom operator

This is the logging operator, the most widely used logging operator in the cloud-native ecosystem (we built it).

> This is like blaming Linux for the UI in Gimp.

I never blamed anything; read my comment again. I only pointed out that problems arise when you use something to do something it is not built for. Like a container orchestrator managing infrastructure (DNS, logging pipelines). That is why I wrote that "it is super important to treat a container orchestrator as a container orchestrator". Not a logging pipeline orchestrator, or a control plane for Route 53 DNS.

This has nothing to do with Kubernetes, but with the people who choose to do everything with it (managing the whole infrastructure).


It's not like logging setups outside of k8s can't be a horror show too. Have you ever had to troubleshoot an rsyslog-based ELK setup? I'll forever have nightmares from debugging RainerScript mixed with declarative config and having to read the source code to find out why all of our logs were getting dropped in the middle of the night.

I'd also argue the whole external-dns thing could have happened with any dynamic DNS automation... And yes, it is a completely optional add-on!


