
Having extensively used Chef and K8s, the difference is that they try to deal with chaos in an unmanaged way (Puppet is the closest to "managed"), but when dealing with wild chaos you lack many ways of enforcing order. Plus they don't really do multi-server computation of resources.

What k8s brings to the table is a level of standardization. It's the difference between bringing some level of robotics to manual loading and unloading of classic cargo ships, vs. the fully automated containerized ports.

With k8s, you get structure where you can wrap an individual program's idiosyncrasies into a container that exposes a standard interface. This standard interface then allows you to easily drop it onto a server, with various topologies, resources, networking etc. handled through common interfaces.
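A minimal sketch of what that standard interface looks like in practice (the app name, image, and ports are all placeholders, not anything specific): a Deployment wrapping the container, and a Service exposing it through the common networking interface.

```yaml
# Hypothetical app; names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: example/myapp:1.0   # the container wraps the program's quirks
          ports:
            - containerPort: 8080
---
# The Service is the standard interface other workloads talk to.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Anything that can speak to the Service on port 80 doesn't need to know what's inside the container.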

I'd said that for a long time, but recently I got to understand just how much work k8s can "take away" when I foolishly said "eh, it's only one server, I will run this the classic way." Then I spent 5 days on something that could have been handled within an hour on k8s, because k8s virtualizes away HTTP reverse proxies, persistent storage, and load balancing in general.
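For illustration, here's roughly what two of those things look like once k8s has virtualized them: an Ingress standing in for the HTTP reverse proxy / load balancer, and a PersistentVolumeClaim for storage (hostname, service name, and size are all made up):

```yaml
# Illustrative only: host, backend service name, and size are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
---
# Storage is requested declaratively; the cluster figures out where it lives.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

The reverse-proxy config, TLS wiring, and disk provisioning that would otherwise be hand-built per server all hang off these few lines.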

Now I'm thinking of deploying k8s at home, not to learn, but because I know it's easier for me to deploy nextcloud, or an ebook catalog, or whatever, using k8s than by setting up a more classical configuration management system and dealing with inevitable drift over time.



What a container lets you do is move a bunch of imperative logic and decisions to build time, so that there are very few decisions made at deploy time. I'm not trying to have a bunch of decisions made that are worded as statements. I've watched a long succession of 'declarative' tools that make a bunch of decisions under the hood alienate a lot of people who can't or won't think that way, and nobody really should have to, even if they can. There are so many things I'd rather be doing with my day than dealing with these sorts of systems, but otherwise it won't get done, and I'm heavily invested in the outcome.

I think the build, deploy, start and run-time split is an important aspect that gets overlooked quite a bit, and is critical to evaluating tools at this point. That is why we aren't still doing everything with Chef or Puppet. Whether we continue doing it with Kubernetes or Pulumi or something else matters a bit less.

Repeatability is not the goal, as others in this thread have implied. The goal is trusting that the button will work when you push it. That if it doesn't work, you can fix it, or find someone who can. Doing that without repeatability is pretty damned hard, certainly, but there are ways to chase repeatability without ever arriving at the actual goal.


> Now I'm thinking of deploying k8s at home, not to learn, but because I know it's easier for me to deploy nextcloud, or an ebook catalog

can't you do that just with containers?


But what do you use to manage those containers and the surrounding infra (networking, proxies, etc.)? I've been down the route of using Puppet for managing Docker containers on existing systems, Ansible, Terraform, Nomad/Consul. But in the end it's all just tying different solutions together to make it work. Kubernetes (in the form of K3s or another lightweight implementation) just works for me, even in a single-server setup. I barely have to worry about the OS layer: I just flash K3s to a disk and only have to talk to the Kubernetes API to apply declarative configurations. The only things I still sometimes need the OS layer for are networking, firewall, or hardening of the base OS. But that configuration is mostly static anyway, and I'm sure I'll find some operators to manage those through the Kubernetes API as IaC if I really need to.


I used to have a bunch of bash scripts for bootstrapping my docker containers. At one point I even made init scripts, but that was never fully successful.

And then one day I decided to set up Kubernetes as a learning experiment. There is definitely a learning curve: making sure I understood what a Deployment, ReplicaSet, Service, Pod, or Ingress was, and how to properly set them up for my environment. But now that I have that, adding a new app to my cluster and making it accessible is super low effort. I have previous YAML files to base my new app's config on.

It feels like the only reasons not to use it would be the learning curve and initial setup... but after I overcame the curve, it's been a much better experience than trying to orchestrate containers by hand.

Perhaps this is all doable without Kubernetes, and there is a learning curve, but it's far from the complicated nightmare beast everyone makes it out to be (from the user side, at least; maybe it is from the implementation-details side).


I can do it with just containers, yes.

It would mean I removed ~20% of the things that were annoying me and left 80% still to solve, while Kubernetes does 80% for me, with the remaining 20% being mostly "assemble these blocks".

Also, a huge plus of k8s for me was that it abstracted away the horrible interfaces and behaviours of the Docker daemon and Docker CLI.


>Now I'm thinking of deploying k8s at home

Are we talking about k8s running on your own server rack at your house?


K3s on a few devices. Thinking of grabbing an HP MicroServer or something with a similar case for an ITX Ryzen (embedded EPYC would probably be too expensive), some storage space, and maybe connecting a few extra bits of compute into a heterogeneous cluster. Put everything except maybe PiHole on it, with a ZFS pool exported over a bunch of protocols as the backing store for persistent volume claim support.
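A sketch of how that last part could be wired up, assuming the ZFS dataset is exported over NFS (server IP, dataset path, and sizes are all illustrative): a statically-defined PersistentVolume pointing at the export, pre-bound to a claim.

```yaml
# Illustrative: server address, export path, and sizes are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tank-media
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10      # the box holding the ZFS pool
    path: /tank/media         # dataset shared e.g. via `zfs set sharenfs=on tank/media`
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # empty class: bind to the static PV, not a provisioner
  volumeName: tank-media
  resources:
    requests:
      storage: 500Gi
```

Workloads then mount the claim without caring that ZFS and NFS sit underneath.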



