One side of this: our base k8s config is 44k lines of YAML, which means a bunch of controllers and pods running in a rather complex fashion.
Not to mention the complexity of k8s itself and its codebase.
It black-boxes so many things. I still chuckle at the fact that Ansible is a first-class citizen of the Operator Framework.
It can certainly implode on you!
In my experience, running Nomad & Consul is a lighter-weight and simpler way of doing many of the same things.
It’s a bit like the debate raging around systemd and the fact that it’s not “unixy”. I get the same feeling with k8s, whereas the HashiCorp stuff, albeit with fewer features, adheres more closely to the Unix philosophy.
Thus it's easier to maintain and integrate.
Edit: sorry, I missed the dot - I meant to write 4.4k lines, but grepping through the templates dir it's actually closer to 12k lines.
Ah, no, it's not about replacing functionality. It's about opening up for general integrations and ease of use.
If you've set up a fully fledged infrastructure on k8s with all the bells and whistles, there's a whole lot of configuration going on here. Like a whole lot!
I most certainly can't replace all of the above with those two tools, but they make it easier to integrate in any way I see fit. What I'm saying is that Nomad is a general-purpose workload scheduler, whereas k8s schedules k8s pods only.
Consul just provides "service discovery" - do with it what you want. And so on...
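To make that "do with it what you want" point concrete, here's a minimal sketch in Go using the official github.com/hashicorp/consul/api client - the service name, port, and health-check URL are made up for illustration, and there's nothing container-specific about any of it:

```go
package main

import (
	"log"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	// Talk to the local Consul agent (default: 127.0.0.1:8500).
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register any process as a service - a container, a VM, a bare-metal binary.
	// Name, port, and health-check endpoint here are purely illustrative.
	reg := &consul.AgentServiceRegistration{
		Name: "billing-api",
		Port: 8080,
		Check: &consul.AgentServiceCheck{
			HTTP:     "http://127.0.0.1:8080/health",
			Interval: "10s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
	log.Println("registered billing-api with the local Consul agent")
}
```

Anything that can hit the agent's API (or drop a JSON file in the config dir) can participate - that's the whole contract.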
Having worked a couple of years with both of these setups, I'm a bit torn. K8s brings a lot, no doubt, but I get the feeling that the whole point of it is for Google to make sure you _do not ever invest in your own datacenters_.
k8s on your own bare metal at least used to be not exactly straightforward.
> k8s on your own bare metal at least used to be not exactly straightforward.
I actually just deployed k8s on a raspberry pi cluster on my desk (obviously as a toy, not for production) and it took about an hour to get things fully functional minus RBAC.
> What I'm saying is that Nomad is a general purpose workload scheduler
Yeah, Nomad and k8s are not direct replacements at all. Nomad is a great tool for mixed workloads, but if you're purely k8s then there are more specific tools you can use.
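As a rough sketch of the "mixed workloads" point, here's what submitting a non-container job can look like via Nomad's Go API client (github.com/hashicorp/nomad/api), using the raw_exec driver to run a plain binary - job names and the command are invented for the example:

```go
package main

import (
	"log"

	nomad "github.com/hashicorp/nomad/api"
)

func main() {
	client, err := nomad.NewClient(nomad.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// A service job running a plain binary via the raw_exec driver -
	// no container involved. Names and command are illustrative only.
	job := nomad.NewServiceJob("batch-reports", "batch-reports", "global", 50)
	task := nomad.NewTask("report-gen", "raw_exec")
	task.SetConfig("command", "/usr/local/bin/report-gen")
	task.SetConfig("args", []string{"--daily"})

	group := nomad.NewTaskGroup("reports", 1)
	group.AddTask(task)
	job.AddTaskGroup(group)

	if _, _, err := client.Jobs().Register(job, nil); err != nil {
		log.Fatal(err)
	}
	log.Println("submitted raw_exec job to Nomad")
}
```

Swap the driver for docker, exec, java, etc. and the rest of the scheduling story stays the same - that's the appeal for mixed fleets.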
> I meant to write 4.4K lines
Just a small difference! Glad no one wrote 44k lines of yaml, that's just a lot of yaml to write...
> close to 12k lines
Our production cluster (not the raspis running on my desk!) runs in about 4k lines, but we have a fairly simple networking and RBAC layer. We also started from plain Kubernetes and grew organically over time, so I'm sure someone starting today has a big advantage in getting up and running more easily.
If you want “cloud-style” ingress, you’ll probably use MetalLB, BGP, etc.
Here’s where it gets fun.
I mean, don’t get me wrong, it works - now at least. Never liked it until 1.12 tbh, which is when a bunch of things settled.
The article is about “maybe you don’t need...”, and as an anecdote, I helped build a really successful $$$ e-commerce business with a couple of hundred microservices on an on-prem LXC/LXD “cloud” using Nomad, Vault & Consul.
You can use these tools independently of each other or have them work together - unixy.
I have anecdotes from my last couple of years on k8s as well, and... it just ends up with a much narrower scope.
Sort of similar to the fact that I basically always have both tmux and dtach installed, the latter for "just make this detachable", the former for "actually I'd like some features today".
Something like that. I want service discovery, which is where Consul really shines - 'cause containers ain't all we're doing, mkay. K8s forces etcd on you for service discovery, and only within the cluster.
So you end up syncing it... more code, more complexity, and no "pipe" ("pipe" not to be taken literally here) or simple integrations.
Not to mention, Consul is already a full DNS server, but in k8s we need yet another pod for CoreDNS. Is YAP a thing? =)
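For what it's worth, the DNS side really is just "point a resolver at the agent". A quick Go sketch, assuming a local Consul agent on its default DNS port 8600 and a hypothetical registered service called "billing-api":

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
)

func main() {
	// Point a resolver at the local Consul agent's DNS interface
	// (default port 8600). The service name is illustrative.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, network, "127.0.0.1:8600")
		},
	}

	// An SRV lookup returns both address and port of healthy instances.
	_, srvs, err := r.LookupSRV(context.Background(), "", "", "billing-api.service.consul")
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range srvs {
		fmt.Printf("%s:%d\n", s.Target, s.Port)
	}
}
```

Anything that speaks DNS - inside or outside the cluster - can consume it, no extra pod required.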
For example, I love how easily Envoy integrates with any service discovery of your liking, even your own - simply present it with a specific JSON response from an API, much like how Ansible's dynamic inventory works. It makes integrating, picking and choosing, as well as maintaining and extending your configuration management, just so much more pleasant.
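As a sketch of that "just serve JSON" style of integration: a tiny HTTP endpoint that reads healthy instances out of Consul and returns them as JSON. The path and response shape below are illustrative - what Envoy actually expects depends on which xDS/EDS flavour you configure - but the point is that it's only an HTTP API, backed by whatever source you like:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	consul "github.com/hashicorp/consul/api"
)

// host is an illustrative response shape - adapt it to whatever your
// discovery consumer (Envoy, your own tooling, ...) actually expects.
type host struct {
	IPAddress string `json:"ip_address"`
	Port      int    `json:"port"`
}

func main() {
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// GET /endpoints/<service> -> healthy instances from Consul as JSON.
	http.HandleFunc("/endpoints/", func(w http.ResponseWriter, r *http.Request) {
		service := r.URL.Path[len("/endpoints/"):]
		entries, _, err := client.Health().Service(service, "", true, nil)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		hosts := make([]host, 0, len(entries))
		for _, e := range entries {
			addr := e.Service.Address
			if addr == "" {
				addr = e.Node.Address
			}
			hosts = append(hosts, host{IPAddress: addr, Port: e.Service.Port})
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string][]host{"hosts": hosts})
	})

	log.Fatal(http.ListenAndServe(":9090", nil))
}
```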
Just my 2¢.