
Why do devops keep piling abstractions on top of abstractions?

There's the machine. Then the VM. Then the container. Then the orchestrator. Then the controller. And it's all so complex that you need even more tools to generate the configuration files for the former tools.

I don't want to write a Kubernetes controller. I don't even know why it should exist.



Right now I’m typing on a glass screen that pretends to have a keyboard on it, running a web browser developed with a UI toolkit in a programming language that compiles down to an intermediate bytecode that’s compiled to machine code that’s actually interpreted as microcode on the processor, with half of it farmed out to accelerators and coprocessors of various kinds, all assembled out of a gajillion transistors that neatly hide the fact that we’ve somehow made it possible to make sand think.

The number of layers of abstraction you’re already relying on just to post this comment is nigh uncountable. Abstraction is literally the only way we’ve continued to make progress in any technological endeavor.


I think the point is that there are abstractions that require you to know almost nothing (e.g. the fact that my laptop has an SSD with blocks that are constantly dying is abstracted into a filesystem that looks like a basic tree structure).

Then there are abstractions that may actually increase cognitive load: "What if instead of thinking about chairs, we philosophically think about ALL standing furniture types: stools, tables, etc.? They may have 4 legs, 3, 6? What about car seats too?"

AFAICT writing a kubernetes controller is probably an overkill, challenge-yourself-level exercise (e.g. a quine in BF), because odds are that, for any resource you've ever needed to manage, somebody else has already built an automated way to do it.

Would love to hear other perspectives though if anybody has great examples of when you really couldn't succeed without writing your own kubernetes controller.


Yes, k8s is an abstraction, and it's a useful one, even though not everyone needs it. At this new level of abstraction, your hardware becomes homogeneous, making it trivial to scale and recover from hardware failures since k8s automatically distributes your application instances across the hardware in a unified manner. It also has many other useful capabilities downstream of that (e.g. zero downtime deployment/rollback/restart). There's not really any other (well supported) alternative if you want that. Of course, most organizations don't need it, but it's very nice to have in a service oriented system.


> There's not really any other (well supported) alternative if you want that

You don't think AWS autoscale groups give you both of those things?


I think you comically underestimate what Kubernetes provides.

Autoscaling groups give you instances, but Kubernetes automatically and transparently distributes all your running services, jobs, and other workloads across all those instances.

Amongst a laundry list of other things.


I think you’re comically misunderstanding what 95% of companies actually are doing with kubernetes


A big part of what kubernetes provides is a standard interface. When your infra guy gets hit by a bus, someone else (like a contractor) can plug in blindly and at least grasp the system in a day or two.


AWS autoscaling does not take your application logic into account, which means that aggressive downscaling will, at worst, lead your applications to fail.

I'll give a specific example with Apache Spark: AWS provides a managed cluster via EMR. You can configure your task nodes (i.e. the instances that run the bulk of the jobs you submit to Spark) to be autoscaled. If these jobs fetch data from managed databases, you might have RDS configured with autoscaling read replicas to support higher-volume queries.

What I've frequently seen happening: tasks fail because the task node instances were downscaled toward the end of the job (they were no longer consuming enough resources to stay up) even though the tasks themselves hadn't finished. Or tasks fail because database connections were suddenly cut off, since RDS read replicas were no longer transmitting enough data to stay up.

The workaround is to have a fixed number of instances up, and pay the costs you were trying to avoid in the first place.

Or you could have an autoscaling mechanism that is aware of your application state, which is what k8s enables.


> since RDS read replicas were no longer transmitting enough data to stay up.

As an infra guy, I’ve seen similar things happening multiple times. This could be a non-problem if developers handled the connection-lost case: reconnection with retries and stuff.

But most developers just don’t bother.

So we’re often building elastic infrastructure that is consumed by people who write code as if we were still in the late '90s, with single-instance DBs expected to always be available.
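
To be concrete, the handling I mean doesn't have to be fancy. A minimal sketch in Go (the function name, DSN, and backoff values are just illustrative; any database/sql driver behaves the same way):

    package main

    import (
        "context"
        "database/sql"
        "fmt"
        "time"

        _ "github.com/lib/pq" // any database/sql driver works
    )

    // queryWithRetry retries a read query with exponential backoff, so a read
    // replica disappearing mid-run doesn't fail the whole job. database/sql
    // re-dials dropped pool connections on the next attempt.
    func queryWithRetry(ctx context.Context, db *sql.DB, q string, attempts int) (*sql.Rows, error) {
        var lastErr error
        backoff := 100 * time.Millisecond
        for i := 0; i < attempts; i++ {
            rows, err := db.QueryContext(ctx, q)
            if err == nil {
                return rows, nil
            }
            lastErr = err
            time.Sleep(backoff)
            backoff *= 2
        }
        return nil, fmt.Errorf("query failed after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        // Placeholder DSN; point it at your read replica endpoint.
        db, err := sql.Open("postgres", "postgres://app:secret@read-replica:5432/app?sslmode=disable")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        rows, err := queryWithRetry(context.Background(), db, "SELECT 1", 5)
        if err != nil {
            panic(err)
        }
        defer rows.Close()
    }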


ASGs can do both of those things; it's a 5% use case, so it takes a little more work, but not much.


Can you elaborate on that "little more work", given that resizing on demand isn't sufficient for this use-case, and predictive scaling is also out of the question?


Nothing wrong with ASGs, but they're not really comparable to k8s. k8s isn't simply "scaling"; it's a higher level of abstraction that has granular control and understanding of your application instances, in a manner that allows it to efficiently spread workloads across all your hardware automatically, all while managing service discovery, routing, load balancing, rollbacks, and much more. Comparing it to ASG suggests you may not be that familiar with k8s.

I think it's fair to argue that k8s is overkill for many or even most organizations, but ASG is not even close to an alternative.


It seems that you don't understand ASGs. They do all the things that you listed.

K8s is essential when working with a fleet of bare-metal machines. It's an unneeded abstraction if you're just going to deploy it on AWS or similar.


Those only require you to understand them because you’re working directly on top of them. If you were writing a filesystem driver you would absolutely need to know those details. If you’re writing a database backend, you probably need to know a lot about the filesystem. If you’re writing an ORM, you need to know a lot about databases.

Some of these abstractions are leakier than others. Web development coordinates a lot of different technologies, so oftentimes you need to know about a wide variety of topics, and sometimes a layer below those. Part of it is that there’s a lot less specialization in our profession than in others, so we need lots of generalists.


I think you're sort of hand-waving here.

I think the concrete question is -- do you need to learn more or fewer abstractions to use kubernetes versus say AWS?

And it looks like Kubernetes means more abstractions in exchange for more customization. I can understand why somebody would roll their eyes at a system that has as much abstraction as Kubernetes does if their use case is very concrete - they are scaling a web app based on traffic.


Kubernetes and AWS aren’t alternatives. They occupy vastly different problem spaces.


Not really.


Kubernetes isn't locked to any vendor.

Try moving your AWS solution to Google Cloud without a massive rewrite.

Also, Kubernetes doesn't actually deal with the underlying physical devices directly. That would be done with something like Terraform or, if you're still hardcore, shell scripts.


I’ve never seen a single company use kubernetes or terraform to move vendors; the feasibility of that was massively overrepresented.


Well, we did at my company when we moved from AWS to GCP.


Sure, what do I know, I only operate the Kubernetes platform (on AWS) that runs most of a $50bn public company.


"It is difficult to get a man to understand something when his salary depends on his not understanding it." - Upton Sinclair


My salary directly depends upon me deeply understanding both AWS and Kubernetes. Better luck next time.


I wrote a tiny one that worked as glue between our application's opinion on how node DNS names should look, and what the ExternalDNS controller would accept automatically. When GKE scaled the cluster or upgraded nodes, manual steps were required to fix the DNS. So, instead of rewriting a ton of code all over our app, and changing the other environments we were running on, I just wrote a ~100 line controller that would respond to node-add events by annotating the node in a way ExternalDNS would parse, which in turn automatically created DNS entries in the form we wanted.
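
The shape of it was roughly this (a minimal sketch using client-go informers; the annotation key and hostname format here are made up for illustration, not what we actually shipped):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/cache"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Watch node-add events through a shared informer.
        factory := informers.NewSharedInformerFactory(client, 0)
        nodeInformer := factory.Core().V1().Nodes().Informer()

        nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                node := obj.(*corev1.Node)
                // Hypothetical annotation key and hostname scheme: annotate the
                // node so a DNS controller like ExternalDNS can pick it up.
                patch := []byte(fmt.Sprintf(
                    `{"metadata":{"annotations":{"dns.example.com/hostname":"%s.nodes.example.com"}}}`,
                    node.Name))
                if _, err := client.CoreV1().Nodes().Patch(
                    context.TODO(), node.Name, types.StrategicMergePatchType,
                    patch, metav1.PatchOptions{}); err != nil {
                    fmt.Printf("failed to annotate node %s: %v\n", node.Name, err)
                }
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        factory.WaitForCacheSync(stop)
        select {} // keep reacting to new nodes until the process is killed
    }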


I agree this is exactly what these kinds of small custom operators should be, but I also see the nuisance of it, much like awkward database triggers: knowledge of how the system actually functions gets lost into "I dunno why it works, it's just magic."


Seemingly endlessly layered abstraction is also why phones and computers get faster and faster yet nothing seems to actually run better. Nobody wants to write native software anymore because there are too many variations of hardware and operating systems but everyone wants their apps to run on everything. Thus, we are stuck in abstraction hell.

I'd argue the exact opposite has happened. We have made very little progress because everything is continually abstracted out to the least common denominator, leaving accessibility high but features low. Very few actual groundbreaking leaps have been accomplished with all of this abstraction; we've just made it easier to put dumb software on more devices.


I encourage you to actually work on a twenty year old piece of technology. It’s easy to forget that modern computers are doing a lot more. Sure, there’s waste. But the expectations from software these days are exponentially greater than what we used to ship.


Winamp was great and there's nothing better now. Office 2003 was feature complete IMO.


I can stream almost any song I can conceive of in a matter of seconds from my phone. In doing so I can play it wirelessly across every speaker in my house simultaneously as well as on my TV. The lyrics will be displayed on that TV alongside animated cover art and I can control playback with my remote. I will have other similar music suggested to me automatically when that song is finished playing. Guests at my home can add music to the queue from their phones without any additional setup or intervention on my part.

You don’t have to want to do any of that yourself, but if you can’t concede that that sort of experience would have been utterly inconceivable in the days of Winamp—while being boringly commonplace today—I’m not sure we can have a productive discussion.


VLC has been able to stream to/from devices since ~2000. It has captioning support, but I don't know if that applies to music. I guess Winamp came out in ~1998, but it added support for Shoutcast in ~1999. The size of the computer needed to run Winamp/stream Shoutcast has shrunk, I suppose. But anyway, it was certainly conceivable, as it was already a commercial product 25 years ago.


I've been a software developer for 25 years, so I'm already there. I really disagree with this though. When I look back at software I was developing 20 years ago in 2005 it is not particularly different than now. It's still client-server, based on web protocols, and uses primarily the same desktop UX. Mobile UX wasn't a thing yet but my whole point was that if we built more apps that were directly native with fewer abstraction layers they would perform better and be able to do more.

Can you give an example of an app that does exponentially more than the same or equivalent app from 2005?


Another (huge, in fact) reason is that we ask them to do a lot more.

Just the framebuffer for one of my displays uses more memory than a computer that was very usable for all sorts of tasks back in 1998. Rendering UI to it also takes a lot more resources because of that.


> Nobody wants to write native software anymore because there are too many variations of hardware and operating systems but everyone wants their apps to run on everything.

So far we have: Android and i(pad)OS (mobile); macOS, Windows, *nix? (desktop); and the web. That's not a lot of platforms. My theory is that no one wants to properly architect their software anymore. It's just too easy to build a ball of mud on top of Electron and have a 5GB node_modules folder full of dependencies of unknown provenance.


This is just totally wrong. Full stop. Today's devices are unimaginably orders of magnitude faster than the computers of old. To suggest otherwise is absolutely absurd, either pure ignorance or a denial of reality. I'm quite blown away that people so confidently state something that's so easily demonstrated as incorrect.


Then all of that data is turned into HTTP requests which turn into TCP packets distributed over IP over wifi over Ethernet over PPPoE over DSL and probably turned into light sent over fiber optics at various stages... :-)


The problem isn't abstractions. The problem is leaky abstractions that make it harder to reason about a system and add lots of hidden state and configuration of that state.

What could have been a static binary running a system service has become a Frankenstein mess of opaque nested environments operated by action at a distance.


CRDs and their controllers are perhaps the reason Kubernetes is as ubiquitous as it is today - the ability to extend clusters effortlessly is amazing and opens up the door for so many powerful capabilities.

> I don't want to write a Kubernetes controller. I don't even know why it should exist.

You can take a look at Crossplane for a good example of the capabilities that controllers allow for. They're usually encapsulated in Kubernetes add-ons and plugins, so, much as you might never have to write an operating system driver yourself, you might never have to write a Kubernetes controller yourself.


One of the first really pleasant surprises I got while learning was that the kubectl command itself was extended (along with tab completion) by CRDs. So install the External Secrets Operator and you get tab completion on those resources and actions.


> Why do devops keep piling abstractions on top of abstractions?

Mostly, because developers keep trying to replace sysadmins with higher levels of abstraction. Then when they realise that they require (some new word for) sysadmins still, they pile on more abstractions again and claim they don't need them.

The abstraction du jour at the moment is not Kubernetes, it's FaaS. At some point managing those FaaS will require operators again, and another abstraction on top of FaaS will exist, some kind of FaaS orchestrator, and the cycle will continue.


I think it's clear that Kubernetes et al aren't trying to replace sysadmins. They're trying to massively increase the number of machines per sysadmin.


Fair point. Kubernetes seems to have been designed as a system to abstract across large physical machines, but instead we're using it in "right-sized" VM environments, which is solving the exact same set of problems in a different way.

Similar to how we developed a language that could use many cores very well, and compiles to a single binary, but we use that language almost exclusively in environments that scale by running multiple instances of the same executable on the same machine, and package/distribute that executable in a complicated tarball/zipping process.

I wonder if there's a name for this, solving the same problem twice but combining the solutions in a way that renders the benefits moot.


There are no sysadmins though in the new model. There are teams of engineers who code Go, do kubernetes stuff and go on call. They may occasionally Google some sysadmin knowledge. They replace sysadmins like drivers replace the person in front of the Model T waving a flag. Or pilots replace navigators.


I don’t want Kubernetes period. Best decision we’ve made at work is to migrate away from k8s and onto AWS ECS. I just want to deploy containers! DevOps went from something you did when standing up or deploying an application, to an industry-wide jobs program. It’s the TSA of the software world.


If I may ask, just to educate myself:

where do you keep the ECS service/task specs and how do you mutate them across your stacks?

How long does it take to stand up/decomm a new instance of your software stack?

How do you handle application lifecycle concerns like database backup/restore, migrations/upgrades?

How have you supported developer stories like "I want to test a commit against our infrastructure without interfering with other development"?

I recognize these can all be solved for ECS but I'm curious about the details and how it's going.

I have found Kubernetes most useful when maintaining lots of isolated tenants within limited (cheap) infrastructure, especially when the velocity of software and deployments is high and there are many stakeholders (customer needs their demo!)


Not sure if this is a rhetorical question but I'll bite :-)

> where do you keep the ECS service/task specs and how do you mutate them across your stacks?

They can be defined in CloudFormation; then use CloudFormation Git sync, a custom pipeline (i.e. GitHub Actions), or CodePipeline to deploy them from GitHub.

You can also use CodeDeploy to deploy from Git, or even an AWS-supplied GitHub Action for deploying an ECS task.

> How long does it take to stand up/decomm a new instance of your software stack?

It really depends on many factors; ECS isn't very fast (I think that's on purpose, to prevent thundering-herd problems).

> How do you handle application lifecycle concerns like database backup/restore, migrations/upgrades?

What I learned from AWS is that ECS is a compute service and you shouldn't persist data in ECS.

Run your database in RDS and use the supplied backup/restore functionality

> How have you supported developer stories like "I want to test a commit against our infrastructure without interfering with other development"?

If it's all defined in CloudFormation you can duplicate the whole infrastructure and test your commit there.



Yeah, that doesn't really answer the question at all... Do you just have a pile of CloudFormation on your desktop? Point and click? Terraform? And then none of the actual questions, like

> How do you handle application lifecycle concerns like database backup/restore, migrations/upgrades?

were even touched.


There is no difference between cloudformation, clicking, terraform, boto, awscli, pulumi, or whatever else. The platform at the other end of those tools is still ECS.

Backing up databases isn't the job of the container-running platform (ECS), especially not in AWS-world where databases are managed with RDS.

The rest of the questions were "how do I run containers on ECS?" in various forms. The answer to all of them is "by asking ECS to run containers in various forms."


ECS is very, very similar to Kubernetes and duplicates pretty much all of the functionality, except that AWS names and manages each piece as a separate service/offering.

ECS+Route53+ALB/ELB+EFS+Parameter Store+Secrets Manager+CloudWatch (Metrics, Logs, Events)+VPC+IAM/STS and you're pretty close in functionality.


I'm so confused about the jobs program thing. I'm an infra engineer who has had the title devops for parts of my career. I feel like I've always been desperately needed by teams of software devs that don't want to concern themselves with the gritty reality of actually running software in production. The job kinda sucks but for some reason jibes with my brain. I take a huge amount of work and responsibility off the plates of my devs and my work scales well to multiple teams and multiple products.

I've never seen an infra/devops/platform team not swamped with work and just spinning their tires on random unnecessary projects. We're more expensive on average than devs, harder to hire, and two degrees separated from revenue. We're not a typically overstaffed role.


It is always this holier-than-thou attitude of software engineers towards DevOps that is annoying. Especially if it comes from ignorance.

These days, DevOps is often done by former software engineers rather than "old-fashioned" sysadmins.

Just because you are ignorant of how to use AKS efficiently doesn't mean your alternative is better.


> These days, DevOps is often done by former software engineers rather than "old-fashioned" sysadmins.

Yes, and the world is a poorer place for it. Google’s SRE model works in part because they have _both_ Ops and SWE backgrounds.

The thing about traditional Ops is, while it may not scale to Google levels, it does scale quite well to the level most companies need, _and_ along the way, it forces people to learn how computers and systems work to a modicum of depth. If you’re having to ssh into a box to see why a process is dying, you’re going to learn something about that process, systemd, etc. If you drag the dev along with you to fix it, now two people have learned across areas.

If everything is in a container, and there’s an orchestrator silently replacing dying pods, that learning no longer needs to happen.

To be clear, I _love_ K8s. I run it at home, and have used it professionally at multiple jobs. What I don’t like is how it (and every other abstraction) has made it such that “infra” people haven’t the slightest clue how infra actually operates, and if you sat them down in front of an empty, physical server, they’d have no idea how to bootstrap Linux on it.


That's a fair point, and one I have also observed.


Yeah, DevOps was a culture, not a job title, and then we let software engineers in who just want to throw something into prod and go home on Friday night. So they decided it was a task, and the lowest-importance thing possible, but simultaneously the devops/SRE/prod eng teams needed to be perfect, because it's prod.

It is a weird dichotomy I have seen, and it is getting worse. We let teams have access to Argo manifests and Helm charts, and even let them do custom in-repo charts.

Not one team in the last year has actually gone and looked at the k8s docs to figure out how to do basic shit; they just dump questions into channels and soak up time from people explaining the basics of the system their software runs on.


Nah, I'm delighted if someone wants to do it.

Not as delighted by the fact that many companies seem to want developers to do devops as well, like when the code is compiling or something.

It's not being taken seriously.


Why don't you just deploy to Cloud Run on GCP and call it a day?


That's great if that works for you, and for a lot of people and teams. You have just shifted the complexity of networking, storage, firewalling, IP management, and L7 proxying to AWS, but hey, you do have click-ops there.

> DevOps went from something you did when standing up or deploying an application, to an industry-wide jobs program. It’s the TSA of the software world.

DevOps was never a job title or a process; it was a way of working that went beyond yeeting to prod and ignoring it.

From that one line, you never did devops - you did dev, with some deployment tools (that someone else wrote?)


You can have Click-Ops on Kubernetes too! Everything has a schema so it's possible to build a nice UI on top of it (with some effort).

My current project is basically this, except it edits your git-ops config repository, so you can click-ops while you git-ops.


You mean ArgoCD and Rancher? Both ready to do click ops!


I mean you can edit a big YAML file inside ArgoCD, but what I'm building is an actual web form (e.g. `spec.rules[].http.paths[].pathType` is a dropdown of `Prefix`, `ImplementationSpecific`, `Exact`), with all your documentation inline as you're editing.

People have tried this before but usually the UI version is not fully complete so you have to drop to YAML. Now that the spec is good enough it's possible to build a complete UI for this.


Yup, and it has the advantage of having an easily backed-up state store to represent the actions of the GUI.

I always liked the Octant UI autogeneration for CRDs and the way it just parsed things correctly from the beginning; if they had an edit mode, that would be perfect.


Is there anything in particular you like about what Octant does? I don't see anything that actually looks at the object spec, just the status fields / etc.


ArgoCD has a "New App" button that opens an actual web form you fill out.


Sounds great. An interactive Spec builder, if I understand correctly.


Anywhere we can see your project?


K8s really isn't about piling up abstractions. The orchestrator sits beside containers (which can be run on bare metal, btw) and handles tasks which already need to be done. Orchestration of any system is always necessary. You can do it with K8s (or a related platform), or you can cobble together custom shell scripts, or even perform the tasks manually.

One of these gives you a way to democratize the knowledge and enable self-service across your workforce. The others result in tribal knowledge being split into silos all across an organization. If you're just running a couple of web servers and rarely have to make changes, maybe the manual way is OK for you. For organizations with many different systems that have complex interactions with each other, the time it takes to get a change through the system and the number of potential errors that manual tasks introduce make that approach infeasible.

Controllers are just one way to bring some level of sanity to all of the different tasks which might be required to maintain any given system. Maybe you don't need your own custom controllers, as there are a huge number which have already been created to solve the most common requirements. Knowing how to write them allows one to codify business rules, reduce human error, and get more certainty over the behavior of complex systems.


Because, like it or not, that's how we build big things.

A bridge connects two otherwise separate geographical regions. To a government it's an abstract piece of infrastructure that will have economic and social impacts. To users it's a convenience that will change the way they plan journeys. To traffic planners it's another edge in a graph. To cartographers it's another line on a map. To road builders it's another surface to tarmac. To geologists it sits on a (hopefully) stable foundation that isn't expected to move or subside for at least a few hundred years. To cement people it's made of a particular blend that's the product of a specialised industry and expected to last for a hundred years. To metal workers it's reinforced with steel with particular strengths and weaknesses.

Nobody understands it all. Abstraction is not the source of complexity, abstraction is how we deal with complexity. The complexity is just there whether you want it or not. You think it's easy because you're the guy walking across the bridge.


Current example from work: an extreme single-tenant architecture, deployed for a large number N of tenants, which need both logical and physical isolation; the cost of the cloud provider's managed databases is considered Too Expensive to create one per tenant, so an open-source Kubernetes controller for the database is used instead.

Not all systems are small-N modern multi-tenant architectures deployed at small scale.


This is the point. Right tool for the job. Kubernetes was incubated at Google and designed for deployments at scale. Lots of teams are happily using it. But it is definitely not for startups or solo devs, unless you are an expert user already.


You have some computing resource that needs to be provisioned according to the specifications laid out in a Kubernetes manifest (YAML). Something needs to go out and actually "physically" create or retrieve that resource, with all the side-effects that involves, bring its state into accordance with whatever the manifest specifies, and continuously make adjustments when the resource's state diverges from the manifest throughout the lifetime of the resource.

One example is a controller responsible for fulfilling ACME challenges to obtain x509 certificates. Something needs to actually publish the challenge responses somewhere on the internet, retrieve the x509 certificate, and then persist it onto the cluster so that it may be used by other applications. Something needs to handle certificate renewal on an ongoing basis. That something is the controller.


> I don't want to write a Kubernetes controller. I don't even know why it should exist.

I don't want to write one either. Given the choice, I won't even touch one.

I think I know why they exist, though. Kubernetes is a system of actors (resources) and events (state transitions). If you want to derive new state from existing state, and to maintain that new state, then you need something that observes "lower" state transitions and takes action on the system to achieve its desired "higher" state.

Whether we invent terminology for these things or not, controllers exist in all such systems.
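
To make the pattern concrete, here's a toy sketch with controller-runtime: it observes Namespaces (the "lower" state) and maintains a derived label on them (the "higher" state). The label key is made up purely for illustration:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
        "sigs.k8s.io/controller-runtime/pkg/log"
    )

    // NamespaceLabeler derives new state (a label) from existing state (the
    // Namespace itself) and keeps it converged whenever the Namespace changes.
    type NamespaceLabeler struct {
        client.Client
    }

    func (r *NamespaceLabeler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        var ns corev1.Namespace
        if err := r.Get(ctx, req.NamespacedName, &ns); err != nil {
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }
        if ns.Labels["example.com/managed"] == "true" { // hypothetical label key
            return ctrl.Result{}, nil // already in the desired state
        }
        if ns.Labels == nil {
            ns.Labels = map[string]string{}
        }
        ns.Labels["example.com/managed"] = "true"
        log.FromContext(ctx).Info("converging namespace", "name", ns.Name)
        return ctrl.Result{}, r.Update(ctx, &ns)
    }

    func main() {
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            panic(err)
        }
        err = ctrl.NewControllerManagedBy(mgr).
            For(&corev1.Namespace{}). // observe "lower" state transitions
            Complete(&NamespaceLabeler{Client: mgr.GetClient()})
        if err != nil {
            panic(err)
        }
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            panic(err)
        }
    }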


Yeah, for a lot of companies, this is way overkill. That's fine, don't use it! In the places I have seen that use it when it is actually needed, controllers make a lot of work for teams disappear. They exist because that's how K8s itself works: how it translates from a Deployment -> ReplicaSet -> Pod -> container.

Abstractions are useful to avoid hundreds of thousands of lines of boilerplate code. Same reason we have Terraform providers, Ansible modules, and, well, the same concepts in programming...


How do you run multiple copies of an application? How do you start a new copy when one fails? How do you deploy changes to the system? That is the orchestrator.

What do you do when the site gets really popular and needs new copies? What happens when you fill the VMs? If you want to automate it, that is a controller.

Also, if you are running on-premises, you don't need VMs; you can use the whole machine for Kubernetes, with containers for isolation. If you need more isolation, you can run VM-based containers; being able to switch is an advantage of Kubernetes.
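
To make the first part concrete: "run three copies and replace any that die" is a single Deployment object. A sketch creating one through client-go (the names and image are placeholders; most people would write the equivalent YAML):

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // Load credentials from the default kubeconfig location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // One object answers "run multiple copies" and "restart failed ones":
        // the Deployment/ReplicaSet controllers keep 3 pods running, and changing
        // the pod template triggers a rolling update ("deploy changes").
        dep := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "web"}, // placeholder name
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(3),
                Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "web"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{Name: "web", Image: "nginx:1.27"}}, // placeholder image
                    },
                },
            },
        }
        if _, err := client.AppsV1().Deployments("default").Create(
            context.TODO(), dep, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }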


Because most places never needed kubernetes but used it to put their technical debt on a credit line. So what do you do when they try to collect? Well you just take out another loan to pay off the first one.


Because of the "works on my machine" meme, plus the "cattle, not pets" lore.

Why do this for relational databases? Why do I need to write a pg extension and SQL and an ORM when I can just write to disk?


If you're implementing a distributed system that needs to manage many custom resources (of whatever kind, not Kubernetes-specific), implementing a Kubernetes controller for it can save a great deal of development time and give you a better system in the end, with standard built-in observability, manageability, deployment automation, and a whole lot else.

It's certainly true that some use of Kubernetes is overkill. But if you actually need what it offers, it can be a game-changer. That's a big reason why it caught on so fast in big enterprises.

Don't fall into the trap of thinking that because you don't understand the need for something, that the need doesn't exist.


I'm always surprised when people say Kubernetes is overkill in the context of distributed systems. You'll end up running all the same stuff yourself but have to manage the integration yourself as well (traffic/L7, config, storage, app instances, network/L1-4)


Right, the key is "distributed systems". The overkill tends to come in when someone decides to use Kubernetes to run e.g. a single web application and database - which is not particularly "distributed" on the back end - or something that they could run with say Docker Compose on a single machine.

A chart of effort vs. complexity would show this nicely. Kubernetes involves a baseline level of effort that's higher than simpler alternatives, which is the "overkill" part of the chart. But once complexity reaches a certain level, the effort involved in alternatives grows higher and faster.

> (traffic/L7, config, storage, app instances, network/L1-4)

Cloud and PaaS providers can do a lot of this for you though. Of course some of the PaaS providers are built on Kubernetes, but the point is they're the ones expending that effort for you.


Why do developers keep piling abstractions on top of abstractions?

There is machine code. Then the assembler. Then the compiler that targets the JVM. Then the programming language. Then classes and objects. And then modules. And then design patterns. And then architectural patterns.

Why all of this should exist?

...

Well, because each level is intended to provide something the previous levels cannot provide.

My last "operator" (not really an operator, but conceptually similar) is Airflow. Because Kubernetes doesn't have a way to chain job executions, as in "run this job after these two jobs finished".


Job security


> There's the machine. Then the VM. Then the container. Then the orchestrator

If you're running your orchestrator on top of VMs, you're doing it wrong (or you're at a very small scale or just getting started).



