> There's not really any other (well supported) alternative if you want that

You don't think AWS autoscale groups give you both of those things?



I think you comically underestimate what Kubernetes provides.

Autoscaling groups give you instances, but Kubernetes automatically and transparently distributes all your running services, jobs, and other workloads across all those instances.

Amongst a laundry list of other things.
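
To make that concrete, here's a minimal sketch with the official kubernetes Python client (the deployment name, image, and replica count are made up): you declare what should run, and the scheduler decides where, across whatever nodes your autoscaler has provisioned.

    # Minimal sketch using the official `kubernetes` Python client.
    # The name, image, and replica count are hypothetical; the point
    # is that you declare *what* runs, and the k8s scheduler decides
    # *where*, across all available nodes.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=6,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)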


I think you’re comically misunderstanding what 95% of companies are actually doing with Kubernetes.


A big part of what kubernetes provides is a standard interface. When your infra guy gets hit by a bus, someone else (like a contractor) can plug in blindly and at least grasp the system in a day or two.


AWS autoscaling does not take your application logic into account, which means that aggressive downscaling can, at worst, cause your applications to fail.

I'll give a specific example with Apache Spark: AWS provides a managed cluster via EMR. You can configure your task nodes (i.e., the instances that run the bulk of the jobs you submit to Spark) to be autoscaled. If those jobs fetch data from managed databases, you might have RDS configured with autoscaling read replicas to support higher-volume queries.

What I've frequently seen happen: tasks fail because the task node instances were downscaled near the end of the job, since they were no longer consuming enough resources to stay up, even though the tasks themselves hadn't finished. Or tasks fail because database connections were suddenly cut off, since RDS read replicas were no longer transmitting enough data to stay up.

The workaround is to have a fixed number of instances up, and pay the costs you were trying to avoid in the first place.

Or you could have an autoscaling mechanism that is aware of your application state, which is what k8s enables.
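
As a rough illustration of "application-aware" (a sketch, not anyone's production code): with the Kubernetes cluster autoscaler, a worker can mark its own pod as unsafe to evict while it still holds work. The safe-to-evict annotation is the real cluster-autoscaler convention; the pod and namespace names are hypothetical.

    # Sketch: a worker pod tells the cluster autoscaler not to remove
    # its node while it still holds in-flight work. The annotation is
    # the cluster-autoscaler convention; names are hypothetical.
    from kubernetes import client, config

    config.load_incluster_config()  # running inside the cluster
    core = client.CoreV1Api()

    def set_evictable(pod, namespace, evictable):
        patch = {"metadata": {"annotations": {
            "cluster-autoscaler.kubernetes.io/safe-to-evict":
                "true" if evictable else "false"
        }}}
        core.patch_namespaced_pod(name=pod, namespace=namespace, body=patch)

    set_evictable("spark-task-abc123", "jobs", False)  # task starts
    # ... run the task ...
    set_evictable("spark-task-abc123", "jobs", True)   # task done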


> since RDS read replicas were no longer transmitting enough data to stay up.

As an infra guy, I’ve seen similar things happen multiple times. This would be a non-problem if developers handled the connection-loss case: reconnect with retries and so on.

But most developers just don’t bother.

So we’re often building elastic infrastructure that is consumed by people who write code as if we were still in the late ’90s, with single-instance DBs expected to be always available.
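
For what it’s worth, the client-side fix is not much code. A minimal sketch, assuming Postgres via psycopg2 (the DSN and query are placeholders):

    # Reconnect-with-retry around a query against a read replica that
    # may disappear mid-connection. psycopg2 is assumed; the DSN and
    # query are placeholders.
    import time
    import psycopg2
    from psycopg2 import InterfaceError, OperationalError

    DSN = "host=replica.example.internal dbname=app user=reader"

    def query_with_retry(sql, params=None, attempts=5, backoff=1.0):
        last_err = None
        for attempt in range(attempts):
            conn = None
            try:
                conn = psycopg2.connect(DSN)
                with conn.cursor() as cur:
                    cur.execute(sql, params)
                    return cur.fetchall()
            except (InterfaceError, OperationalError) as err:
                last_err = err
                time.sleep(backoff * 2 ** attempt)  # exponential backoff
            finally:
                if conn is not None:
                    conn.close()
        raise last_err

    rows = query_with_retry("SELECT id FROM orders WHERE status = %s", ("open",))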


ASGs can do both of those things. It’s a 5% use case, so it takes a little more work, but not much.


Can you elaborate on that "little more work", given that resizing on demand isn't sufficient for this use-case, and predictive scaling is also out of the question?


Nothing wrong with ASGs, but they're not really comparable to k8s. k8s isn't simply "scaling"; it's a higher level of abstraction with granular control over, and understanding of, your application instances, which lets it efficiently spread workloads across all your hardware automatically while also managing service discovery, routing, load balancing, rollbacks, and countless other things. Comparing it to an ASG suggests you may not be that familiar with k8s.

I think it's fair to argue that k8s is overkill for many or even most organizations, but ASG is not even close to an alternative.
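
To pick just one item off that list: in-cluster service discovery plus load balancing is literally a stable DNS name, so client code needs zero endpoint bookkeeping. A tiny sketch (the Service and namespace names are made up):

    # Inside the cluster, a Service is a stable DNS name backed by a
    # virtual IP; kube-proxy spreads connections across healthy pods.
    # "orders" and "shop" are hypothetical Service/namespace names.
    import requests

    resp = requests.get("http://orders.shop.svc.cluster.local:8080/health")
    resp.raise_for_status()
    print(resp.json())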


It seems that you don't understand ASGs. They do all the things that you listed.

K8s is essential when working with a fleet of bare-metal servers. It's an unneeded abstraction if you're just going to deploy on AWS or similar.
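
For instance, the ASG-side answer to the scale-in problem above does exist: an instance can protect itself from scale-in while it holds work. A rough sketch with boto3 (the ASG name and instance id are placeholders):

    # Sketch: a worker protects its own instance from ASG scale-in
    # while it holds in-flight work. set_instance_protection is the
    # real boto3 AutoScaling call; the names here are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling")
    INSTANCE_ID = "i-0abc123def456"  # in practice, read from instance metadata
    ASG_NAME = "spark-task-nodes"    # hypothetical ASG name

    def protect(protected):
        autoscaling.set_instance_protection(
            AutoScalingGroupName=ASG_NAME,
            InstanceIds=[INSTANCE_ID],
            ProtectedFromScaleIn=protected,
        )

    protect(True)   # work starts
    # ... do the work ...
    protect(False)  # work done, instance may be reclaimed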



