K8s is great - if you are solving infrastructure at a certain scale. That scale being a bank, an insurance company, or a mature digital company. If you're not in that class, then it's largely overkill/overcomplex IMO when you can simply use Terraform plus a managed Docker host like ECS and attach cloud-native managed services.
Again, the cross-cloud portability is a non-starter unless you're really at scale.
What k8s really scales is developer/operator power. Yes, it is complex, but pretty much all of it is necessary complexity. At a small enough scale, with enough time, you can dig a hole with your fingers - but a proper tool will do wonders for how much digging you can do. And a lot of that complexity is present even when you do everything the "old" way; it's just invisible toil.
And a lot of the calculus changes when "managed services" stop being cost effective or aren't an option at all, or you just want to be able to migrate elsewhere (that can happen at low scale too, if you're price conscious).
We have a mature TF module library and can roll out complex, well-configured infra in a matter of hours, reliably. That said, it's platform-specific.
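To give a sense of the shape of that (everything here - module name, source, inputs - is invented purely for illustration, not our actual library):

    module "api" {
      # Hypothetical in-house module; the real inputs depend on the library.
      source = "git::https://git.example.com/platform/tf-modules.git//ecs-service?ref=v3.4.0"

      name          = "api"
      image         = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/api:2024-05-01"
      cpu           = 512
      memory        = 1024
      desired_count = 3

      # Wiring into shared infra handled by sibling modules.
      vpc_id          = module.network.vpc_id
      private_subnets = module.network.private_subnet_ids
      alb_listener    = module.network.https_listener_arn
    }

One plan/apply and the service is up behind the ALB - but every module underneath is written against a single provider's resources, which is the platform-specific part.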
Sure, managed service costs are certainly a thing, but to my point, they only really start to become an issue at significant scale, assuming you're well configured.
The cost metrics behind "it's cheaper to use a managed service than to pay for an extra engineer to specialize in infrastructure" aren't universal. In fact, I usually have to work from the opposite direction, where hiring a senior Ops specialist who can wrangle everything from racking the physical hardware to network-booting a k8s cluster on-premises can be cheaper than Heroku/AWS/etc.
> you can simply use Terraform plus managed Docker host like ECS and attach cloud-native managed services
That's not actually simple at all, and you would need to build a lot of the other stuff that Kubernetes gives you for free.
Kubernetes gives you an industry standard platform with first-class cloud vendor support. If you roll your own solution with ECS, what you are really doing is making a crappy in-house Kubernetes.
I'd disagree - my team migrated from running containers on VMs (managed via Ansible) to ECS + Fargate (managed by Terraform and a simple bash script).
It wasn't a simple transition by any means, but one person wrapped it up in 4 weeks - now we have zero-downtime deployments, scaling up/down in a matter of seconds, and ECS babysits the containers.
Previously we had to deploy a lot of monitoring on each VM to ensure that containers were running, so we'd get alerted when one of the applications crashed and didn't restart because the Docker daemon didn't handle it, etc.
Now we only run stateless services in a private VPC subnet, load balancing is delegated to an ALB, and we don't need service discovery, meshes, etc. Configuration is declarative, but written in the much friendlier HCL (I'm OK with YAML, but only to a degree).
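Roughly the shape of it (names invented; the cluster, ALB/target group, IAM roles, and security groups are omitted, so this is a sketch rather than a working config):

    resource "aws_ecs_task_definition" "app" {
      family                   = "app"
      requires_compatibilities = ["FARGATE"]
      network_mode             = "awsvpc"
      cpu                      = 256
      memory                   = 512

      container_definitions = jsonencode([{
        name         = "app"
        image        = "registry.example.com/app:latest"
        essential    = true
        portMappings = [{ containerPort = 8080 }]
      }])
    }

    resource "aws_ecs_service" "app" {
      name            = "app"
      cluster         = aws_ecs_cluster.main.id
      task_definition = aws_ecs_task_definition.app.arn
      desired_count   = 2
      launch_type     = "FARGATE"

      # Rolling deploys: old tasks keep serving until new ones pass the ALB
      # health checks - that's where the zero-downtime behaviour comes from,
      # and ECS restarts anything that dies, replacing the per-VM monitoring.
      deployment_minimum_healthy_percent = 100
      deployment_maximum_percent         = 200

      network_configuration {
        subnets         = var.private_subnet_ids
        security_groups = [aws_security_group.app.id]
      }

      load_balancer {
        target_group_arn = aws_lb_target_group.app.arn
        container_name   = "app"
        container_port   = 8080
      }
    }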
ECS just works for us.
Just like K8s might work for a bigger team. But I wouldn't adopt it at our shop, simply because of all the complexity and huge surface area.
k8s has a bunch of other benefits besides just scaling, and you can run a single-node cluster with the same uptime characteristics as your proposed setup and get all of those benefits.
And we only have to learn one complex system, and avoid learning each cloud - one of which decided that product names with little relation to what they do were a good idea.
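To illustrate the single-node point: a Deployment spec is identical whether the cluster has one node or five hundred, and the kubelet plus a liveness probe replace the hand-rolled per-VM restart monitoring described upthread. A sketch via the Terraform kubernetes provider, just to keep it in the same HCL as the rest of the thread (names and paths invented):

    resource "kubernetes_deployment" "app" {
      metadata {
        name = "app"
      }

      spec {
        # Same spec on a single-node cluster or a big one; only this changes.
        replicas = 1

        selector {
          match_labels = { app = "app" }
        }

        template {
          metadata {
            labels = { app = "app" }
          }

          spec {
            container {
              name  = "app"
              image = "registry.example.com/app:latest"

              # The kubelet restarts crashed containers; the probe catches hangs.
              liveness_probe {
                http_get {
                  path = "/healthz"
                  port = 8080
                }
              }
            }
          }
        }
      }
    }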