Because everyone chases the newest, shiniest thing in tech, and it's neither cool nor fun to build boring old stuff in C and then copy one binary and maybe a config file to the server.
Even if one does have a single binary and config file that one can just copy to a server and run, there's more to non-trivial deployments than that. For example, how do you do a zero-downtime deployment where you copy over a new binary, start it up, switch new requests over to the new version, but let the old one keep running until either it finishes handling all requests that it already received or a timeout is reached? One reason why Kubernetes is popular is that it provides a standard, cross-vendor solution to this and other problems.
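To make it concrete, here's a rough sketch of how Kubernetes expresses exactly that (names, image, port, and the `/healthz` path are all placeholders, not anything your app would have by default):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never kill an old pod before a new one is ready
      maxSurge: 1              # bring up one extra pod at a time
  template:
    metadata:
      labels:
        app: myapp
    spec:
      terminationGracePeriodSeconds: 60   # old pod keeps draining in-flight requests,
                                          # then gets killed when the timeout is reached
      containers:
      - name: myapp
        image: registry.example.com/myapp:v2   # placeholder image
        readinessProbe:        # new requests only switch over once this passes
          httpGet:
            path: /healthz
            port: 8080
```

That's the whole "copy new binary, switch traffic, drain the old one or time out" dance, declared once instead of scripted per vendor.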
Most web applications don't need any of that. Also, I didn't say k8s was useless, just that it's the new thing everyone wants (that they probably don't need).
Then you need to add storage management, log management, monitoring integration, healthchecks, maybe multiple environments because a UAT environment is a good thing to have, etc. etc.
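A hedged sketch of how a couple of those map to standard objects (all names are placeholders): a PersistentVolumeClaim for storage, a liveness probe for the healthcheck, stdout for logs, and the multiple-environment case is usually just separate namespaces (e.g. `uat` and `prod`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data             # storage management
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:v2   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/myapp
    livenessProbe:             # healthcheck: restart the container if this fails
      httpGet:
        path: /healthz
        port: 8080
    # logs: write to stdout/stderr and let the cluster's log collector pick them up
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myapp-data
```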
That's more "basic farm vehicle / lorry" than Ferrari.
You always have those concerns; they're just implemented differently. A customer hurling abuse at you over the phone (or worse, in person) is a form of healthcheck and monitoring, albeit a worse one than the still-common "have someone log in to the server every day and check if it's alive".
So is frantically logging into the server to manually truncate log files that filled your one and only disk volume and caused the abuse above.
So is hearing "we're losing customers because of how slow it is" while having no idea why it's slow, because it runs fast when the dev checks on their laptop.
All of the above are based on actual real-world events, sometimes involving large corporations. In fact, the large corps seem to have the most issues with manual work, because they can afford throwing cannon fodder ^W^W "experienced engineers" at the problems.
At some point it becomes a question of what is good use of your time. I disagree heavily with people claiming that running kubernetes is somehow orders of magnitude more complex than anything else (especially with k3s and using non-etcd backing stores). The complexity is necessary complexity, which you can tackle in various ways including YOLO.
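For instance, the k3s route with a non-etcd backing store is roughly a one-liner to stand up; a sketch, assuming a Postgres instance you already run (the connection string is a placeholder):

```shell
# single-node k3s server using an external SQL datastore instead of etcd
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="postgres://user:pass@db.example.com:5432/k3s"
```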
Sometimes the YOLO approach bites at the worst moment, though, and time spent on bespoke scripts or on figuring out configuration drift is a cost that shows up as you tackle said complexity.
Personally, the reason I went with kubernetes in the first production deployment I did with it, after being a vocal anti-Docker person at work, was... cost efficiency. Both in terms of my time (even though we had to spend a significant amount of time migrating, as it was a lift-and-shift of existing software), and in terms of compute costs: thanks to heavily loaded nodes, our worst compute bill never rose above 20% of the previous "condition normal". I don't think we ever really had more than 10 servers on purpose. Using k8s paid for itself.