Yes, until you've scaled enough that it isn't. If you're deploying a dev or staging server, or even prod for your first few thousand users, you can get by with a handful of servers. But a lot of what works well on one or three servers starts working less well on a dozen, and it's around that point that the up-front difficulty of k8s starts to pay off with lower long-term difficulty.
Whatever crossover point might exist for Kubernetes, it's not at a dozen servers; at the low end it's maybe 50. And the fair comparison isn't against "yolo scp my php to /var/www", but against any of the disciplined orchestration/containerization tools other than Kubernetes.
I ran ~40 servers across 3 DCs using Salt and systemd, with about a third of my time going to ops.
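For anyone unfamiliar with that pattern: it can be as small as one Salt state per service, pushing a binary and a unit file and letting systemd handle restarts. A minimal sketch, with hypothetical names (`myapp`, the file paths) rather than my actual setup:

```yaml
# /srv/salt/myapp/init.sls -- deploy a binary, its systemd unit,
# and keep the service running; restart it when either file changes.
myapp-binary:
  file.managed:
    - name: /usr/local/bin/myapp
    - source: salt://myapp/files/myapp
    - mode: '0755'

myapp-unit:
  file.managed:
    - name: /etc/systemd/system/myapp.service
    - source: salt://myapp/files/myapp.service

myapp-service:
  service.running:
    - name: myapp
    - enable: True
    - watch:
      - file: myapp-binary
      - file: myapp-unit
```

One `salt '*' state.apply myapp` rolls it out everywhere; no control plane to babysit.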
At the next company, we ran about 80 servers in one DC with one dedicated ops/infra person embedded in the dev team, plus "smart hands" contracts at the DC. Today that runs largely on Kubernetes: it's now about 150 servers and takes basically two full-time ops people completely disconnected from dev, plus some unspecified but large share of a ~10 person "platform team", with a constant trickle of unsatisfying issues around storage, load balancing, operator compatibility, etc. Our day-to-day dev workflow has not gotten notably simpler.