I don't think this is accurate, which plays into the parent's point, I guess.
Looking at the docs, ingress-nginx configures its upstreams using Endpoints, which are essentially Pod IPs, which skips Kubernetes Service-based round-robin networking altogether.
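(If you actually want it to go through the Service instead of the Pod IPs, there's an annotation for that. A minimal sketch, assuming an Ingress named my-ingress; the annotation itself is real, the names are made up:)

    # Hypothetical example: tell ingress-nginx to proxy to the Service's
    # ClusterIP (and therefore through kube-proxy) instead of the Pod endpoints.
    kubectl annotate ingress my-ingress \
      nginx.ingress.kubernetes.io/service-upstream="true"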
Assuming you use an ingress that does configure Services instead, and assuming you're using a service proxy that actually round-robins, e.g. kube-proxy in IPVS mode (its default iptables mode picks a backend at random), then your explanation would have been correct.
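A quick way to check which mode your kube-proxy is really running in (a sketch, assuming a kubeadm-style cluster where kube-proxy keeps its config in a ConfigMap and its pods carry the usual k8s-app=kube-proxy label):

    # Configured mode; an empty value usually means the iptables default.
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'

    # Or check what the running proxy reported at startup ("Using iptables/ipvs Proxier").
    kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=-1 | grep -i 'proxier'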
For the most part, Kubernetes networking is as hard as regular networking plus loads of automation. Depth in both of those skills rarely comes in the same person, but if you're using a popular and/or supported CNI and not doing things like changing it in-flight, your average dev just needs to learn basic k8s debugging: kubectl get endpoints to check whether their Service selectors are set up correctly, and curl the endpoints to check whether the Pods are actually listening on those ports.
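Something like this, with made-up names (my-service, port 8080) standing in for whatever your app uses:

    # Do the Service's selectors actually match any Pods? Empty ENDPOINTS means no.
    kubectl get endpoints my-service

    # Is the Pod really listening on that port? Curl a Pod IP from inside the cluster.
    kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
      curl -sv http://<pod-ip>:8080/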