
In my experience from long ago, all high-performance networking under Linux was traditionally done in user space with pre-allocated buffer pools (netmap, DPDK, PF_RING...). I haven't followed how much io_uring has been catching up for network-stack usage... Maybe somebody else knows?


While I'm not very knowledgeable about the specifics, there are many paths for networking in Linux now. The usual kernel-based one is still there, but there are also kernel-bypass [0] paths used by very high-performance cards.

Also, InfiniBand can RDMA directly to and from MPI processes, making "remote memory local" and allowing very low latency and high throughput in HPC environments.

I also like this post from Cloudflare [1]. I've read it in full, but the specifics are lost on me since I'm not directly concerned with the networking part of our system.

[0]: https://medium.com/@penberg/on-kernel-bypass-networking-and-...

[1]: https://blog.cloudflare.com/how-to-receive-a-million-packets...


I have a service where io_uring beats epoll (it reads GRE packets from one socket, does some lookups/munging on the inner packet, re-encapsulates them with a different mechanism, and writes them out a different socket). For general usage, io_uring and epoll are pretty comparable performance-wise, IIUC. It wouldn't surprise me if streams (e.g. TCP) end up being faster via io_uring and buffer registration, though.
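To make the decap/re-encap step concrete, here is a minimal sketch of parsing and rebuilding a base RFC 2784 GRE header. The function names (`gre_decap`, `gre_encap`) are my own for illustration, and the socket I/O (whether via epoll or io_uring) is omitted; only the header munging is shown.

```python
import struct

GRE_PROTO_IPV4 = 0x0800  # EtherType for an IPv4 inner packet

def gre_decap(packet: bytes):
    """Parse a base RFC 2784 GRE header; return (proto, inner_payload).

    Handles only the 4-byte base header plus the optional checksum
    field; RFC 2890 key/sequence extensions are rejected."""
    flags_ver, proto = struct.unpack_from("!HH", packet, 0)
    if flags_ver & 0x0007:       # low 3 bits: version must be 0
        raise ValueError("unsupported GRE version")
    if flags_ver & 0x3000:       # K or S bit: RFC 2890 fields present
        raise ValueError("key/sequence fields not supported")
    offset = 4
    if flags_ver & 0x8000:       # C bit: checksum + reserved1 follow
        offset += 4
    return proto, packet[offset:]

def gre_encap(proto: int, payload: bytes) -> bytes:
    """Re-encapsulate a payload under a base GRE header (no options)."""
    return struct.pack("!HH", 0, proto) + payload
```

A round trip (`gre_decap(gre_encap(GRE_PROTO_IPV4, inner))`) returns the original protocol number and payload, which is essentially the shape of the per-packet work between the two sockets.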

Totally tangential - it looks like io_uring is evolving beyond just I/O into an alternate asynchronous syscall interface, which is pretty neat imho.



