
I’ve been curious about the performance gap here – you can open htop on a Pi 4 and see that CPU utilization is relatively low, ~33% out of 400%, something like that, and yet some operations seem to take 5-6x longer than they would on a “normal” computer.

Is it all down to the file system? Is the CPU just in interrupt overload all the time? I wish I had a better understanding of the issue here.



CPU load metrics are averages, typically over a window of a second or so. Many operations take less time than that. If something takes 300ms on a Pi and 50ms on a PC, the Pi is six times slower in observed latency but will still show under 33% CPU utilization when averaged over a second. Some metrics average over even longer periods: the Linux load averages use 1-, 5- and 15-minute windows. Your SSH handshake can take a full 20 seconds with one CPU core pinned at 100% and still produce only a 0.33 one-minute load average. And having three more cores available does nothing, no matter how busy the system is, if the application is single-threaded.
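The arithmetic behind this can be sketched quickly. A minimal illustration, using the hypothetical numbers from the comment above (300ms of one busy core, a 1-second averaging window, a 4-core board):

```python
busy_ms = 300      # time one core is pinned at 100% by a single-threaded op
window_ms = 1000   # utilization averaging window
cores = 4

per_core_util = busy_ms / window_ms   # 0.30 -> the op shows as 30% of one core
total_util = per_core_util / cores    # 0.075 -> only 7.5% of the "400%" total

# Same idea for the 1-minute load average: 20 s of one saturated core,
# averaged over a 60 s window, reads as roughly 0.33.
load_1min = 20 / 60

print(f"{per_core_util:.0%} of one core, "
      f"{total_util:.1%} of total capacity, "
      f"1-min load ~{load_1min:.2f}")
```

So a user-facing operation can be painfully slow while every averaged metric still looks idle.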

The small boards also typically have much slower I/O and less memory. On a PC with 16GB of RAM running as a server, usually the whole OS will end up cached in memory. A Raspberry Pi with less RAM is more likely to have to evict from the page cache, and then read it back from a slow SD card.
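The page-cache effect is easy to see for yourself: time the same read twice. A rough sketch (the file path is arbitrary; a truly cold first read needs the cache dropped beforehand, e.g. with root via `/proc/sys/vm/drop_caches`):

```python
import sys
import time

def timed_read(path):
    """Read a file end to end in 1 MiB chunks and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):
            pass
    return time.perf_counter() - start

# Any reasonably large file that exists on the system will do;
# the interpreter binary is used here just as a convenient example.
path = sys.executable
first = timed_read(path)    # may hit the storage device
second = timed_read(path)   # usually served from the page cache in RAM
print(f"first: {first*1000:.2f} ms, warm: {second*1000:.2f} ms")
```

On a RAM-starved Pi reading from an SD card, that first-read penalty comes back every time the cached pages get evicted.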


What do you expect from a computer that draws less than 10W total?



