
If you want to do well on this benchmark, rather than calling sleep, each sleeping process should call erlang:start_timer(10000, self(), []) to schedule a timer and then erlang:hibernate/3 to wait for the message without keeping a normal heap. The efficiency guide [1] says a process starts at 326 words of memory, and a word on a 64-bit system is 8 bytes, so rough math says ~2.5 KB per process and ~2.5 GB for 1M processes; the observed 4 GB isn't that far off. Comments on the blog page show someone else got the example down to 2.7 GB by eliminating extra steps in the program, and reducing the initial heap size got it to 1.1 GB. Hibernation should make it smaller still, but I'm not going to check.
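A minimal sketch of that pattern; the module and function names here are my own, but erlang:start_timer/3 and erlang:hibernate/3 are the real BIFs:

```erlang
-module(sleeper).
-export([start/0, wake/0]).

%% Schedule a timer message for 10 seconds from now, then hibernate.
%% Hibernation discards the call stack and shrinks the heap; when any
%% message arrives, the process resumes by calling sleeper:wake/0.
start() ->
    erlang:start_timer(10000, self(), []),
    erlang:hibernate(?MODULE, wake, []).

%% The timer message is still in the mailbox after resuming, so
%% receive it here.
wake() ->
    receive
        {timeout, _TimerRef, []} -> ok
    end.
```

Spawn a million of these with something like [spawn(fun sleeper:start/0) || _ <- lists:seq(1, 1000000)] and compare the resident set against the plain-sleep version.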

Note also that BEAM uses several allocation arenas and is slow to release them back to the OS. It's important to measure memory from the OS perspective, but in a real application you'd also measure memory from inside BEAM. There are knobs you can tune if you need to reduce apparent OS memory use at the expense of additional CPU use during allocation. There are tradeoffs everywhere.
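For the inside-BEAM view, erlang:memory/0,1 reports the VM's own accounting, which can differ a lot from OS-reported RSS when arenas haven't been released; e.g. in an Erlang shell:

```erlang
%% BEAM's own memory accounting, in bytes.
1> erlang:memory(total).      %% everything the VM thinks it has allocated
2> erlang:memory(processes).  %% memory used by all process heaps/stacks
3> erlang:memory(binary).     %% refc binaries on the shared binary heap
```

Comparing erlang:memory(total) against the OS's RSS for the beam.smp process tells you how much is arena overhead versus live data.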

Also, yes, 4 GB is a lot of memory, but once you start doing real work, you'll probably use even more. Luckily, memory is not that expensive and capacities keep growing. We ran some servers with 768 GB, but it looks like the current Epycs support 6 TB per CPU socket; if everything scales, you could run a billion sleeping BEAM processes on that. :p I recognize this sounds a lot like an argument to accept bloat, but it's different because I'm saying it ;) Also, I think the benefits of BEAM outweigh its memory costs in general, although I've certainly had some fights: binary:copy/1 before storing binaries into ets/mnesia can be really helpful!
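The binary:copy/1 trick, sketched (table and function names are made up, binary:copy/1 and ets:insert/2 are the real APIs): storing a sub-binary keeps a reference to the whole original binary alive, so copying first lets the large original be garbage collected.

```erlang
%% Without the copy, Chunk may be a sub-binary pointing into a much
%% larger refc binary, pinning all of it in memory for as long as the
%% ets entry lives.
store_chunk(Tab, Key, Chunk) when is_binary(Chunk) ->
    ets:insert(Tab, {Key, binary:copy(Chunk)}).
```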

[1] https://www.erlang.org/doc/efficiency_guide/processes.html



