
Interpreters will always have an information advantage: they get to optimize based on what the system is actually doing, rather than on clever profiles gathered ahead of time.

The big question is whether that information is ever worth it. Does the cost of examining the running system completely overshadow any possible benefit?

So far, no, it's not worth it. It'll be really interesting to see what happens with massive parallelism though.



Well, compilers can always look at what happens during the actual operation of a program and then feed that information back into a better recompilation. That's a lot of effort, but it can make a big difference. For a while the Windows version of Firefox was noticeably faster than the Linux version because the people packaging it for Windows did this but the Linux packagers didn't.

See http://en.wikipedia.org/wiki/Profile-guided_optimization


Maybe there will be two classes of applications. The elite, like browsers, that are constantly fed data about how people use them and are regularly updated so optimization is compiled right in; and the not-so-elite, that rely on per-run startup optimization for good performance.

IIRC gcc already links in a runtime system for recovering runtime type information for dynamic_cast. It's not hard to imagine C++x20 (or whatever) including some kind of runtime optimizer.


Check out the Synthesis Kernel: http://en.wikipedia.org/wiki/Self-modifying_code#Massalin.27...

It uses run-time information to modify its own code to perform optimally. It was incredibly fast, but the difficulty of using SMC far outweighs the benefits in most applications.



