
But knowing how the JIT deoptimizes is a good thing for anyone writing perf-sensitive code for that JIT. Developing a sense of where branchy logic might deopt is valuable even when your own code isn’t perf-sensitive, because it might be called from a perf-sensitive call stack, and it gives you a good heuristic for when optimization is premature. Optimizing for the JIT isn’t and shouldn’t be a guiding principle for every dev, but it’s harmless, and often useful, to be aware of it.
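
A rough TypeScript-flavored sketch of the kind of thing I mean (names are made up, and the exact deopt triggers vary by engine and version):

    // Type-unstable code of the sort that tends to trigger deoptimization
    // in JIT engines such as V8. Purely illustrative.
    function sumLengths(items: Array<{ length: number }>): number {
      let total = 0;
      for (const item of items) {
        total += item.length; // monomorphic while every item has one shape
      }
      return total;
    }

    // Warm up with a single object shape; the JIT can specialize the
    // property load for it.
    sumLengths([{ length: 1 }, { length: 2 }]);

    // A different shape (the extra property changes the hidden class)
    // makes the access polymorphic, and past a threshold the optimized
    // code can bail out to generic lookups.
    sumLengths([{ length: 3, tag: "x" } as { length: number }]);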


> Optimizing for the JIT isn’t and shouldn’t be a guiding principle for every dev, but it’s harmless, and often useful, to be aware of it.

No, it can be quite harmful: you can end up overoptimizing for one engine and ruining performance on another engine, or even on a different version of the same engine. When making non-trivial optimizations designed around the quirks of a particular optimizing compiler, you should be very careful about the tradeoffs, and about what the penalty will be if the optimization breaks in the future.
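
A hypothetical example of the quirk-driven tweak I mean (illustrative only; whether it helps, hurts, or does nothing depends on the engine and version):

    interface Cache {
      hot?: number;
    }
    const cache: Cache = { hot: 42 };

    // Straightforward version: actually removes the key, but on V8 a
    // delete can push the object into slow "dictionary mode":
    //   delete cache.hot;

    // Quirk-driven version: keeps the hidden class stable on V8, may do
    // nothing on other engines, and quietly changes semantics, since
    // ("hot" in cache) is still true afterwards:
    cache.hot = undefined;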


Just like what happens in C when development is driven by the assembly output of a specific compiler version.


Yep, exactly. Very few codebases can really be written directly against the optimizer, and those are usually developed hand in hand with a specific compiler. For everything else, you can usually trust that a handful of basic optimizations will happen, and stay cautious about anything beyond what the various compilers are known to do. As you gain experience you can slowly expand the set of things you expect to be "easy" for the compiler; when those passes fail to run, it's generally considered a bug.
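
As a sketch of what trusting the basics looks like (illustrative, not tied to any particular compiler):

    const BYTES_PER_PIXEL = 4;
    const WIDTH = 1920;
    const HEIGHT = 1080;

    function frameBufferSize(): number {
      // Constant folding is about as basic as optimizations get: any
      // serious compiler or JIT reduces this to the literal 8294400, so
      // hand-computing it would buy nothing and hide the derivation.
      return WIDTH * HEIGHT * BYTES_PER_PIXEL;
    }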
