
If tracking changes in the specs, you'd be right, but I'm referring more to the mindset. And I don't mean to imply this is anything sudden --- if pressed, I'd roughly gesture in the direction of 1998-2000, possibly related to a slow "changing of the guard". In this incredibly long but fantastic thread, Anton Ertl points to GCC 2.95 as the turning point: http://compgroups.net/comp.arch/if-it-were-easy/2993157 (search for Ertl). His rhetoric might be excessive, but he makes a good point that, in practical terms, his research on "threaded interpreters" went from being possible to do in C to impossible.

I'm also not referring to syntax --- I think many of the syntactic improvements are fantastic and have no downside. And the reliability of modern compilers is much better, and by and large the optimizations produce smaller and faster code. But what I think we're losing is the transparency of being able to reason about code execution by looking at the source. It's still the norm for academic papers to offer source code for two approaches to a problem and use execution time to prove that one is faster than the other because "it has fewer operations" or "fewer memory accesses", without ever looking at the generated assembly and noticing that one approach has been vectorized and unrolled, while in the other the compiler has substituted a completely different algorithm, dropping all the conditionals it realized were never being exercised.
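A contrived sketch of the kind of thing I mean (my own toy example, not from any paper): counting source-level operations says sum_checked() does more work per element, but at -O3 GCC can prove the guard never fires for an unsigned char, delete it, and vectorize both loops into near-identical code. Only the generated assembly (gcc -O3 -S) tells you that.

    long sum_plain(const unsigned char *a, long n) {
        long s = 0;
        for (long i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    long sum_checked(const unsigned char *a, long n) {
        long s = 0;
        for (long i = 0; i < n; i++) {
            if (a[i] > 255)   /* provably never true: a[i] is 0..255, */
                return -1;    /* so the compiler may delete this branch */
            s += a[i];
        }
        return s;
    }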

And I don't think it's simply a matter of programmers exploiting undefined behavior without properly considering the consequences. The inclusion of the standard library in the spec for the language creates "spooky action at a distance" that never existed before, where passing a variable to print_debug(a) allows the compiler to remove a later "if (a == NULL)" check, if print_debug() is observed to pass the variable to printf() and whole-program optimization is being used. Whereas if print_debug() lives in a compiled shared library, this won't happen, and the null check will work as intended. I see the reasons compiler writers want this flexibility, but it sure makes development and debugging much harder than it used to be.
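Concretely, the pattern looks something like this (a minimal sketch with hypothetical names; assume the call gets inlined or LTO is on):

    #include <stdio.h>

    static void print_debug(const char *a) {
        printf("%s\n", a);   /* passing a null pointer to %s is undefined,
                                so the compiler may assume a != NULL from
                                here on */
    }

    int handle(const char *a) {
        print_debug(a);
        if (a == NULL)       /* once print_debug() is inlined, this branch
                                is provably dead and may be deleted */
            return -1;
        return 0;
    }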



Summary: about a decade ago, Ertl ran into some regressions in GCC uncovered by benchmarking his Forth implementation, filed bugs for them, and they were fixed.

Some of them were in obscure GCC extensions, most notably computed goto. The compiler started generating code in which every handler branches to a common instruction that does the actual indirect jump, instead of performing the indirect jump at each site.
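For anyone unfamiliar with the feature, here's a minimal sketch of the dispatch pattern in question (my example, not Ertl's code), using GCC's labels-as-values extension. The point of ending each handler with its own "goto *dispatch[...]" is that every opcode gets its own indirect branch, and hence its own branch-predictor history; the regression collapsed these into one shared jump.

    long run(const int *ip) {
        static void *dispatch[] = { &&op_inc, &&op_dec, &&op_halt };
        long acc = 0;

        goto *dispatch[*ip++];
    op_inc:
        acc++;
        goto *dispatch[*ip++];   /* per-handler indirect jump */
    op_dec:
        acc--;
        goto *dispatch[*ip++];
    op_halt:
        return acc;
    }

Called as run((int[]){ 0, 0, 1, 2 }) it executes inc, inc, dec, halt and returns 1.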

It's easy to understand how a performance regression affecting such a feature can slip through, if the behavior is otherwise correct.

It doesn't mean that the GCC sky is falling, as all the trolling in that newsgroup would have you believe.


I come back to this a day later, after anyone is likely to read it, because this response continues to bother me. Your response isn't false, but I feel it really mischaracterizes the issue. These are not trolls in a newsgroup simply trying to get a rise out of people. They may be wrong, the current optimization approach may be the best choice available, and GCC may have made all the correct choices to fulfill its mission as it defines it, but summarizing it as ~"once there were bugs and then they were fixed" misses the point.

These are multiple well-respected CS researchers and extremely experienced C programmers saying they can no longer use C (not just GCC) in the manner they once did, because they believe it's no longer possible to make the language work the way they want it to. You appear to be an expert C programmer as well, and you don't agree. Disagreement is fine, but I think you would be wrong to completely discount what they are saying.

I think it's a matter of what level of optimization you are aiming for. I mostly work on integer compression, and find that I am no longer able to use C to generate code that maximizes the performance of modern processors. Ertl's work on threaded dispatch has led to large improvements in the implementation of modern interpreters (http://bugs.python.org/issue4753). As a tool, C is stronger if it can take advantage of techniques such as this, and there don't appear to be any other languages above assembly where tip-top performance is possible. I may be just a cranky anonymous troll, but Ertl should be considered a canary telling us that we are in danger of losing the ability to make similar optimizations.
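For contrast, this is the portable switch dispatch that threaded code gets benchmarked against (my sketch again, same toy opcodes as above): the whole interpreter funnels through a single hot indirect branch, which is exactly the dispatch cost the CPython patch in that issue avoids.

    long run_switch(const int *ip) {
        long acc = 0;
        for (;;) {
            switch (*ip++) {        /* one shared, hard-to-predict branch */
            case 0: acc++; break;   /* inc  */
            case 1: acc--; break;   /* dec  */
            case 2: return acc;     /* halt */
            }
        }
    }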


I agree with Ertl, and it's why I've practically stopped using C: the sufficiently smartass compiler. Been playing with Rust a little lately.



