
That distinction doesn't change my point. I'm not surprised that a 40-year-old project generates better code than this brand-new one.




Not only is it new, there has also been zero performance optimization done. Well, none prompted for, at least. Once you give the agents a profiler and start a loop focused on performance, you'll see it start improving.

We are talking about a compiler here, and the "performance" referred to above is the performance of the generated code.

When you are optimizing a program, you have a specific part of the code to improve. That part can be found with a profiler.

When you are optimizing compiler-generated code, you have many similar parts of code across many programs, and only a not-so-specific part of the compiler that can be improved.


Yes, the performance of the generated code. You have a benchmark of a handful of common programs going through common workflows, and you measure the performance of the generated code. As tweaks are made, you see how the different performance experiments affect overall performance. Some strategies are always a win, but things like how you lay out different files and functions in memory have different trade-offs and are hard to know up front without doing actual real-world testing.

  > As tweaks are made...
  > ...how you layout different files and functions in memory have different trade offs and are hard to know up front without doing actual real world testing.
These are definitely not algorithmic optimizations like privatization [1].

https://en.wikipedia.org/wiki/Privatization_(computer_progra...

To apply privatization correctly, one has to have correct dependency analysis. That analysis uses the results of many other analyses: for example, value-range analysis, something like Fourier–Motzkin elimination, etc.
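For readers who haven't met privatization: a minimal toy sketch in C (not from the project under discussion) of what the transformation does. The shared temporary in the first loop carries dependences between iterations; giving each iteration its own copy removes them, and dependency analysis is what proves the rewrite is safe:

```c
#include <stddef.h>

#define N 8

/* Before privatization: the temporary `t` is shared across iterations,
   so every iteration writes and then reads the same location. This
   creates output and anti dependences that forbid running the
   iterations in parallel as written. */
static void scale_shared(const int *a, int *out) {
    int t;                        /* one shared temporary */
    for (size_t i = 0; i < N; i++) {
        t = a[i] * 2;             /* each iteration overwrites t */
        out[i] = t + 1;
    }
}

/* After privatization: `t` is declared inside the loop body, so each
   iteration owns a private copy. The dependences on `t` disappear and
   the iterations become independent (and thus parallelizable). */
static void scale_private(const int *a, int *out) {
    for (size_t i = 0; i < N; i++) {
        int t = a[i] * 2;         /* private per-iteration temporary */
        out[i] = t + 1;
    }
}
```

The compiler may only perform this rewrite after proving that no iteration reads a value of `t` written by another iteration, which is exactly where the dependency analysis above comes in.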

So when this agent-optimized compiler produces a program where privatization is not applied, what tweaks should the agents apply?



