Well, compilers can always record what happens during actual runs of a program and then use that information to recompile it better (profile-guided optimization). That's a lot of effort, but it can make a big difference. For a while the Windows build of Firefox was noticeably faster than the Linux build, because the people packaging it for Windows did this and the Linux packagers didn't.
Maybe there will be two classes of applications: the elite, like browsers, constantly fed data about how people use them and regularly updated so the optimization is compiled right in; and the not-so-elite, which rely on per-run startup optimization for good performance.
IIRC GCC already links in a runtime system for recovering run-time type information to implement dynamic_cast. It's not hard to imagine C++x20 (or whatever) including some kind of runtime optimizer.
It used run-time information to modify its own code to perform optimally. It was incredibly fast, but the difficulty of using self-modifying code (SMC) far outweighs the benefits in most applications.
The big question is, is that information ever worth it? Does the cost of examining the running system completely overshadow any possible benefit?
So far, no: it's not worth it. It'll be really interesting to see what happens with massive parallelism, though.