That's what I glossed over as "complicated analysis". In my mind, if a compiler can understand register and stack use (required for static transformations), it can (theoretically, and with some effort) understand heap use. Am I wrong?
Yes, you are wrong. This isn’t basic constant hoisting.
The compiler doesn't reasonably have that information to work with: what read is filling in is determined purely at runtime, and the compiler has no reasoning mechanism that comes anywhere close to answering runtime data-flow questions.
There's also a lot of complexity around which transformations are even possible: the legality information that exists at the language level is often erased before code reaches the stack/register stage, and conversely, the language layer knows nothing about registers and very little about the stack.
This is the same reason that the compiler, when compiling something like:
    for _ in 1..10 {
        let x: String = create_new_string();
        eprintln!("{x}");
    }
fails to hoist x out of the loop, even if the returned string is just String::from("ABC"), unless maybe LTO is on (and even then, maybe not). Basically, the compiler's "magic" is limited to static transformations that follow the as-if rule: the compiler must be able to prove the transformed program is observably identical, and its ability to reason about program structure is often very limited.
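For illustration, this is the hand-hoisted version the compiler would have to prove equivalent before applying it (create_new_string is the same hypothetical helper as above; the rewrite is only legal if the call is side-effect-free and returns the same value on every iteration, which the optimizer generally cannot establish across function or crate boundaries):

    // Hand-hoisted sketch: valid only under the purity assumptions above.
    let x: String = create_new_string();
    for _ in 1..10 {
        eprintln!("{x}");
    }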
Said another way, if the compiler could do the optimizations you're hypothesizing, it would be the equivalent of applying a mid-level performance engineer to every codebase it encounters.
The problem is that that kind of analysis would require whole-program optimization, rather than optimizing individual functions, because the signature of the call to read doesn't provide enough information for the compiler to know whether the data is ever read. And indeed, it can't know until link time.
I think it's theoretically possible, but at the cost of much longer compile times, and greater complexity in the compiler.
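As a minimal sketch of why this needs a whole-program view (fill and its call site are hypothetical, not from the discussion above): compiled in isolation, fill must assume the bytes matter, and only the caller, possibly in another compilation unit, reveals whether they are ever used.

    use std::io::{self, Read};

    // Seen in isolation, nothing here says whether any caller inspects buf,
    // so the compiler must assume the filled-in data is needed.
    fn fill(src: &mut dyn Read, buf: &mut [u8]) -> io::Result<usize> {
        src.read(buf)
    }

    fn main() -> io::Result<()> {
        let mut buf = [0u8; 16];
        let n = fill(&mut io::stdin(), &mut buf)?;
        // Only at this call site does it become visible whether the bytes
        // are used; without LTO the optimizer never sees both sides at once.
        println!("read {n} bytes");
        Ok(())
    }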