If I am not mistaken, we are already past that. The pixel, or token, is probability-predicted in real time. The complete, shaded pixel, if you will, gets computed 'at once' instead of through layers of simulation. That's the LLM's core mechanism.
If the mechanism can predict what the next pixel will look like, lighting equation included, then there is no need for a light simulation anymore.
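To make that concrete, here is a toy sketch of the generation loop. Everything in it is a placeholder (the `model` function stands in for a real forward pass); the point is just that nothing in the loop evaluates a lighting equation explicitly, so any 'lighting knowledge' would have to live in the learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 256  # e.g. 256 possible pixel intensities

def model(context):
    # Placeholder for a learned predictor: maps the context so far to a
    # probability distribution over the next value. (A real model would
    # actually condition on `context`; here we fake the forward pass.)
    logits = rng.normal(size=VOCAB)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def generate(n_steps):
    context = []
    for _ in range(n_steps):
        probs = model(context)                       # predict distribution over next pixel/token
        context.append(rng.choice(VOCAB, p=probs))   # sample it
    return context

print(generate(8))  # no lighting math anywhere in this loop
```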
I'd also like to know how Genie works. Maybe some parts are indeed already simulated, in a hybrid approach.
The model has multiple layers that amount to one giant non-linear equation predicting the final shaded pixel; I don't see how that's inherently different from a shader outputting a pixel 'at once'.

Correct me if I'm wrong, but I don't see how you can simulate a PBR pixel without doing ANY PBR computation whatsoever.

For example, one could imagine a very simple program computing sin(x), or a giant multi-layered model that does the same. Wouldn't the model just be a latent, more-or-less compressed version of sin(x)?
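To sketch exactly that sin(x) example (illustrative only, not a claim about how any production model is built): a tiny two-layer tanh network fit to sin on [-π, π]. After fitting, the nested non-linear expression stored in the weights approximates sin 'at once', a latent, compressed stand-in for the explicit formula:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(x)

H, lr = 32, 0.05                          # hidden width, learning rate
w1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
w2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

for _ in range(5000):
    h = np.tanh(x @ w1 + b1)              # hidden layer
    pred = h @ w2 + b2                    # output layer
    err = pred - y                        # gradient of 0.5 * MSE
    g_w2 = h.T @ err / len(x); g_b2 = err.mean(0)
    g_h = (err @ w2.T) * (1.0 - h**2)     # backprop through tanh
    g_w1 = x.T @ g_h / len(x); g_b1 = g_h.mean(0)
    w2 -= lr * g_w2; b2 -= lr * g_b2
    w1 -= lr * g_w1; b1 -= lr * g_b1

# The "model" is now just weights; no sin() call is needed to evaluate it.
approx = np.tanh(x @ w1 + b1) @ w2 + b2
print("max |sin(x) - net(x)|:", np.abs(approx - y).max())
```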