The fruits of massive language modeling are coming to RL. I envision such foundation models becoming cheap and standardized, like an AI operating system. If we could have a cheap, compact, multi-modal GPT-3 chip, we could make all sorts of agents run on top of it. These RL agents would be like the skill libraries in The Matrix: you could load any skill you want onto the agent.
I think they wanted to express that learning to predict the correct output ("error minimization") puts a ceiling on achievable performance, whereas ranking (not just RL, really) makes it possible to improve beyond the current best-known answer.
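For what it's worth, the way this "improve beyond the best-known answer" idea is usually operationalized is a pairwise ranking loss on a reward model (a Bradley-Terry style objective): the model only needs to know which of two candidates is preferred, not what the single correct output is. A minimal PyTorch sketch, with made-up scalar scores standing in for a reward model's outputs:

```python
import torch
import torch.nn.functional as F

# Hypothetical reward scores for pairs of candidate answers to the same prompts.
# With supervised learning you can only imitate the reference answer; with a
# ranking signal you only need to know which candidate is better, so the
# preferred candidate can be better than anything in the training set.
reward_chosen = torch.tensor([1.7, 0.3], requires_grad=True)    # scores of preferred answers
reward_rejected = torch.tensor([0.9, 0.8], requires_grad=True)  # scores of dispreferred answers

# Pairwise (Bradley-Terry style) ranking loss, as used for reward models:
# it pushes the preferred answer's score above the rejected one's,
# regardless of whether either answer matches a reference label.
ranking_loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
ranking_loss.backward()
print(ranking_loss.item(), reward_chosen.grad)
```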
> It should have (and has shown to have) better scaling laws
is a statement based on two anecdotes; I don't see a compelling reason why it should hold in general.
Active learning approaches are not mentioned, even though they allow human feedback to be incorporated during fine-tuning, and this can be done with a purely supervised approach.
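To make that concrete, here is a minimal pool-based active-learning sketch (uncertainty sampling with scikit-learn; the toy data, the oracle, and the query size of 20 are all made up). The model is trained with purely supervised updates; the "human" is only asked to label the examples the model is least certain about:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy pool-based setup: a small labeled seed set plus a large unlabeled pool.
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(int)  # stands in for human labels (the "oracle")

# Seed with a few examples from each class, everything else goes to the pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for round_ in range(5):
    model.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool examples whose predicted
    # probability is closest to 0.5 (the model is least sure about them).
    proba = model.predict_proba(X[pool])[:, 1]
    uncertainty = -np.abs(proba - 0.5)
    query = [pool[i] for i in np.argsort(uncertainty)[-20:]]
    # In a real setup these would go to a human annotator; here the oracle is y.
    labeled.extend(query)
    pool = [i for i in pool if i not in query]
    print(f"round {round_}: {len(labeled)} labeled, acc={model.score(X, y):.3f}")
```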
IMO the last point is the only compelling one: having, for example, agents that can browse the web during learning could open up a lot of possibilities. It would have been interesting to develop this point further: what are the current difficulties in training such agents?