While I think there's obvious merit to their skepticism over the race towards AGI, Sutskever's goal doesn't seem practical to me. As Dwarkesh also said, we reach a safe and eventually perfect system by deploying it in public and iterating on it until it converges on an optimum dictated by users in a free market. Hence, I trust that Google, OpenAI, or Anthropic will get there, not SSI.
> we reach a safe and eventually perfect system by deploying it in public and iterating on it until it converges on an optimum dictated by users in a free market
Possibly... but a lot of the foundational AI advancements were actually made in skunkworks-like environments, through pure research rather than iteration in front of the public.
It's not 100% clear to me whether the ultimate path to that end is iteration or something completely new.
We are in a situation where the hardware is probably sufficient for AI to do as well as humans, but at thinking things over, coming to understand the world, and developing original insights about it, LLMs aren't good, and that's probably down to the algorithm.
To get something good at thinking and understanding, you may be better off rebuilding the basic algorithm than tinkering with LLMs to meet customer demands.
I mean, the basic LLM recipe of taking an array of a few billion parameters, feeding in all the text on the internet, using matrix multiplication to adjust the parameters, and then using it to predict more text and expecting the thing to be smart is a bit of a bodge. It's surprising it works as well as it does.
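For what it's worth, here's a minimal sketch of the recipe being described, just to show how little there is to it conceptually: a parameter array, matrix multiplications, and a "predict the next token" objective. This is a hypothetical toy model in PyTorch (the names `TinyLM`, `mix`, `head` are made up for illustration), nothing like how a production LLM is actually built or trained.

```python
import torch
import torch.nn as nn

vocab_size, d_model, context = 256, 64, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # the "array of parameters"
        self.mix = nn.Linear(d_model, d_model)          # matrix multiplication
        self.head = nn.Linear(d_model, vocab_size)      # scores for the next token

    def forward(self, tokens):
        h = torch.relu(self.mix(self.embed(tokens)))
        return self.head(h)                             # logits at every position

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for "all the text on the internet": random byte sequences.
data = torch.randint(0, vocab_size, (128, context + 1))

for step in range(100):
    batch = data[torch.randint(0, len(data), (16,))]
    inputs, targets = batch[:, :-1], batch[:, 1:]       # shift by one: predict the next token
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()                                     # adjust the parameters
    opt.step()
```

Scale that loop up by ten orders of magnitude and, surprisingly, you get something that looks smart; whether that same loop is the path to genuine understanding is exactly the open question.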