I agree with Professor Mollick that capabilities in specific task categories are becoming superhuman -- a precursor to AGI.
Until those capabilities extend to model self-improvement -- including the ability to adapt its own infrastructure, code, storage, etc. -- I think AGI/ASI are yet to be realized. My reference points are SkyNet, Travelers' "The Director", and Person of Interest's "The Machine" and "Samaritan." The ability to pursue a potentially inscrutable goal, combined with the self-agency to direct itself toward it, is true "AGI" in my book. We have many components that we can reason are necessary, but it is unclear to me that we get there in the next few months.
I don't think we should take it as a given that these are truly precursors for AGI.
We may be going about it the wrong way entirely and need to backtrack and find a wholly new architecture, in which case current capabilities would merely predate AGI rather than be precursors to it.
I call them precursors because we would expect an ASI to be able to do these things. Perhaps "necessary conditions" would be a more apt term here.
Not saying I love the idea of an extant ASI, but we do need to define it clearly. I feel these self-directed capabilities highlight aspects of ASI that a basic API endpoint does not.