I would say... yes. But with the strong caveat that, when used in the context of AGI, the individual or system should be able to demonstrate that intelligence, and the results should be comparable to those of a neurotypical adult human. Both a dog and a toddler can show signs of intelligence when compared to others of their kind, but not when compared to an adult human, which is the criterion for AGI.
This is why I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI. It could certainly be considered a step in that direction, but not strictly AGI.
But again, I don't think that a strict definition of AGI is helpful or necessary. The impact of a system with such capabilities would be impossible to deny, so a clear definition doesn't really matter.
> I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI
What makes you say that it underperforms? I ask because the evidence strongly suggests the reverse: AI models currently outperform humans at most tasks.
That isn't obvious to me at all. If you don't like the dog analogy, let's try another: does a human toddler qualify as having general intelligence?