> The human vision system is just one peculiar way to gather information about light, yet it allows humans to make very sophisticated judgments about reality, e.g. "there is a tiger down there by the river and it is running at me fast".
> An AGI should be able to gather information about reality and be able to make similar judgements
I just asked GPT-3 this question and here's what it said (on the first and only try):
Q: There's a tiger running at you fast. What are you going to do?
A: I would try to run away from the tiger as fast as I could. If that wasn't possible, I would try to climb a tree or hide behind something large.
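For anyone who wants to reproduce this kind of query, here's a minimal sketch using the (legacy, pre-1.0) openai Python client. The model name, sampling parameters, and prompt framing are my own assumptions, not necessarily what was used above.

```python
# Sketch of querying a GPT-3 completion model (legacy openai client, pre-1.0).
# Model name and parameters are illustrative assumptions.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available at the time
    prompt=("Q: There's a tiger running at you fast. "
            "What are you going to do?\nA:"),
    max_tokens=64,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```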
The question, though, is whether it would be able to interpret a tiger running at it from some input, say an image.
The Gato paper from DeepMind that another poster mentioned would imply that it would, given that image captioning can be done with the exact same neural network as the GPT-3-like language model.
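Gato itself isn't publicly available, but as a rough illustration of the image-to-caption-to-language-model idea, here's a sketch using an off-the-shelf captioning model from Hugging Face. The model name and image path are placeholders, and this is a two-model pipeline rather than Gato's single network.

```python
# Rough sketch of an image -> caption -> language-model pipeline.
# This is NOT Gato (which isn't public); it only illustrates feeding a
# visual scene into a text model. Model name and image path are examples.
from transformers import pipeline

# Off-the-shelf image captioning model
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

caption = captioner("tiger_running.jpg")[0]["generated_text"]
# e.g. "a tiger running through tall grass"

prompt = f"Q: {caption.capitalize()}. It is coming toward you fast. What do you do?\nA:"
# ...then feed `prompt` to the language model as in the previous snippet.
```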
Also of note, this advice demonstrates the confident inaccuracy which is the hallmark of ML models: like all hunting cats, tigers love to chase and they average about the same speed as Usain Bolt. They can also climb trees pretty well.