
> The human vision system is just one peculiar way to gather information about light, yet it allows humans to make very sophisticated judgments about reality, e.g. "there is a tiger down there by the river and it is running at me fast".

> An AGI should be able to gather information about reality and make similar judgments.

I just asked GPT-3 this question and here's what it said (on the first and only try):

  Q: There's a tiger running at you fast. What are you going to do?

  A: I would try to run away from the tiger as fast as I could. If that wasn't possible, I would try to climb a tree or hide behind something large.
The question, though, is whether it would be able to recognize a tiger running at it from some input, say, an image.
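
For anyone who wants to reproduce this, a minimal sketch using the pre-1.0 OpenAI Python client (the exact model name, sampling parameters, and placeholder key are my assumptions, not necessarily what the poster used):

  import openai

  openai.api_key = "sk-..."  # your API key

  response = openai.Completion.create(
      model="text-davinci-002",  # which GPT-3 variant was used is an assumption
      prompt="Q: There's a tiger running at you fast. What are you going to do?\nA:",
      max_tokens=64,
      temperature=0.7,
  )
  print(response["choices"][0]["text"].strip())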

The Gato paper from DeepMind that another poster mentioned suggests it would, given that image captioning can be done with the exact same neural network as the GPT-3-like language model.
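
To make that concrete, here is a toy sketch of Gato's central idea: serialize image patches and text tokens into one sequence and run a single shared transformer over both. This is not DeepMind's code; every name and dimension below is an assumption, and the causal masking of the real decoder-only model is omitted for brevity.

  import torch
  import torch.nn as nn

  class OneModelManyModalities(nn.Module):
      """Toy Gato-style model: one backbone shared by images and text."""
      def __init__(self, vocab_size=32000, d_model=512):
          super().__init__()
          self.text_embed = nn.Embedding(vocab_size, d_model)
          # 16x16 RGB patches are projected into the same embedding space
          self.patch_embed = nn.Linear(16 * 16 * 3, d_model)
          layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
          self.backbone = nn.TransformerEncoder(layer, num_layers=6)
          self.lm_head = nn.Linear(d_model, vocab_size)

      def forward(self, patches, text_tokens):
          # patches: (B, n_patches, 768); text_tokens: (B, T) token ids
          seq = torch.cat([self.patch_embed(patches),
                           self.text_embed(text_tokens)], dim=1)
          h = self.backbone(seq)   # the same weights process both modalities
          return self.lm_head(h)   # next-token logits, e.g. for a caption

  model = OneModelManyModalities()
  logits = model(torch.randn(1, 196, 768), torch.randint(0, 32000, (1, 10)))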



Also of note: this advice demonstrates the confident inaccuracy that is the hallmark of ML models. Like all hunting cats, tigers love to chase, they average about the same speed as Usain Bolt, and they can climb trees pretty well.


> this advice demonstrates the confident inaccuracy which is the hallmark of ML models

... and humans.

Just browse Twitter, Facebook, ... or better yet, ask a random person on the street.

Further question to GPT-3 (on the first and only try as well):

  (...)

  Q: But can't tigers run faster than humans and climb trees?

  A: Yes, but it would still be worth a try.
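
Worth noting on the mechanics: a completion model has no memory between calls, so a follow-up like this presumably works by prepending the earlier exchange to the new prompt. A sketch, with the transcript taken from the comments above:

  history = (
      "Q: There's a tiger running at you fast. What are you going to do?\n"
      "A: I would try to run away from the tiger as fast as I could. If that "
      "wasn't possible, I would try to climb a tree or hide behind something large.\n"
  )
  follow_up = "Q: But can't tigers run faster than humans and climb trees?\nA:"
  prompt = history + follow_up  # the model completes the text after "A:"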


This sounds like a Google search result (in that it could very likely be found almost verbatim in its training set).
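
One crude way to test that suspicion, assuming you had a slice of the training corpus to search against (which, for GPT-3, you don't; the corpus below is a hypothetical stand-in):

  def ngram_overlap(answer: str, corpus: str, n: int = 6) -> bool:
      """True if any n-word window of `answer` occurs verbatim in `corpus`."""
      words = answer.lower().split()
      windows = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
      return any(w in corpus.lower() for w in windows)

  reply = "Yes, but it would still be worth a try."
  corpus = "..."  # stand-in for the (non-public) training data
  print(ngram_overlap(reply, corpus))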



