Easy: GPT-3 doesn't provide a coherent world view. Just ask it about the things it tells you, and it will quickly start directly contradicting itself, saying "A is true" and then "A is not true". Humans don't do that so blatantly, unless they have Alzheimer's or something similar, which makes it hard to tell whether such a person is still intelligent or not.
GPT-3 is like looking up random parts of the internet with your queries; changing them a little will lead you to a site that tells a completely different story. That is the most reasonable explanation for why it behaves as above.