So why would you trust GPT to provide an authoritative answer more than your own instincts?
I wouldn't right now, but I believe the accuracy of an LLM is a function of its size and how far you can drive down the loss. I think OpenAI can improve those things faster than Google based on what has been shown so far. I could be completely wrong though.
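For what it's worth, the scaling-law results are what I'm leaning on here: loss falls as a smooth power law in parameter count and dataset size. The form below is the one I remember from Kaplan et al. (2020); the exponents are quoted from memory, so treat them as approximate.

```latex
% Power-law fits reported in "Scaling Laws for Neural Language Models"
% (Kaplan et al., 2020). Constants quoted from memory; treat as approximate.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \quad \alpha_N \approx 0.076
\qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad \alpha_D \approx 0.095
% N = non-embedding parameters, D = dataset size in tokens,
% N_c and D_c are fitted constants.
```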
I also suspect there's no benefit to corrupting a GPT model except for the lulz. If a search tool isn't showing links, there's no financial reason to put effort into setting up link networks and SEO content farms. GPT-based search has an innate anti-spam advantage: spam that never gets a visitor earns nothing, so producing it becomes futile.
It also renders a lot of non-spam content futile too, which is problematic. It's not a magic bullet.
For ChatGPT and similar LLMs, the problem is that the loss function does not reward accuracy, truthfulness, or correctness; it rewards answers that seem correct to a superficial human check. That incentivizes making something plausible up instead of saying "I do not know", over-simplifying things, and asserting claims with high confidence regardless of how much evidence backs them. Any successful deceit brings the loss down. The overconfidence and lying are not caused by the model being too small; they come from misalignment, and if that isn't corrected it will only get worse as size goes up.
A recent Computerphile video with Robert Miles discussed this.
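To make that incentive concrete, here's a toy sketch (purely illustrative, not how any real RLHF reward model is built): a scoring function that only looks at surface features a hurried rater might respond to, under which a confident fabrication beats an honest "I don't know".

```python
# Toy illustration only: a "reward" based on surface features of an answer
# (length, confident tone, absence of hedging) with no access to ground truth.
# Under such a proxy, a confident fabrication can outscore an honest answer.

HEDGES = {"maybe", "possibly", "unsure", "don't", "unknown"}
CONFIDENT = {"definitely", "clearly", "certainly", "precisely"}

def superficial_reward(answer: str) -> float:
    """Score an answer the way a hurried rater might. Truth never enters."""
    words = answer.lower().split()
    score = 0.0
    score += min(len(words), 40) * 0.1            # longer looks more thorough
    score += sum(w in CONFIDENT for w in words)   # confident tone reads well
    score -= sum(w in HEDGES for w in words) * 2  # hedging reads as weakness
    return score

fabricated = ("The Eiffel Tower was definitely completed in 1887 and is "
              "precisely 330.5 metres tall, clearly making it the tallest "
              "structure in Europe at the time.")
honest = "I don't know the exact completion date; I'm unsure of the height."

print(superficial_reward(fabricated))  # higher score
print(superficial_reward(honest))      # lower score
```

Swap in a real preference model trained on quick human judgments and the same failure mode applies: the gradient pushes toward whatever scores well, not toward what is true, which is why scale alone doesn't fix it.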