GPT-5 sounds like it knows a lot but the level of trust in ChatGPT is quickly eroding. Examples:
* Meeting notes that were read accurately from a handwritten note (impressive!), but whose summary hallucinated information that was completely made up.
* Running complex PyTorch benchmarks while getting the simple parts completely wrong. We're talking variants of y = f(wx + b), which is what was being compared. All the graphs and visualizations look very convincing, but the details of what's actually tested are completely bonkers.
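For context, a comparison of "variants of y = f(wx + b)" presumably looks something like the sketch below: one linear layer with different activations f swapped in. The layer sizes and the particular activations here are assumptions, not details from the thread.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the kind of comparison described above:
# a single linear layer (computes wx + b) with different activations f,
# so each variant is y = f(wx + b).
x = torch.randn(64, 128)        # a batch of 64 inputs of dimension 128
layer = nn.Linear(128, 32)      # wx + b

variants = {
    "identity": lambda z: z,
    "relu": torch.relu,
    "tanh": torch.tanh,
}

for name, f in variants.items():
    y = f(layer(x))             # y = f(wx + b)
    print(name, tuple(y.shape))
```

The point of the complaint is that even a setup this simple was apparently misreported in the benchmark details, despite convincing-looking plots.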
Is there a petition to bring o3 back? Please? At least it was obvious when it failed.
o3 is available on certain paid plans. I see it in my legacy model dropdown and had to use it last night, because 5-Pro chews on an Excel file for 20 minutes and then never finishes, but also never times out.