Hacker News

I don't think that BBC article is technically detailed enough to make that case. The actual study may be, so I'm not saying you're wrong, but "generative AI" is far too broad an umbrella term, and it encompasses two very different views. The common thesis here is that AI is a wildly powerful tool for doing fundamental science, and I think most technologists with any knowledge of neural networks would agree that's true. But the problem is that the Sam Altmans of the world are stretching that thesis into the promise that GPT-5 is on the path to AGI, and that they just need to keep scaling and pouring more billions of dollars into these massive models to get there.

When I see genuinely interesting applications of AI in fundamental science, the studies usually involve custom programs built on smaller or more purpose-built foundation models, hand-tuned by knowledgeable researchers with deterministic, testable validation feedback loops. So what you're saying can be true while what Altman is promising is absolutely false. But it's hard to say without actually reading that MIT study.


I agree with you. I’m eager to see the details once MIT releases it.

Generative AI is a lot of things. LLMs in particular (a subset of generative AI) are somewhat useful, but nowhere near as useful as Sam claims. And I'd guess LLMs specifically, if we focus on ChatGPT, will not be solving cancer lol.

So we agree that Sam is selling snake oil. :)

Just wanted to point out that a lot of the fundamental “tech” is being used for genuinely useful things!


The details were released previously in the Cell paper I linked a couple of posts above this. It is behind a paywall (my university gave me access).


thanks for the tip!




