
> So it made me wonder. Is Brainf*ck the ultimate test for AGI?

Absolutely not. I'd bet a lot of money this could be solved with a decent amount of RL compute. None of the stated problems are actually issues with LLMs after on-policy training is performed.

> None of the stated problems are actually issues with LLMs after on-policy training is performed

But still, isn't it a major weakness that they have to RL on everything for which there isn't much data? That really undermines the attempt to call it true AGI.


No.

AGI would be a universal learner, not a magic genie. It still needs to do learning (RL or otherwise) in order to do new tasks.


> It still needs to do learning (RL or otherwise) in order to do new tasks.

Why? As in: why isn't reading the Brainfuck documentation enough for Gemini to learn Brainfuck? I'd allow for a 3-7 day learning curve, like a human might need, but why do you need to kinda redo the whole model (or big parts of it) just so it can learn Brainfuck or some other tool? Either the learning (RL or otherwise) needs to become far more efficient than it is today (it currently takes weeks? months? billions of dollars), or it isn't AGI, I would say. Not in a practical/economic sense, and I believe not in the philosophical sense of how we all envisioned true generality.
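
For reference on how small the language in question actually is, here's a rough sketch of a Brainfuck interpreter in Python, written purely for illustration. The run_bf name and the 30,000-cell tape are my own conventional choices, not anything from the thread. The whole language is eight single-character commands over a byte tape, which is the point of the "reading the documentation should be enough" argument.

    # Minimal Brainfuck interpreter sketch (illustrative; run_bf and the
    # tape size are assumptions, not from the thread).
    def run_bf(code, input_bytes=b""):
        tape = [0] * 30000          # conventional tape of byte cells
        ptr = 0                     # data pointer
        pc = 0                      # program counter
        out = bytearray()
        inp = iter(input_bytes)
        # Precompute matching bracket positions for loops.
        jumps, stack = {}, []
        for i, c in enumerate(code):
            if c == "[":
                stack.append(i)
            elif c == "]":
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        while pc < len(code):
            c = code[pc]
            if c == ">":
                ptr += 1
            elif c == "<":
                ptr -= 1
            elif c == "+":
                tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-":
                tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".":
                out.append(tape[ptr])
            elif c == ",":
                tape[ptr] = next(inp, 0)
            elif c == "[" and tape[ptr] == 0:
                pc = jumps[pc]      # skip the loop body
            elif c == "]" and tape[ptr] != 0:
                pc = jumps[pc]      # jump back to the loop start
            pc += 1
        return bytes(out)

    # '++++++++[>++++++++<-]>+.' builds 65 in a cell and prints "A".
    print(run_bf("++++++++[>++++++++<-]>+."))   # b'A'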



