Hacker News

We don't know enough about consciousness to be able to conclusively confirm or deny that LLMs are conscious.

Claiming otherwise is overconfidence, of which there is no shortage in the AI space.



That's the sort of convenient framing that lets you get away with hand-wavy statements the public eats up, like calling LLM development 'superintelligence'.

It's good as a conversation starter on Twitter or in a pitch deck, but there is real, measurable technology they are producing, and it's pretty clear what it is and what it isn't.

In 2021 they were already discussing these safety ideas in grandiose terms, when all they had were basic Lego building blocks (GPT-1). This isn't just a thought experiment to them.


As they well should have. Because they have more foresight than a pile of rocks.

They knew they had the beginnings of an incredibly capable technology at their hands, and they knew that intelligence is an extremely dangerous thing.

And so far? The capabilities of today's systems are already impressive, and they keep improving. If you're thinking "what these systems do today isn't that bad", that's the wrong frame. Concern yourself with the capabilities of a bleeding-edge AI in the year 2035.



