
Why would a text generator ever be conscious? Was this really worth writing a paper about?


Animals are also next token/action generators, and we also think (simulating a string of events). Maybe humans are better at grouping these events into more powerful network activations to retrieve better results


> Animals are also next token/action generators

But for humans, the concept/thought/idea/action is formed first, and then a sequence of tokens is generated to communicate that concept/thought/idea/action.


And a lot of GPU cycles happen before next token prediction, what's your point?


The point was that next token generation for a human was based on constructing something that matches the thought that is held in the mind.

LLMs generate the next token from a probability distribution learned from their training data, conditioned on the previous tokens generated.

Under normal conditions, a human generating tokens would not diverge down a different path from the thought that they were trying to communicate. All of the words/tokens generated support the idea or thought being communicated.

LLMs frequently generate tokens that do not make sense, and researchers have argued that hallucinations cannot be eliminated under the current LLM paradigm.
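The contrast being drawn above can be made concrete: autoregressive generation is just repeated sampling from a conditional distribution over the next token. Here is a minimal sketch using a toy, made-up bigram table in place of a neural network; the vocabulary and probabilities are hypothetical, and a real LLM conditions on the entire preceding context rather than only the last token.

```python
import random

# Hypothetical bigram "model" for illustration only: P(next | prev).
# A real LLM computes this distribution with a neural network over
# the full context, not a hand-written lookup table.
BIGRAM_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def sample_next(prev_token, rng):
    """Sample the next token from the conditional distribution P(next | prev)."""
    dist = BIGRAM_PROBS[prev_token]
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

def generate(start="the", seed=0):
    """Autoregressively extend the sequence one token at a time."""
    rng = random.Random(seed)
    out = [start]
    while out[-1] != "<end>":
        out.append(sample_next(out[-1], rng))
    return out[:-1]  # drop the end-of-sequence marker

print(" ".join(generate()))
```

Note that nothing in the loop checks the output against an underlying "thought"; the sampler happily emits whatever the distribution supports, which is the mechanism behind the divergence described above.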


I think gpt-image-2 at least incorporates representations from the base model, even if the base model doesn't itself have image-output capability. And it does have image input fused directly into it, which helps make those representations more usable for image generation, so it's not just generating text.



