
I do this all the time. I start writing a comment, then think about it some more, and realize halfway through that I don't know what I'm saying.

I have the luxury of a delete button; the LLM doesn't get that privilege.



Isn't that what thinking mode is?


I tried it with thinking mode and it seems like it spiraled wildly internally, then did a web search and worked it out.

https://chatgpt.com/share/68e3674f-c220-800f-888c-81760e161d...


AIUI, they generally do all of that at the beginning. Another approach, I suppose, would be to have it generate a second pass, though that would probably ~double the inference cost.
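
A minimal sketch of that second-pass idea, assuming a generic generate(prompt) chat-completion call (a hypothetical interface, not any specific API): draft first, then have the model critique and rewrite its own draft.

    def answer_with_second_pass(generate, question: str) -> str:
        # Pass 1: produce a draft answer.
        draft = generate(f"Answer the question:\n{question}")
        # Pass 2: ask the model to review its own draft and rewrite it.
        # This second call is where the roughly-2x inference cost comes from.
        revised = generate(
            "Here is a draft answer. If any part of it is confused or "
            "unsupported, rewrite it; otherwise return it unchanged.\n\n"
            f"Question: {question}\n\nDraft: {draft}"
        )
        return revised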


If you didn't have the luxury of a delete button, such as when you're just talking directly to someone IRL, you would probably say something like "no, wait, that doesn't make any sense, I think I'm confusing myself" and then either give it another go or just stop there.

I wish LLMs would do this rather than just bluster on ahead.

What I'd like to hear from the AI about seahorse emojis is "my dataset leads me to believe that seahorse emojis exist... but when I go look for one I can't actually find one."

I don't know how to get there, though.
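
One plausible route is tool use: instead of trusting its training data, the model calls a tool that checks the actual Unicode name tables. A minimal sketch in Python with the standard-library unicodedata module (lookup is by official Unicode character name; whether the model would phrase the query as "SEAHORSE" is an assumption):

    import unicodedata

    def char_exists(name: str) -> bool:
        # unicodedata.lookup raises KeyError for undefined character names.
        try:
            unicodedata.lookup(name)
            return True
        except KeyError:
            return False

    print(char_exists("DOLPHIN"))   # True: U+1F42C DOLPHIN exists
    print(char_exists("SEAHORSE"))  # False: Unicode has no seahorse character

The general shape - believe the dataset, then verify against a ground-truth source - is the "go look for one" step the comment asks for.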


An LLM is kind of like a human where every thought they have comes out of their mouth.

Most of us humans would sound rather crazy if we did that.


There have been attempts to give LLMs backspace tokens. Since no frontier model uses them, I can only guess it doesn't scale as well as just letting the model correct itself in CoT.

https://arxiv.org/abs/2306.05426
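
For illustration, a toy decoding loop with a backspace token, in the spirit of the linked paper's backtracking action (the sample_next interface and BACKSPACE_ID here are made up for the sketch):

    BACKSPACE_ID = 50257  # assumed id for a special <backspace> token

    def decode_with_backspace(sample_next, prompt_ids, max_steps=100):
        out = list(prompt_ids)
        for _ in range(max_steps):
            tok = sample_next(out)  # hypothetical: returns the next token id
            if tok == BACKSPACE_ID:
                # The model's delete button: undo the last emitted token
                # instead of appending anything.
                if len(out) > len(prompt_ids):
                    out.pop()
            else:
                out.append(tok)
        return out

The decoding side is the easy part; the hard part, which the paper is actually about, is training the model to emit that token usefully.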




