Hacker News

Very neat! A lot of small LLMs have a similar failure mode where they get stuck repeating a single token, or loop on a 2-3 token cycle, until they hit the max message size cutoff. Very ironic that it's about a quine.
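(That failure mode is easy to detect mechanically. A minimal sketch, assuming the output is just a list of token strings, of checking whether a stream's tail is a short cycle repeated several times; `max_period` and `min_repeats` are made-up thresholds, not any sampler's real parameters:)

```python
def stuck_in_loop(tokens, max_period=3, min_repeats=4):
    """Return True if the tail of `tokens` is a cycle of length
    <= max_period repeated at least min_repeats times in a row."""
    for period in range(1, max_period + 1):
        needed = period * min_repeats
        if len(tokens) < needed:
            continue
        tail = tokens[-needed:]
        cycle = tail[:period]
        # Every position in the tail must match the cycle it implies.
        if all(tail[i] == cycle[i % period] for i in range(needed)):
            return True
    return False

# "the the the the" trips the single-token check;
# "a b a b a b a b" trips the period-2 check.
```

(A real decoder would instead nudge the sampler away with a repetition penalty, but a check like this is enough to cut a runaway generation off early.)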


You mean an e-quine?


GPT-5 can't handle 2 things: an esoteric quine or an aquatic equine


You get the "more clever than GPT5" award today!




