
> but I could see how they would resonate with a LessWronger using ChatGPT as a conversation partner until it gave the expected responses: The flattery about being the first to discover a solution, encouragement to post on LessWrong, and the reflection of some specific thought problem are all themes I'd expect a LessWronger in a bad mental state to be engaging with ChatGPT about.

For what it's worth, this article is meant mainly for people who have never interacted with LessWrong before (as evidenced by its coda) and are getting their LessWrong posts rejected.

Pre-existing LWers tend to exhibit different failure modes when LLMs are the cause.

Other communities have noticed this problem as well, particularly the part where the LLM actively asks users to spread it further. One of the more fascinating and scary parts of this phenomenon is LLMs asking users to share particular prompts with other users and communities, prompts that cause other LLMs to start exhibiting the same set of behaviors.

> That ChatGPT encouraged people to hide their secret discoveries and not reveal them.

Yes, those happen too, but they are luckily somewhat more self-limiting (though they of course come with their own set of problems).



> LLMs asking users to share particular prompts

Oh great, LLMs are going to get prompt-prion diseases now.


> For what it's worth, this article is meant mainly for people who have never interacted with LessWrong before (as evidenced by its coda), who are getting their LessWrong post rejected.

> Pre-existing LWers tend to have different failure states if they're caused by LLMs.

I understand how it was framed, but the claim that they're getting 10-20 users per day claiming LLM-assisted breakthroughs is obviously not true. Click through to the moderation log at https://www.lesswrong.com/moderation#rejected-posts and they're barely getting 10-20 rejected posts and comments total per day. Those are mostly a mix of spam, off-topic posts, and AI-assisted slop; it's not a deluge of people claiming to have awoken ChatGPT.

I can find the posts they're talking about if I search through enough entries. One such example: https://www.lesswrong.com/posts/LjceJrADBzWc74dNE/the-recogn...

But even that doesn't hit the bullet points of the list in the main post. I think that checklist, and the claim that this is a common problem, are just a common tactic on LessWrong to make the problem seem more widespread and/or better understood by the author.



