
I personally take no opinion about whether or not they can actually solve anything, because I am not a psychologist and have absolutely no idea how good or bad ChatGPT is at this sort of thing, but I will say I'd rather the company at least tries to do some good, given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.


> but I will say I'd rather the company at least tries to do some good, given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.

Sure! Let's take a look at OpenAI's executive staff to see how equipped they are to take a morally different approach from Meta's.

Fidji Simo - CEO of Applications (formerly Head of Facebook at Meta)

Vijaye Raji - CTO of Applications (formerly VP of Entertainment at Meta)

Srinivas Narayanan - CTO of B2B Applications (formerly VP of Engineering at Meta)

Kate Rouch - Chief Marketing Officer (formerly VP of Brand and Product Marketing at Meta)

Irina Kofman - Head of Strategic Initiatives (formerly Senior Director of Product Management for Generative AI at Meta)

Becky Waite - Head of Strategy/Operations (formerly Strategic Response at Meta)

David Sasaki - VP of Analytics and Insights (formerly VP of Data Science for Advertising at Meta)

Ashley Alexander - VP of Health Products (formerly Co-Head of Instagram Product at Meta)

Ryan Beiermeister - Director of Product Policy (formerly Director of Product, Social Impact at Meta)


The general rule of thumb is this.

When given the right prompts, LLMs can be very effective at therapy. Certainly my wife gets a lot of mileage out of having ChatGPT help her reframe things in a better way. However, "the right prompts" are not the ones that most mentally ill people would choose for themselves. And it is very easy for ChatGPT to become part of a person's delusion spiral, rather than be a helpful part of trying to solve it.
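
To make "the right prompts" concrete, here is a minimal sketch of the kind of reframing setup I mean, using the OpenAI Python SDK. The system prompt wording and the model name are illustrative assumptions on my part, not anything clinically validated:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical "reframing" instruction -- illustrative only,
    # not a clinically validated prompt.
    REFRAME_PROMPT = (
        "Help the user restate the situation they describe in a more "
        "balanced, less catastrophizing way. Do not diagnose or give "
        "medical advice, and point the user toward professional help "
        "if they mention self-harm."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "system", "content": REFRAME_PROMPT},
            {"role": "user", "content": "I bombed the presentation. I ruin everything."},
        ],
    )
    print(response.choices[0].message.content)

The point being: the framing instruction, not the model, is doing most of the safety work here, and a person mid-spiral is exactly the person least likely to write that instruction for themselves.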


Is it better or worse than the alternatives? Where else would a suicidal person turn? A forum with other suicidal people? Dry Wikipedia stats on suicide? Perhaps friends? Knowing how ChatGPT replies to me, I'd have a lot of trouble getting negatively influenced by it, any more than by the yellow pages. Yeah, it used to try harder to be your friend, but GPT-5 seems pretty neutral and distant.


I think that you will find a lot of strong opinions, and not a lot of hard data. Certainly any approach can work out poorly. For example, antidepressants come with warnings about suicide risk. The reason is that they can enable people to take action on their suicidal feelings, before their suicidal feelings are fixed by the treatment.

I know that many teens turn to social media. My strong opinions against that show up in other comments...


> The reason is that they can enable people to take action on their suicidal feelings, before their suicidal feelings are fixed by the treatment.

I see that explanation for the increased suicide risk caused by antidepressants a lot, but what’s the evidence for it?

It doesn’t necessarily have to be a study, just a reason why people believe it.


Case studies support this. Which is a fancy way to say, "We carefully documented anecdotal reports and saw what looks like a pattern."

There is also a strong parallel to manic depression. Manic depressives have a high suicide risk, and it usually happens when they are coming out of depression, with akathisia (a fancy way to say inner restlessness) being the leading indicator. The same pattern is seen with antidepressants: the patient gets treatment, develops akathisia, then attempts suicide.

But, as with many things to do with mental health, we don't really know what is going on inside of people, while also knowing that their self-reports are, shall we say, creatively misleading. So it is easy to have beliefs about what is going on, and rather harder to verify them.


Can you point me to one of these reviews of case reports? As it is, your reply is too vague to be helpful.


The article links to the case of Adam Raine, a depressed teenager who confided in ChatGPT for months and committed suicide. The parents blame ChatGPT. Some of the quotes definitely sound like encouraging suicide to me. It’s tough to evaluate the counterfactual though. Article with more detail: https://www.npr.org/sections/shots-health-news/2025/09/19/nx...


Holy shit, this is so fucking wrong and dangerous. No, LLMs are not and cannot be “very effective at therapy”.


Can you put just a little more effort into explaining why you say that?


You know, usually it’s positive claims which are supposed to be substantiated, such as the claim that “LLMs can be good at therapy”. Holy shit, this thread is insane.


You don't seem to understand how burden of proof works.

My claim that LLMs can do effective therapeutic things is a positive claim. My report of my wife's experience is evidence. My example of something it has done for her is something that other people, who have experienced LLMs, can sanity-check and decide whether they think this is possible.

You responded by saying that it is categorically impossible for this to be true. Statements of impossibility are *ALSO* positive claims. You have provided no evidence for your claim. You have failed to meet the burden of proof for your position. (You have also failed to clarify exactly what you consider impossible - I suspect that you are responding to something other than what I actually said.)

This is doubly true given the documented effectiveness of tools like https://www.rosebud.app/. Does it have very significant limitations? Yes. But does it deliver an experience that helps a lot of people's mental health? Also, yes. In fact that app is recommended by many therapists as a complement to therapy.

But is it a replacement for therapy? Absolutely not! As they themselves point out in https://www.rosebud.app/care, LLMs consistently miss important things that a human therapist should be expected to catch. With the right prompts, LLMs are good at helping people learn and internalize positive mental health skills. But that kind of use case only covers some of the things that therapists do for you.

So LLMs can and do do effective therapeutic things when prompted correctly. But they are not a replacement for therapy. And, of course, an unprompted LLM is unlikely to do those potentially helpful things on its own.


“My wife feels that…” and “people we paid to endorse our for-profit app said…” is not evidence no matter how much you want it to be.


No, it is evidence. It is evidence that can be questioned and debated, but it is still evidence.

Second, you misrepresent. The therapists that I have heard recommend Rosebud were not paid to do so. They were doing so because they had seen it be helpful.

Furthermore you have still not clarified what it is you think is impossible, or provided evidence that it is impossible. Claims of impossibility are positive assertions, and require evidence.


You added nothing to the thread. Just get out.


lol that’s rich given we’re in a thread about using ChatGPT as a therapist.


I wasn't saying your position is wrong, just that it doesn't really make a good contribution to the discussion.



