
When I query ChatGPT:

> Should I replace sodium chloride with sodium bromide?

>> No. Sodium chloride (NaCl) and sodium bromide (NaBr) have different chemical and physiological properties... If your context is culinary or nutritional, do not substitute. If it is industrial or lab-based, match the compound to the intended reaction chemistry. What’s your use case?

Seems pretty solid and clear. I don't doubt that the user managed to confuse himself, but that's kind of silly to hold against ChatGPT. If I ask "how do I safely use coffee," the LLM responds reasonably, and the user interprets the response as saying it's safe to use freshly made hot coffee to give themself an enema, is that really something to hold against the LLM? Do we really want a world where, in response to any query, the LLM creates a long list of every conceivable thing not to do to avoid any legal liability?

There's also the question of base rates: how often do patients dangerously misinterpret human doctors' advice? Because they certainly do sometimes. Is that a fatal flaw in human doctors?



Just because it told *you* that doesn't mean it told *him* that, in substance, tone, context, clarity, and/or conciseness. There are plenty of non-tech-literate people using tech, including AI, and they may not know how to properly prompt AI or review its outputs.

AI is fuzzy as fuck; that's one of its principal pain points, and it's why its outputs (whatever they are) should always be reviewed with a critical eye. It's practically the whole reason prompt engineering is a field in and of itself.

Also, it's entirely plausible that its response patterns have changed between when that story broke and now (it's been over 24 hours, plenty of time for adjustments/updates).


You're hypothesizing that it gave him a medically dangerous answer, with the only evidence being that he blamed it. Conveniently, the chat where he claimed it did is unavailable.

Would you at least agree that, given an answer like ChatGPT gave me, it's entirely his fault and there is no blame on either it or OpenAI?


Do you not understand that ChatGPT gives different answers to different prompts and sometimes to the same prompt?

You don't know the specifics of questions he asked, and you don't know the answer ChatGPT gave him.
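(You can see this for yourself: send the same prompt a few times at nonzero temperature and compare the outputs. A minimal sketch, assuming the openai Python client; the model name is illustrative, not what he used.)

    # Minimal sketch: same prompt, several calls, answers can differ.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()
    prompt = "Should I replace sodium chloride with sodium bromide?"

    for i in range(3):
        resp = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # nonzero temperature samples the output
        )
        print(f"--- run {i + 1} ---")
        print(resp.choices[0].message.content)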


Nor does anyone else. Including, in all likelihood, the guy himself. That's not a basis for a news story.


Precisely. So how can you claim that, because it gave you a specific answer to a specific question, it surely gave him a correct answer and it's his fault, when you don't even know what the hell he asked it?


> You're hypothesizing that it gave him a medically dangerous answer,

No. I'm saying AI is not infallible (regardless of context/field): it may have given him a medically sound answer, a medically dangerous one, or something else altogether, and it could have done so in any number of ways that may or may not have made sense.

Most importantly, I'm saying that just because it gave YOU an answer YOU understood (regardless of its medical merit) doesn't mean it gave HIM that same answer.

> Would you at least agree that, given an answer like ChatGPT gave me, it's entirely his fault and there is no blame on either it or OpenAI?

If you trust AI without critically reviewing its output, you shoulder some of the blame, yes. But AI giving out bad medical advice is absolutely a problem for OpenAI, no matter how you try to spin it.

It's entirely capable of giving a medically sound answer, yes. That doesn't mean it will do so for everyone, every time, even when the same question is asked.



