
I think the trick is in observing what is “better” in this model. EQ is supposed to be “better” than 4o, according to the prose. But how can an LLM have emotional-anything? LLMs are regurgitation machines; emotion has nothing to do with it.


Words have valence, and valence reflects the state of emotional being of the user. This model appears to understand that better and responds like it’s in a therapeutic conversation and not composing an essay or article.

Perhaps they are/were going for a stealth therapy-bot with this.


But there is no actual empathy; it isn’t possible.


But there is no actual death or love in a movie or book, and yet we react as if there is. That’s literally what qualifies a movie as a “tear-jerker”. I wanted to see Saving Private Ryan in theaters to bond with my Grandpa, who received a Purple Heart in the Korean War, and I was shut down almost instantly by my family. All special effects and no real death, but he had PTSD, and one night he thought his wife was the N.K. and nearly choked her to death, because he was having flashbacks and she had come into the bedroom quietly so he wouldn’t be disturbed. Extreme example, yes, but for some people, having him lose his shit in public over something analogous is near enough that it makes no difference.


You think that it isn’t possible to have an emotional model of a human? Why, because you think it is too complex?

Empathy done well seems like 1:1 mapping at an emotional level, but that doesn’t imply to me that it couldn’t be done at a different level of modeling. Empathy can be done poorly, and then it is just projection.


It has been not only possible to simulate empathetic interaction with computer systems, but demonstrably achievable for close to sixty years [0].

0 - https://en.wikipedia.org/wiki/ELIZA
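
For a sense of how little machinery that takes, here is a minimal ELIZA-style sketch in Python; the rules and reflections are hypothetical toy examples of the general technique (reflective pattern matching), not Weizenbaum's actual script:

    import re

    # A few ELIZA-style rules: a regex that matches part of the user's
    # statement and a template that reflects it back as a question.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    ]

    # Swap first/second person so the reflection reads naturally.
    REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

    def reflect(text):
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

    def respond(statement):
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))
        return "Please, go on."  # default when nothing matches

    print(respond("I feel lost since my father died"))
    # -> Why do you feel lost since your father died?

There is no model of the user's emotional state anywhere in there; the felt “empathy” is supplied entirely by the reader, which was largely Weizenbaum's point.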


I don’t think it’s possible for 1s and 0s to feel… well, anything.


Imagine two greeting cards. One says “I’m so sorry for your loss”, and the other says “Everyone dies, they weren’t special”.

Does one of these have a higher EQ, despite both being ink and paper and definitely not sentient?

Now, imagine they were produced by two different AIs. Does one AI demonstrate higher EQ?

The trick is in seeing that “EQ of a text response” is not the same thing as “EQ of a sentient being”.


I agree with you. I think it is dishonest for them to post-train 4.5 to feign sympathy when someone vents to it. It’s just weird. They showed it off in the demo.


Why? The choice not to do the post-training would be every bit as intentional, and no different from post-training it to be less sympathetic.

This is a designed system. The designers make choices. I don’t see how failing to plan and design for a common use case would be better.


We do not know if it is capable of sympathy. Post-training it to reliably be sympathetic feels manipulative. Can it at least be post-trained to be honest? Dishonesty is immoral. I want my AIs to behave morally.


AIs don't behave. They are a lot of fancy maths. Their creators can behave in ethical or moral ways though when they create these models.

(Not to say that the people who work on AI aren’t incredibly talented, more that the AI itself isn’t human.)


That’s just pedantic and unprovable, since you can’t know whether it has a qualitative experience or not.

Training it to pretend to be a feelingless robot or a sympathetic mother both seem weird to me. It should just state facts with us.



