these cases have to play out to decide how to regulate "AI safety"
otherwise legislative bodies and agency rulemakers are just guessing at industry trends
nobody knew about "AI memory and sycophancy driven by it being a hit with user engagement metrics" a year ago: not lawmakers, not the companies that implemented it, not the freaked-out companies that implemented it solely to compete for stickiness
> otherwise legislative bodies and agency rulemakers are just guessing at industry trends
Assigning liability requires understanding the thing. But it is also a game of aligning incentives.
We make banks liable for fraud even when they’re not really culpable, just involved. Our justification is that the government is giving them a massive amount of power in being able to create money, and that this power comes with responsibilities. Well? We’re giving AI companies literally power. (Electricity.) Maybe once you’re a $10+ billion AI company, you become financially responsible for your users fucking up, even if you’re morally only tangentially involved. (Making no comment on the tangency of this case.)
If a year ago nobody knew about LLMs' propensity to encourage poor life choices, up to and including suicide, that's spectacular evidence that these things are being deployed recklessly and egregiously.
I personally doubt that _no one_ was aware of these tendencies - a year is not that long ago, and I think I was seeing discussions of LLM-induced psychosis back in '24, at least.
Regardless of when it became clear, we have a right and duty to push back against this kind of pathological deployment of dangerous, not-understood tools.
ah, this was the comment to split hairs on the timeline, instead of engaging with how AI safety should be regulated
I think the good news about all of this is that ChatGPT would have actually discouraged you from writing that. In thinking mode it would have said "wow this guy's EQ is like negative 20" before saying "you're absolutely right! what if you ignored that entirely!"
I’m sorry but I’m going to call bullshit on the “nobody knew there could be issues with the things this algorithm spits out” when these companies openly brag about training their models on such stable corpora as…checks notes…Reddit, among other things.
that has nothing to do with sycophancy and memory. reddit comments are quite adversarial in most communities, which is the opposite of how these LLMs behave. the training is just associations
your comment is a perfect example of why legislative bodies would have tried to regulate the wrong thing without knowing the more nuanced industry trend
It doesn’t matter how much lipstick they put on the pig. The foundation is rotten, and everything that comes out of it needs to be treated as suspect.
Clearly in this case the “controls” they have do not work, and frankly your comment is a perfect example of how these companies operate: move fast, break things, and dismiss anyone trying to rein it in as unknowledgeable or lacking nuance.
Forgive me, but we’ve seen this play out before time and time again throughout history.