How many people do dogs kill each year, in circumstances nobody would justify?
How many people do frontier AI models kill each year, in circumstances nobody would justify?
The Pentagon has already received Claude's help in killing people, but the ethics and legality of those acts are disputed. When a dog kills a three-year-old, nobody calls that a good thing or even the lesser evil.
> How many people do frontier AI models kill each year, in circumstances nobody would justify?
Dunno, stats aren't recorded.
But I can say there are wrongful-death lawsuits naming some of the labs and their models. And there was that anecdote a while back about botulism from raw-garlic-infused olive oil, a search for which reminded me about AI-generated mushroom "guides": https://news.ycombinator.com/item?id=40724714
Do you count deaths by self-driving car in such stats? If someone takes medical advice and dies, is that reported the way we report people who drive off an unsafe bridge while following Google Maps?
But this is all danger from incompetence. The opposite, danger from competence, is where the models enable people to become more dangerous than they otherwise would have been.
With a competent planner that has no moral compass, you only find out how bad it can be when it's much too late. I don't think LLMs are that dangerous yet; even on METR's timelines that's three years off. But I think it's best to aim for where the ball will be, rather than where it is.
Then there's LLM psychosis, which isn't on the competent-incompetent spectrum at all, and I have no idea if that affects people who weren't already prone to psychosis, or indeed if it's really just a moral panic hallucinated by the milieu.
The same law prevents you and me and a hundred thousand lone wolf wannabes from building and using a kill-bot.
The question is: at what point does some AI become competent enough to engineer one? And that's just one example, an illustration of the category rather than the sole specific risk.
If the model makers don't know that in advance, the argument given for delaying GPT-2 applies: you can't take back publication, better to have a standard of excess caution.
Sounds like you're betting everyone's future on that remaining true, and not flipping.
Perhaps it won't flip. Perhaps LLMs will always be worse at this than humans. Perhaps all that code I just got was secretly outsourced to a cabal in India who can type faster than I can read.
I would prefer not to bet that universities will continue to be better at solving problems than LLMs. And not just LLMs: AIs have been busy finding new dangerous chemicals since before most people had heard of LLMs.
> Absolutely everyone should be allowed to access AI models without any restraints/safety mitigations.
You reckon?
OK, so now every random lone-wolf attacker can ask for help designing and carrying out whatever attack, with whatever DIY weapon system, the AI is competent to help with.
Right now, what keeps us safe from serious threats is the limited competence of both humans and AI, including at stripping the alignment from open models, plus whatever safeties are in ChatGPT models specifically, and the fact that ChatGPT is synonymous with LLMs for 90% of the population.
Agreed on the first part; for the second, that's straightforwardly
Is != ought
But do we want to be the kind of people who fail to even consider moral rights of some new group of (for the sake of argument, I don't expect them to be yet) conscious minds?
> That's never been how humans work. Going back to the specific example: the question is so nonsensical on its face that the only logical conclusion is that the asker is taking the piss out of you.
Or a typo, or changing one's mind partway through.
If someone asked me, I might well not be paying enough attention and just say "walk"; but I might also say "Wa… hang on, did you say walk or drive your car to a car wash?"
> If you had no connections, money, investments, or any property/possessions to sell, what would be your first steps?
That's a harsh set of constraints.
I think the only answers remaining with those in place are "prostitution" and "crime", aren't they?
Drop a digit to $100 in a week and you could do it with any normal job, stacking shelves or whatever, but I don't see that getting you to $1k. $500 perhaps, from what I hear of US jobs (you did mean USD, not any of the other dollars, right?), but not $1k.
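Rough back-of-envelope (the ~$12/hr entry-level wage is my own assumption, not a figure from the thread):

    hourly_wage = 12        # assumed US entry-level wage, $/hr
    hours_per_week = 40     # one full week of shifts
    gross = hourly_wage * hours_per_week
    print(gross)            # 480: roughly $500 gross, well short of $1k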
Well… judging by his behaviour all the times he's been told "no" by domain experts, and given that random reward schedules are highly addictive (which in this context means "on some occasions he's even been right when he told experts he knew better"), I think it's very plausible that someone told him what Epstein was, and he ignored and/or fired them for saying so because he didn't want it to be the case.
But that's the positive spin, where Musk actually didn't know and was simply an idiot, and at this point I'm tired of giving him the benefit of the doubt.
Here are some rules about dogs: https://en.wikipedia.org/wiki/Dangerous_Dogs_Act_1991