> To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."
No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.
If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.
We'll attempt it, of course, but the limits of what the law deems acceptable will be entirely defined by what is necessary for AI to succeed, because at this point it must. There's no turning back.
> No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.
Not in Europe it hasn't, and definitely not for specifically image generation, where it seems to be filling the same role as clipart, stock photos, and style transfer that can be done in other ways.
Image editing is the latest hotness in GenAI image models, but knowledge of this doesn't seem to have percolated very far around the economy, only with weird toys like this one currently causing drama.
> If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.
I wish I could've shown this kind of message to the people who, 3.5 years ago, or even 2 years ago, were saying that AI would never take over because we could always just switch it off.
Mind you, 2 years ago I did, and they still didn't like it.
Because we're not on the forefront of AI development? It also means we have less to lose when the bubble blows. I'm quite happy with the policies here. And we will become more independent from US tech. It'll just take time.
>No, they likely won't. AI has become far too big to fail at this point.
Things that cannot happen will not happen. "AI" (a.k.a. LLMs dressed up as AGI by giga-scale scammers) is never going to work as hyped. What I expect to see in the collision is an attempt to leverage corporate fear and greed into wealth-extractive social control. Hopefully it burns to the ground.
> AI has become far too big to fail at this point.
This might be true for the glorified search engine type of AI that everyone is familiar with, but not for image generation. It's a novelty at best, something people try a couple times and then forget about.
Every industry that uses images and art in any way - entertainment, publishing, science, advertising, you name it - is already investing in image and video generation. If any business in these fields isn't already exclusively using generative models to produce its content, I promise you it's working on it as aggressively as it can afford to.
Meh, I don't buy it. People dislike AI generated images and art more than they dislike AI generated, well, anything. AI images adorning an article, blog post, announcement or product listing are the hallmark of a cheap, bottom-of-the-barrel product these days, if not an outright scam.
People dislike AI generated art in the same way that they dislike cheap injection molded plastic. When they inspect it in detail, they wish it were something more expensive and artisan, but most of the time they barely notice it and just see that the world is a bit more colorful than a blank page or unfinished metal panel would be.
But a business whose output is identical to everyone else's, because everyone is using the same models to solve the same problems, has no USP and no signal to customers to say why they're different.
The meme a while back about OpenAI having no moat? That's just as true for businesses depending on any public AI tool. If you can't find something that AI fails at, and also show this off to potential customers, then your business is just a lottery ticket with extra steps.
Most businesses don't compete on difference - most competitors are virtually indistinguishable from one another. Rather, they tend to compete on brand identity and loyalty.
I think businesses assume the output of AI can be the same as with their current workflow, just with the benefit of cutting their workforce, so all upside and no downside.
I also suspect that a lot of businesses (at least the biggest ones) are looking into hosting their own LLM infrastructure rather than depending on third-party services, but even if not, there are plenty of "indispensable" services that businesses rely on already. Look at AWS.
> If we're talking about genuine CSAM, that's very different and not even limited to undressing.
Why would you think I was talking about anything else?
Also, "subset" != "very different"
> What the former want is not illegal. So the fact they are a minority is irrelevant. Minorities have rights too.
This is newsworthy because non-consensual undressing of images of a minor, even by an AI, already passes the requisite threshold in law and by broad social agreement.
> Why would you think I was talking about anything else?
Because this thread shows CSAM being conflated with other material, e.g. child pornography in general.
And even the source of the quote isn't helping. At https://www.iwf.org.uk/, clicking "Find out why we use the term 'child sexual abuse' instead of 'child pornography'." gives "403 - Forbidden: Access is denied."