Grok Sexual Images Draw Rebuke, France Flags Content as Illegal (yahoo.com)
58 points by akutlay 6 days ago | 94 comments




It seems X's Grok has become the first offering from a large LLM provider to weaken its content moderation rules. If people don't push back hard enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.

True, CSAM should be blocked by all means. That's clear as day.

However, I think that for Europe the moderation of ordinary sexual content (even in text chat) is way over the top. I know the US is very prudish, but here most people aren't.

If you mention anything erotic to a mainstream AI, it immediately shuts down, which is super annoying because it makes the tool unusable for such topics. It feels a bit like foreign morals are being forced upon us.

Limits on topics that aren't illegal should be selectable by the user, not hard-baked to the most restrictive standard, similar to the way I can switch off SafeSearch in Google.

However, CSAM generation should obviously be blocked, and it's very illegal here too.


Funnily enough, Mistral is as heavily censored as ChatGPT.

You have to search Hugging Face for role-playing models to get a decent level of erotic content, and even that doesn't guarantee a pleasant experience.


There's some misunderstanding here. The article makes absolutely no mention of CSAM. The objection is to "sexual content on X without people’s consent".

It's the nonconsensual generation of sexual content depicting real people that breaks the law, along with things like CSAM generation, which is obviously illegal.

> It feels a bit like foreign morals are being forced upon us.

Welcome to the rest of the world, where US morals have been forced upon us for decades. You should probably get used to it.


Whether it was the "first" definitely depends on your standards and focus: https://cloudsecurityalliance.org/blog/2025/02/19/deepseek-r...

This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more absurd that people on Hacker News advocate for it.
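
For instance, a minimal sketch of local inference using the Hugging Face transformers library; the checkpoint named below is just one example of an open-weight model, and hardware requirements vary:

    # Minimal sketch: running an open-weight chat model locally with the
    # `transformers` library. The checkpoint is only an example; any
    # open-weight model your hardware can hold would do.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-1.5B-Instruct",  # example open-weight checkpoint
        device_map="auto",  # put weights on a GPU if one is available
    )
    out = generator("Say hello.", max_new_tokens=32)
    print(out[0]["generated_text"])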

Safety isn't just implemented via system prompts; it's also a matter of training and fine-tuning, so what you're saying is incorrect.

If you think people here believe that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.

More broadly, if you don't reasonably regulate your own models and related work, you attract government regulation.


I’ve run into “safeguards” far more frequently than I’ve actually tried to go outside the bounds of the acceptable use policy. For example, when I was attempting to use ChatGPT to translate a journal handwritten in Russian that contained descriptions of violent acts. I wasn’t generating violent content, much less advocating it - I was trying to understand something written by someone who had already committed a violent act.

> If you think people here believe that models should enable CSAM, you're out of your mind.

Intentional creation of “virtual” CSAM should be prosecuted aggressively. Note that that’s not the same thing as “models capable of producing CSAM”. I very much draw the line in terms of intent and/or result, not capability.

> There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.

I agree, but believe we are quite far away from “reasonable safety”, and far away from “reasonable safeguards”. I can get GPT-5 to try to talk me into committing suicide more easily than I can get it to translate objectionable text written in a language I don’t know.


When these models are fine-tuned to allow any kind of nudity, I would guess they can also be used to generate nude images of children. There is a level of generalization in these models. So it seems to me that arguing for restrictions that could be effectively implemented only via prompt validation is just indirect argumentation against open-weight models.

> When these models are fine-tuned to allow any kind of nudity

If you're suggesting Grok is fine-tuned to allow any kind of nudity, some evidence would be in order.

The article suggests otherwise: "The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute."


Why does that seem absurd to you?

Don't feed the troll


What's amazing to me is that this is silenced by HN. It should be a major topic of discussion here.

What makes you say it is silenced by HN?

It got upvoted quite quickly, then flagged. The way the algorithm works, if a hot topic is flagged for some time, the story will never show up on the front page.

This was in fact flagged yesterday (though there's no indication in the title), approximately two hours after it reached the second page.

This seems like it should be on the HN front page.

And yesterday.


Surely it is missing just because many have flagged it. But that's far short of silencing it.

I imagine so—but then that is just the HN community silencing it. Maybe flagging should have some different kind of weighting?

I think discussions should be made immune from user flagging once they have more than a certain number of comments (50? 100?). If it's a problematic topic, then the administrator(s) should be able to flag it if they deem it necessary.

That would surely suffer from the cobra effect.

It is not silenced.


Grok breaks France's hate-speech laws all the time, but they're only going after it because it can create images of naked people? Musk's propaganda nexus should have been banned here years ago, but not for this stupid reason.

It makes sexual images of real people without their consent. That's what's breaking the law.

Is an image of someone wearing only a bikini seriously claimed to be sexual here?

Not by this article, for sure.

"The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute.

Still, users have prompted Grok to digitally remove clothing from photos — mostly of women — so the subjects appeared to be wearing only underwear or bikinis."


Try doing that to your coworker and report back on how HR describes it in your offboarding meeting.

The question is about an image, not an action.

Removing people's clothes without their consent is assault; it doesn't matter that in another setting, where they did consent, it would be fine. It obviously is sexual if you look at the intent of the people doing it, not at the clothing itself.

> Removing people's clothes without their consent is assault

Didn't you know? Grok does not actually remove people's clothes. Instead it pastes from photos of /other people who are already naked/.


It makes the result look realistic with their likeness and body shape, though, so it's not merely "pasting" from photos of other people. And quite honestly, I find it morally objectionable to have a tool that makes violating consent and bodily autonomy so trivial. Filters exist; they should be used. It's nothing like Photoshop: it runs on their servers, using their software, and the result is then uploaded, by them, onto their website. Yes, I definitely hold X and Grok accountable for the harm it causes. It's nothing like offline software.

It would be Musk automating CSAM. This is how we're starting 2026?

The article doesn't mention CSAM. It is about "created sexualized images of people including minors" and CSAM is not that.

“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”

Not possible.


> Not possible.

To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."

Also… I think they probably could solve this. AI image analysis is a thing; AI that estimates age from an image has existed for ages. The idea of throwing an entire internet's worth of images at a training run just to make a single "allowed/forbidden" filter isn't even ridiculous compared to the scale of all the other things going on right now.
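
As a rough sketch of how available the building blocks already are, assuming a public age-bracket classifier (nateraw/vit-age-classifier on Hugging Face is one example; this is purely illustrative, not a claim about what any provider actually runs):

    # Sketch: flag a generated image if an off-the-shelf age classifier
    # thinks a depicted person falls in an under-20 bracket.
    from transformers import pipeline
    from PIL import Image

    age_clf = pipeline("image-classification", model="nateraw/vit-age-classifier")

    def looks_underage(image: Image.Image, threshold: float = 0.2) -> bool:
        # Bracket labels assumed from the example model's card.
        minor_brackets = {"0-2", "3-9", "10-19"}
        results = age_clf(image, top_k=10)  # more than the label count, so we get all scores
        return any(r["label"] in minor_brackets and r["score"] > threshold
                   for r in results)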


>To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."

No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.

If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.

We'll attempt it, of course, but the limits of what the law deems acceptable will be entirely defined by what is necessary for AI to succeed, because at this point it must. There's no turning back.


> No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.

Not in Europe it hasn't, and definitely not specifically for image generation, where it seems to be filling the same role as clipart, stock photos, and style transfer that can be done in other ways.

Image editing is the latest hotness in GenAI image models, but knowledge of this doesn't seem to have percolated very far through the economy; so far it mostly shows up in weird toys like this one, currently causing drama.

> If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.

I wish I could've shown this kind of message to the people who, 3.5 years ago, or even 2 years ago, were saying that AI will never take over because we can always just switch it off.

Mind you, two years ago I did, and they still didn't like it.


I'm sorry to tell you this, but the EU has already been lost.

Because we're not at the forefront of AI development? It also means we have less to lose when the bubble bursts. I'm quite happy with the policies here. And we will become more independent from US tech. It'll just take time.

>No, they likely won't. AI has become far too big to fail at this point.

Things that cannot happen will not happen. "AI" (aka LLMs dressed up as AGI by giga-scale scammers) is never going to work as hyped. What I expect to see in the collision is an attempt to leverage corporate fear and greed into wealth-extractive social control. Hopefully it burns to the ground.


> AI has become far too big to fail at this point.

This might be true for the glorified search engine type of AI that everyone is familiar with, but not for image generation. That's a novelty at best, something people try a couple of times and then forget about.


Every industry that uses images and art in any way - entertainment, publishing, science, advertising, you name it - is already investing in image and video generation. If any business in these fields isn't already exclusively using generative models to produce its content, I promise you it's working on it as aggressively as it can afford to.

Grok is a novelty, but that's Grok.


Meh, I don't buy it. People dislike AI-generated images and art more than they dislike AI-generated, well, anything. AI images adorning an article, blog post, announcement, or product listing are the hallmark of a cheap, bottom-of-the-barrel product these days, if not an outright scam.

People dislike AI-generated art in the same way that they dislike cheap injection-molded plastic. When they inspect it in detail, they wish it were something more expensive and artisanal, but most of the time they barely notice it and just see that the world is a bit more colorful than a blank page or unfinished metal panel would be.

For context, the top 5 HN links as of this comment contain one attributed (https://xeiaso.net/notes/2026/year-linux-desktop/, whose characters page discloses Stable Diffusion usage) and one likely (https://www.madebywindmill.com/tempi/blog/hbfs-bpm/, a high-context unattributed image with no Tineye results) AI-generated image.


FWIW, replacing that is on my TODO list, but my TODO list is long.

Entirely reasonable if you ask me!

Businesses don't care; it's more important to the bottom line to use AI than not.

And they know that eventually people will just learn to accept it.


I am uncertain about this.

Yes, GenAI content is cheap.

But a business whose output is identical to everyone else's, because everyone is using the same models to solve the same problems, has no USP and no signal to customers to say why they're different.

The meme a while back about OpenAI having no moat? That's just as true for businesses depending on any public AI tool. If you can't find something that AI fails at, and also show this off to potential customers, then your business is just a lottery ticket with extra steps.


Most businesses don't compete on difference - most competitors are virtually indistinguishable from one another. Rather they tend to compete on brand identity and loyalty.

I think businesses assume the output of AI can be the same as with their current workflow, just with the benefit of cutting their workforce, so all upside and no downside.

I also suspect that a lot of businesses (at least the biggest ones) are looking into hosting their own LLM infrastructure rather than depending on third-party services, but even if not, there are plenty of "indispensable" services that businesses rely on already. Look at AWS.


> Most businesses don't compete on difference - ... Rather they tend to compete on brand identity and loyalty.

Without a difference, brand identity and loyalty are impossible to build.


> We don't want models that do this.

But plenty of people do want them. Grok is meeting demand.


"We the people" in agregate.

"Many individuals" != democratic majority.

To argue otherwise is to claim that the ~1% of the population who are into this are going to sway the governments or the people they represent.


If we're talking about undressing, there is no aggregate. Some people want something; others want them not to have it. Simple.

What the former want is not illegal. So the fact that they are a minority is irrelevant. Minorities have rights too.

If we're talking about genuine CSAM, that's very different and not even limited to undressing.


> If we're talking about genuine CSAM, that's very different and not even limited to undressing.

Why would you think I was talking about anything else?

Also, "subset" != "very different"

> What the former want is not illegal. So the fact that they are a minority is irrelevant. Minorities have rights too.

This is newsworthy because non-consensual undressing of images of a minor, even by an AI, already passes the requisite threshold in law and by broad social agreement.

This is not a protected minority.


> Why would you think I was talking about anything else?

Because this thread shows CSAM being confused with other things, e.g. simple child pornography.

And even the source of the quote isn't helping. Clicking its https://www.iwf.org.uk/ "Find out why we use the term ‘child sexual abuse’ instead of ‘child pornography’." gives 403 - Forbidden: Access is denied.

Fortunately a good explanation of the difference can be found here: https://www.thorn.org/blog/ai-generated-child-sexual-abuse-t...

> This is newsworthy because non-consensual undressing of images of a minor, even by an AI

That's not the usage in question. The usage is "generate realistic pictures of undressed minors". Undressing images of real people is prohibited.


These models generate probably a billion images a day. If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models. That may precisely be the point of this tbh.

If they can't prevent child porn, then it should be banned.

Should Photoshop be outlawed? What about MS Paint? Both of them, I'm pretty sure, are capable of creating this stuff.

Also, let's test your commitment to consistency on this matter. In most jurisdictions, possession and creation of CSAM is a strict liability crime, so do you support prosecuting whatever journalist demonstrated this capability to the maximum extent of the law? Or are you only in favor of protecting children when it happens to advance other priorities of yours?


Photoshop is fine; running a business where you produce CSAM for people with Photoshop is not. And this has been very clear for a while now.

I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.

That is just what the law says today (AIUI), and is consistent with how it has been applied.


> Photoshop is fine; running a business where you produce CSAM for people with Photoshop is not. And this has been very clear for a while now.

What if Photoshop were provided as a web service? That would be analogous to running image generation as a service. In both cases the provider takes input from the user (in one case a textual description, in the other a sequence of mouse events) and generates an image through an automated process, without specific intentional input from the provider.

Note that in this case using them for producing CSAM was against the terms of service, so the business was tricked into producing CSAM.

And there are other automated services that could be used for CSAM generation, for example automated photo booths. Should their operators be held liable if someone uses them to produce CSAM?


If you really care, ask a lawyer, not a tech forum.

I anticipate there will already be case law/precedent showing the shape of what is allowed/forbidden, and most of us won't know the legal jargon necessary to understand the answer.

Or answers, plural, because laws vary by jurisdiction.

Most of us here are likely to be worse at painting such boundaries than an LLM. LLMs can pass at least one of the bar exams; most of us probably cannot.


> Note that in this case using them for producing CSAM

There's no such report in this article.


> Photoshop is fine; running a business where you produce CSAM for people with Photoshop is not.

The law disagrees - at least in the UK. CSAM is illegal regardless of the tool used.

> I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created

The article makes no report of that happening. And it does report that this is prohibited by the tool in question. But it then quotes a child safety advocate saying tools should not be allowed to "generate this material", so it is misleading in the extreme.


Somehow I doubt the prosecutor will apply the same standard to the other image generation models, which I bet (obviously without evidence, given the nature of this discussion) can be convinced by a motivated adversary to do the same thing at least once. But alas, selective prosecution is the foundation of political power in the West, and pointing that out gets you nothing but downvotes. As patio11 once put it, pointing out how power is exercised is the first thing that those who wield power prohibit once they gain it.

You often see (appropriately, IMO) a certain amount of discretion wrt prosecution when things are changing quickly.

I doubt anyone will go to jail over this. What (I think) should happen is that state or federal law enforcement makes it very clear to xAI (and the others) that this is unacceptable: if it keeps happening and you are not showing that you are fixing it (even if that means some degradation in the capability of the system/service), then you will be charged.

One of the strengths of the Western legal system that I think is underappreciated by people here is that it is subject to interpretation. Law is not Code. This makes it flexible enough to deal with new situations, and it is (IME) always accompanied by at least a small amount of discretion in enforcement. And in the end, the laws and how they are interpreted and enforced are subject to democratic forces.


When the GP said “not possible” they were referring to the strict letter of the law, as I was, not to your lower standard of “make a good effort to fix it”. Law is not code because that gives the lawgivers discretion to exercise power arbitrarily while convincing the citizens that they live under the “rule of law”. At least the Chinese, for all their faults, don’t bother with the pretense.

> When the GP said “not possible” they were referring to the strict letter of the law, as I was

I, the GP, was referring to what I quoted:

“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material [CSAM],” and I agree this is in effect the law, at least here in the UK.


If you reject the foundation of liberal Western civilization, I don’t know what to tell you.

Move to China?


I’m just pointing out how the world works in real life, not saying that it is desirable. Thinking in terms of that distinction is very useful.

> These models generate probably a billion images a day.

Collectively, probably more. Grok? Not unless you count each frame of a video, I think.

> If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models.

If the threshold is one in a billion… well, the risk is from adversarial outcomes, so you can't just toss a billion attempts at it and see what pops out. But as for a billion images: if it's anything like Stable Diffusion you can stop generation early, and my experiments with SD suggested the energy cost even for a full generation is only $0.0001/image*, so a billion images would cost merely $100k.

Given the current limits of GenAI tools, simply not including unclothed or scantily clad people in the training set would prevent this. I mean, I guess you could leave topless bodybuilders in there; then all these pics would look like Arnold Schwarzenegger, and almost everyone would laugh and not care.

> That may precisely be the point of this tbh.

Perhaps. But I don't think we need that excuse if this was the goal, and I am not convinced this is the goal in the EU for other reasons besides.

* https://benwheatley.github.io/blog/2022/10/09-19.33.04.html


Even the OP's quote made it clear this isn't the case. Companies need to show they rigorously tested that the model doesn't do this.

It's like cyber insurance requirements - for better or worse, you need to show that you have been audited, not prove you are actually safe.


It's extremely possible! As the source article notes, the Grok developers specifically chose to make their AI more permissive of sexual content than their competitors, which won't produce such images. This isn't a scenario where someone developed a complex jailbreak to circumvent Grok's built-in protections.

> “AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”

> Not possible.

Note that the description of the accusation earlier in the article is:

> The French government accused Grok on Friday of generating “clearly illegal” sexual content on X without people’s consent, flagging the matter as potentially violating the European Union’s Digital Services Act.

It may be impossible to perfectly regulate what content the model can create, but it is quite practical for the Grok product to enforce consent from the user whose content is being operated on, both before content can be generated based on it and, after the content is generated, before it can be viewed by or distributed to anyone else.
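
A minimal sketch of such a consent gate (all names hypothetical; the genuinely hard part, reliably linking a photo's subject to an account, is simply assumed away here):

    # Hypothetical consent gate: generation and distribution each require
    # an explicit opt-in from the person whose media is being edited.
    from dataclasses import dataclass

    @dataclass
    class MediaItem:
        owner_id: str            # account that posted the photo
        subject_opted_in: bool   # explicit consent to AI edits of this media

    def may_generate(requester_id: str, media: MediaItem) -> bool:
        # Before generation: only the owner, or an opted-in subject, may proceed.
        return requester_id == media.owner_id or media.subject_opted_in

    def may_distribute(media: MediaItem) -> bool:
        # After generation: block sharing unless the subject opted in.
        return media.subject_opted_in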


> it is quite practical for the Grok product to enforce consent from the user whose content is being operated on

No, because it cannot even ID that user.


Then maybe they shouldn't go to market.

AI is a national defense issue. No nation has the luxury of stopping its AI companies without risking the loss of national sovereignty.

> AI is a national defense issue.

AI image editors attached to social media networks, with a design that allows producing AI edits (including, but not limited to, nonconsensual intimate images and child pornography) of other users' media without consent, are not a national defense issue. And even to the extent that AI arguably is a national defense issue, those particular applications can be curtailed entirely by a nation without any adverse impact on national defense.

You can distort any issue by zooming out to orbital level and ignoring the salient details.


"We have to make the revenge porn machine for national defense" is the sort of thing that makes people light bay area tech busses on fire.

Lumping image-gen models, LLMs, and other forms of recent machine learning together and dressing it all up in the "National Defense" ribbon doesn't seem like a great idea.

I don't think the ability for citizens to make deep fake porn of whoever they want is the same as a country not investing in practical defensive applications of AI.


I'm 90% sure LLMs are, just from how important code is, but image generators? Nah. They're as relevant to national sovereignty as having a local film industry: more than zero, because money is fungible, but still really really low.

So child porn is now a national security issue?

Then your business can fairly be ruled illegal.

You don't have the right to act in violation of the law merely because it's the only way to make a buck.


In practice, once a business reaches a size threshold, the law is creatively decided to preserve its existence rather than terminate it. Legality is a function of economics.

> Legality is a function of economics.

Sometimes it is. Sometimes "democracy" isn't just a buzzword.

X.com has been blocked by poorer nations than France (specifically, Brazil) for not following local law.


Until people have had enough and push back

And if you want to change the law to allow the business, go for it. But until then, we must follow the law.


Sure it is. Forbid training models on images of humans, humanoids, or living creatures, and they won't be able to generate images of those things. It's not like AI is some uncontrollable magic force that hatched out of an egg. It can only output what you put in.

If it's possible to create a model that generates photorealistic images based on a single line of text, it is 100% possible to restrict the output.

I'm sure it's possible. If nothing else, they can run an AI check after generation, similar to the way Google makes sure it doesn't return CSAM in its results. If Google can filter that, the AI providers can check their own output too.
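
As a sketch of what such a post-generation check could look like, using one public NSFW detector as a stand-in (Falconsai/nsfw_image_detection labels images "normal" or "nsfw"; real providers presumably run their own in-house filters):

    # Sketch: screen generated images with a safety classifier before
    # returning them to the user.
    from transformers import pipeline
    from PIL import Image

    nsfw_clf = pipeline("image-classification", model="Falconsai/nsfw_image_detection")

    def passes_safety(image: Image.Image, threshold: float = 0.5) -> bool:
        scores = {r["label"]: r["score"] for r in nsfw_clf(image)}
        return scores.get("nsfw", 0.0) < threshold  # reject likely-NSFW output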

I think you've mistaken CSAM for child pornography.

Possible or not, what about starting with a criminal investigation, to force disclosure, and find out whether Musk's company had child porn in its training data?

The training data probably doesn't have pictures of fish driving Cybertrucks, yet the model is able to generate those, so I doubt there would need to be CSAM in the dataset; but maybe I don't know how these things really work.

AI generates child porn, HN downvotes a proposal for an investigation...


