I don't think there is much emotion in said post. I am making specific assertions.
to your point:
> Non-sexualized bikinis are sold everywhere
Correct! The key logical modifier is "non-sexualized". Also, you'll note that a lot of clothing companies do not show images of children in swimwear. Partly that's down to what I imagine you would term puritanism, but it's also legal counsel. The definition of CSAM is loose enough (in some jurisdictions) to cover swimwear, depending on context, and that context is challenging. A parent looking for clothes that will fit/suit their child is clearly not sexualising anything (corner cases exist; as I said, context). Someone else who is using it for sexual purposes is.
And because, like GPLv3, CSAM is infectious, the tariff for both company and end user is rather high for making, storing, transmitting and downloading those images. If someone is convicted of collecting those images and using them for a sexual purpose, then images that were created as not-CSAM suddenly become CSAM, and legally toxic to possess. (Context does come in here.)
> Your link clearly states they are being moderated in the comments
Which tells us that there is a lot of work on guardrails, right? It's a choice by xAI to allow users to do this. (Mainly, the app is hamstrung so that you have to pay for the spicy mode.) Whether it's done by an ML model or not is irrelevant. Knowingly allowing CSAM generation and transmission is illegal. If you or I were to host an ML model that allows users to do the same thing, we would be in jail. There is a reason why other companies are not doing this.
The law must be applied equally, regardless of wealth or power. I think that is my main objection to all of this. It's clearly CSAM, and anyone other than Musk doing this would have been censured by now. All of this justification is because of who is doing it, rather than what is being done. We can bikeshed all we want about whether it really is CSAM, but that misses the entire point, which is that it is clearly breaking the law.
> The GPT models wouldn't know what is sexualised.
ML classification is really rather good now. Instagram's unsupervised categorisation model is very effective at working out the context of an image or video (i.e. differentiating clothes, and the context of those clothes).
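To make that concrete, here's a toy sketch of context-aware classification using an off-the-shelf open model (CLIP via Hugging Face). This is not Instagram's pipeline, and the label set is something I made up purely for illustration; the point is only that "catalogue photo" versus "sexualised depiction" is a context question a classifier can score:

```python
# Toy sketch: zero-shot context classification of an image with CLIP.
# Not Instagram's model; the labels below are illustrative only.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical label set that distinguishes context, not just clothing type.
labels = [
    "a product photo of children's swimwear on a mannequin",
    "a candid family photo of a child at the beach",
    "a sexualised depiction of a minor",
]

image = Image.open("example.jpg")  # placeholder path
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{p:.2f}  {label}")
```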
> please don't put words in my mouth
I have not done this. I am asserting that the bar for justifying this kind of content, which is clearly illegal and easily prevented (i.e. a picture of a minor plus "generate an image of her in sexy clothes"), is very high.
Now you could argue that I'm implying that you have something to hide. I am actually curious as to your motives for justifying the knowing creation of sexualised images of minors. You've made a weak argument that there are legitimate purposes. You then argue that it's a slippery slope.
Is your fear that this justifies an age-gated internet? Censorship? What is the price that you think is worth paying?
Again, words in my mouth. I'm not justifying that, and nowhere did I say that. I could be very impolite to you right now for trying to slander me like that.
I said I don't understand the fuss, because there are guardrails, action being taken, and technical limitations.
THAT is my motive. End of story. I do not need to parrot outrage just because everyone else is; that's "you're either with us or against us" bullshit. I'm here for a rational discussion.
Again, read what I've said: technical limitations. You wrote that long-ass explanation interspersed with ambiguities, like consulting lawyers in borderline cases, and then you expect an LLM to handle this.
Yes, ML classification is good now, but it is not foolproof. Hence we go back to the first point: processes to deal with this when X's existing guardrails fail, which x.com has done: delete, suspend, report.
My fear (only because you mention it; I didn't bring one up above, I only said I don't get the fuss), it seems, should be that people are losing touch over this Grok thing: their arguments are no longer grounded in truth or rational thought. It's almost a rabid witch hunt.
At no point did I say or imply LLMs are meant to make legal decisions.
"Hey grok make a sexy version of [obvious minor]" is not something that is hard to stop. try doing that query with meta, gemini, or sora, they manage it reliably well.
There are no technical impediments to stopping this; it's a choice.
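As a sketch of how cheap a pre-generation gate is (using OpenAI's hosted moderation endpoint purely as an example; `generate_image` is a hypothetical stub, and any provider could run an equivalent check on the prompt before it ever reaches the image model):

```python
# Sketch: refuse an image-generation request when a moderation check flags the
# prompt. The moderation call is OpenAI's public endpoint, used here only as an
# example of the kind of check any provider could run; generate_image() is a
# hypothetical stub standing in for whatever image model gets called.
from openai import OpenAI

client = OpenAI()

def generate_image(prompt: str) -> str:
    # Hypothetical placeholder for the actual image model call.
    return f"<image for: {prompt}>"

def generate_image_safely(prompt: str) -> str:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]

    if result.flagged:
        # A real product would log the attempt and apply its abuse policy here.
        return "Request refused by safety check."

    return generate_image(prompt)

print(generate_image_safely("a watercolour painting of a lighthouse"))
```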
My point is: if it's so complex that you have to get a lawyer involved, how do you expect your LLM and system to cover all of its own shortcomings?
I'd bet that if you put that prompt into Grok it would be blocked, judging by that Reddit link you sent. These folks are jailbreaking it by merely asking it to modify images using neutral terms like "clothing", images that Grok doesn't have the skill to judge.
> My point is: if it's so complex that you have to get a lawyer involved, how do you expect your LLM and system to cover all of its own shortcomings?
Every feature is lawyered up. That's what general counsel does. Every feature I worked on at a FAANG had some level of legal compliance gate on it, because mistakes are costly.
For the team that launched the chatbots, loads of time went into figuring out what stupid shit users could make them do, and blocking it. It's not like all of that effort stopped: when people started finding new ways to do naughty stuff, that had to be blocked as well, because otherwise the whole feature would have had to be pulled to stop advertisers from fleeing, or worse, FCC action or a class action.
> These folks are jailbreaking it by merely asking it to modify images using neutral terms like "clothing"
CORRECT! People are putting effort into jailbreaking the app, whereas on X's Grok they don't need to do any of that. Which is my point: it's a product choice.
None of this is "hard legal problems" or in fact unpredictable. They are/have done a ton of work to stop that (again mainly because they want people to pay for "spicy mode")