Safety isn't implemented just via system prompts; it's also a matter of training and fine-tuning, so what you're saying is incorrect.
If you think people here think that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
More broadly, if you don't reasonably regulate your own models and related work, it attracts government regulation.
I’ve run into “safeguards” far more frequently than I’ve actually tried to go outside the bounds of the acceptable use policy. For example, I hit them when attempting to use ChatGPT to translate a handwritten Russian journal that contained descriptions of violent acts. I wasn’t generating violent content, much less advocating it - I was trying to understand something written by someone who had already committed a violent act.
> If you think people here think that models should enable CSAM you're out of your mind.
Intentional creation of “virtual” CSAM should be prosecuted aggressively. Note that that’s not the same thing as “models capable of producing CSAM”. I very much draw the line in terms of intent and/or result, not capability.
> There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
I agree, but I believe we are quite far away from “reasonable safety” and from “reasonable safeguards”. I can get GPT-5 to try to talk me into committing suicide more easily than I can get it to translate objectionable text written in a language I don’t know.
When these models are fine-tuned to allow any kind of nudity, I would guess they can also be used to generate nude images of children; there is a level of generalization in these models. So it seems to me that arguing for restrictions that can only be effectively enforced via prompt validation is just indirect argumentation against open-weight models.
> When these models are fine-tuned to allow any kind of nudity
If you're suggesting Grok is fine-tuned to allow any kind of nudity, some evidence would be in order.
The article suggests otherwise: "The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute."