Maybe I'm being too simplistic/idealistic here - but if I had a company that controlled an LLM product, I wouldn't even think twice about banning CSAM outputs.
You can have all the free speech in the world, but not when it comes to vulnerable and innocent children.
I don't know how we got to the point where we build things with no guardrails and just expect users to use them legally. I think builders/platform owners should bear responsibility for building in guardrails against things that are explicitly illegal and morally repugnant.
> I wouldn't even think twice about banning CSAM outputs.
Same, honestly. And you'll probably catch a whole lot of actual legitimate usage in that net, but it's worth it.
But you'll also miss some. You always will, even with the best guardrails. Still, 99% is better than 0%, I agree.
> ... and just expect the user to use it legally?
I don't think it's entirely the responsibility of the builder/supplier/service to ensure this, honestly. I don't think it can be. You can sell hammers, but you can't guarantee a hammer won't be used to hurt people. You can put spray cans behind cages and require purchasers to be 18 years old, but you can't stop an adult from committing vandalism. The person has to be held responsible at a certain point.
I bet most hammers (unregulated), spray cans (lightly regulated) and guns (heavily regulated) that are sold are used for their intended purposes. You also don't see the manufacturers of these tools promoting or excusing their unintended uses.
There's also a difference between a tool manufacturer (hardware or software) and a service provider: once the tool is in the user's hands, it's outside the manufacturer's control.
In this case, a malicious user isn't downloading Grok's model and running it on their own GPU. They're using a service provided by X, and I'm of the opinion that a service provider starts to be responsible once malicious usage of their product becomes significant.
> I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally?
Historically, tools have been uncensored, but also incredibly difficult and time-consuming to get good results with.
Why spend loads of effort producing fake celebrity porn in Photoshop or Blender or whatever when there's limitless free non-celebrity porn online? So Photoshop and Blender didn't need any built-in censorship.
But with GenAI, the quantitative difference in ease of use produces a qualitative difference in outcome. Things that didn't get done when they required 6 months of practice plus 1 hour per image are getting done now that they require zero practice and 20 seconds per image.