Photoshop is fine; running a business where you produce CSAM for people with Photoshop is not. And this has been very clear for a while now.
I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.
That is just what the law says today (AIUI), and is consistent with how it has been applied.
> Photoshop is fine, running a business where you produce CSAM for people with photoshop is not. And this has been very clear for a while now.
What if Photoshop is provided as a web service? This is analogous to running image generation as a service. In both cases the provider takes input from the user (in one case a textual description, in the other a sequence of mouse events) and generates an image with an automated process, without specific intentional input from the provider.
Note that in this case using the service to produce CSAM was against the terms of service, so the business was tricked into producing it.
And there are other automated services that could be used for CSAM generation, for example automated photo booths. Should the operator be held liable if someone uses one to produce CSAM?
If you really care, ask a lawyer, not a tech forum.
I expect there is already case law/precedent showing the shape of what is allowed/forbidden, and most of us won't know the legal jargon necessary to understand the answer.
Or answers, plural, because laws vary by jurisdiction.
Most of us here are likely to be worse at drawing such boundaries than an LLM. LLMs can pass at least one of the bar exams; most of us probably cannot.
> Photoshop is fine, running a business where you produce CSAM for people with photoshop is not.
The law disagrees - at least in the UK. CSAM is illegal regardless of the tool used to produce it.
> I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created
The article does not report that this happened. It does report that it is prohibited by the tool in question. But it then quotes a child safety advocate saying tools should not be allowed to "generate this material", so it is misleading in the extreme.
Somehow I doubt the prosecutor will apply the same standard to the other image generation models, which I bet (obviously without evidence given the nature of this discussion) can be convinced by a motivated adversary to do the same thing at least once. But alas, selective prosecution is the foundation of political power in the west and pointing that out gets you nothing but downvotes. patio11 once put it that pointing out how power is exercised is the first thing that those who wield power prohibit when they gain it.
You often see (appropriately, IMO) a certain amount of discretion wrt prosecution when things are changing quickly.
I doubt anyone will go to jail over this. What (I think) should happen is that state or federal law enforcement makes it very clear to xAI (and the others) that this is unacceptable, and that if it keeps happening and they cannot show they are fixing it (even if that means some degradation in the capability of the system/service), then they will be charged.
One of the strengths of the western legal system that I think is underappreciated by people here is that it is subject to interpretation. Law is not code. This makes it flexible enough to deal with new situations, and it is (IME) always accompanied by at least a small amount of discretion in enforcement. And in the end, the laws and how they are interpreted and enforced are subject to democratic forces.
When the GP said “not possible” they were referring to the strict letter of the law that I was, not to your lower standard of “make a good effort to fix it”. Law is not code because that gives the lawgivers discretion to exercise power arbitrarily while convincing the citizens that they live under the “rule of law”. At least the Chinese for all their faults don’t bother with the pretense.
> When the GP said “not possible” they were referring to the strict letter of the law that I was
I, the GP, was referring to what I quoted:
“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material [CSAM],” and I agree this is in effect the law, at least here in the UK.
I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.
That is just what the law says today (AIUI), and is consistent with how it has been applied.