
> Are iCloud Photos in their data centers not scanned?

No outright statement confirming or denying this has ever been made, to my knowledge, but the implication, based both on Apple's statements and those of stakeholders, is that this isn't currently the case.

This might come as a surprise to some, because many companies do scan for CSAM, but they do so voluntarily; the government can't force companies to scan.

This is because, based on case law, companies forced to scan for CSAM would be considered deputized, and the scanning would thus be a breach of the 4th Amendment's safeguards against "unreasonable search and seizure".

The best the government can do is force companies to report "apparent violations" of CSAM laws. This seems like a distinction without a difference, but the difference is between being required to actively search for it (and thus becoming deputized) and reporting it when you come across it.

Even then, the reporting requirement is constructed in such a way as to avoid any possible 4th amendment issues. Companies aren't required to report it to the DOJ, but rather to the NCMEC.

The NCMEC is a semi-government organization, autonomous from the DOJ, albeit almost wholly funded by the DOJ, and they are the ones that subsequently report CSAM violations to the DOJ.

The NCMEC is also the organization that maintains the CSAM database and provides the hashes that companies, who voluntarily scan for CSAM, use.
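
For the curious, mechanically that voluntary scanning boils down to comparing a fingerprint of each uploaded file against the provided hash list. Here's a minimal sketch of the idea in Python (the names and the hash value are hypothetical, and real deployments use perceptual hashes such as PhotoDNA rather than a plain SHA-256, so that re-encoded or resized copies still match):

    import hashlib
    from pathlib import Path

    # Hypothetical stand-in for the NCMEC-provided list; note that the matcher
    # is agnostic to what the list actually contains, which is the crux of the
    # "it could be repurposed" concern discussed below.
    KNOWN_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def fingerprint(path: Path) -> str:
        """Cryptographic stand-in for a perceptual hash of the file's bytes."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scan_upload(path: Path) -> bool:
        """True if the upload matches a known hash and would trigger a report."""
        return fingerprint(path) in KNOWN_HASHES

    if __name__ == "__main__":
        for upload in Path("uploads").glob("*"):
            if upload.is_file() and scan_upload(upload):
                # A real provider would now file a CyberTipline report with
                # the NCMEC, not hand anything to the DOJ directly.
                print(f"match: {upload}")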

This construction has proven to be pretty solid against 4th Amendment challenges, as courts have historically found that the separation between companies and the DOJ, and the fact that only confirmed CSAM makes its way to the DOJ after review by the NCMEC, create enough distance between the DOJ and the act of searching through a person's data that there aren't any 4th Amendment concerns.

The Congressional Research Service did a write-up on this last year, for those who are interested[0].

Circling back to Apple: as it stands, there's nothing indicating that they already scan for CSAM server-side, and most comments by both Apple and child safety organizations seem to imply that this is in fact not currently happening.

Apple's main concerns, however, as stated in their letter, echo the same concerns raised by security experts back when this was being discussed: namely, that it creates a target for malicious actors, that it is technically not feasible to create a system that can never be reconfigured to scan for non-CSAM material, and that governments could pressure or regulate Apple into reconfiguring it for other material as well (and place a gag order on them, prohibiting them from informing users of this).

At the time, some of these arguments were brushed off as slippery-slope FUD, and then the UK started considering something that goes beyond even the most cynical security researcher's nightmares: a de facto ban on security updates whenever the UK's intelligence and law enforcement services happen to be exploiting the very flaw the update aims to patch.

Which is what Apple references in their response.

0: https://crsreports.congress.gov/product/pdf/LSB/LSB10713



To add a bit more color, 18 U.S. Code § 2258A specifically states:

> Nothing in this section shall be construed to require a provider to—

> (1) monitor any user, subscriber, or customer of that provider;

> (2) monitor the content of any communication of any person described in paragraph (1); or

> (3) affirmatively search, screen, or scan for facts or circumstances described in sections (a) and (b).

The full text of 18 U.S. Code § 2258A (Reporting requirements of providers) is available at https://www.law.cornell.edu/uscode/text/18/2258A.


I was looking for that!

Great addition to provide more context.


Don't forget this part:

>(e) Failure To Report.—A provider that knowingly and willfully fails to make a report required under subsection (a)(1) shall be fined— (1) in the case of an initial knowing and willful failure to make a report, not more than $150,000; and (2) in the case of any second or subsequent knowing and willful failure to make a report, not more than $300,000.

I find these clauses at odds with one another, in that the Failure to Report clause creates a tangible duty upon the provider, which, were I a judge, would satisfy me that the provider was, in fact, deputized.

Does nobody actually read the legislation that is passed and realize, oops, I just passed an unconstitutional law?

That they included the "Nothing in this section shall be construed..." clause at all just solidifies for me that the legislators in question were trying to pull a fast one.


It’s because they wanted to have their cake and eat it too: get as close to the 4th Amendment line as possible without crossing it.

Put simply, if they have knowledge of it they have a duty to report, but they can’t be compelled to try and find out.

In theory this means that if they happen to stumble upon it, or are alerted to it by a third party (e.g. a user report), then they have to report it. In practice, many monitor for it voluntarily, maybe because they want to avoid having to litigate whether they had knowledge of it, maybe because it’s good PR, or maybe because they care about the cause.

I think in most cases it’s all of the above in one degree or another.


I have no qualms with voluntary monitoring and reporting. However, the inclusion of the penalty imposes a tangible duty, and that tangible duty is enough to convince me this act is effectively a de facto deputization. The act of searching is, in essence, "look out for, raise a signal when found". This Act does everything it can to cast the process that happens after the search phase as "the search forbidden by the 4th Amendment", rather than the explicitly penalized activity, which is couched as "voluntary, and not State mandated, despite a $150,000 price tag assessed by... the State". It even goes so far as to create a quasi-government entity, primarily funded by the State, whose entire purpose is explicitly to act as a legal facade creating sufficient "abstract distance" through which the State can claim "'twas not I who did it, but a private organization; Constitutional protections do not apply".

Words mean things, and in my opinion we've gotten damned loose with them these days when the want strikes. A "voluntary" anything with a $150,000 fine for not doing it is no longer voluntary. It's now your job. And if it's your job, and the State punishes you for not doing it, you are a deputy of the State. I do not care how many layers of legal fiction and indirection sit between you and the State.

If you can't not comply without jeopardy, it ain't voluntary.


> I find these clauses at odds with one another in that the Failure to Report clause created a tangible duty upon the provider, which, were I a judge, would satisfy me that the provider was, in fact, deputized.

Absolutely not. That section requires a report under the circumstances where a provider has obtained “actual knowledge of facts and circumstances” of an “apparent violation” of various code sections (child porn among others). It doesn’t place on the provider the burden of seeking out that knowledge. In other words, it covers the cases where, for example, a provider receives a report that they are hosting a child porn video and are pointed to the link to it. Providers can’t jam their fingers in their ears and shout LALALA when they’re told they’re hosting (or whatever) CSAM and given the evidence to support it. They don’t have to do anything at all to proactively find it and report it, however.

Think of it like this. I, as a high school teacher, am a mandated reporter of child abuse. It’s literally a crime (a misdemeanor) for me not to report suspected child abuse. But I don’t have to go out and suss out whether any of my students are being abused. That doesn’t make me a state actor for 4th Amendment purposes (although I am otherwise, because I am a public school teacher, but that’s a different issue).


Except it does make you a state actor, and even children know it: even the 9-to-11-year-old demographic has literally disclosed to me, the "crazy uncle" in their life, that they are not comfortable being open with any type of guidance counselor or state-licensed therapist, due to their knowledge of just such a dynamic.

A spade is a spade by any other name. If the state will come down on you for not doing something (message generation), you are a deputy of the State. Period.


It WASN’T the case. Photos are listed on their page of data that’s not end-to-end encrypted.

Since it all went down, they added the Advanced Data Protection option, which encrypts photos, messages, and even more.

But that option is opt-in, since if you mess it up they can’t help you recover your data.


Non-encryption ≠ CSAM scanning

That said, I could be wrong about them not currently scanning; I simply don’t have anything authoritative saying so either way.

Only statements that imply that they currently don’t, nothing more.


I don’t know if they do or not, but like everyone else I assume they do. It seems like it would be a massive legal (and PR!) liability if it were discovered they didn’t.


Why do you think so? AFAIK there is no legal requirement to scan uploaded files.



