I see a lot of very intelligent people here unable to agree upon a
matter that seems, in essence, simple enough.
That is in itself troubling and partly answers a question.
If developers on Hacker News cannot fathom whether Apple deceptively
transmitted PII, or whether zealous journalists are over-egging the
pudding, then we have another problem.
Obfuscation is a form of deception through complexity. It can be hard
to tell from the outside whether that complexity is "necessary" and
whether its ill effects are deliberate or accidental.
Nevertheless, it remains a form of deception if you present a system
as simple, with controls that apparently do understandable things as a
front for another system that even you, as a developer, no longer
understand. This same theme is coming up in AI, social algorithms,
moderation/censorship of speech. We are muddying the waters in the
hope that people believe they are shallow.
And some would say this is a deliberate adversarial tactic to guide people into surrendering their privacy and freedoms, because those who would defend them can't explain the complexity well enough to be more convincing than 'simple' messages.
We say that because Apple has a history of using deliberate adversarial tactics to abuse the market and claim dominance. It's almost as iconic as Google killing off their own products.
You’re logged into an Apple device with your Apple ID, going to an Apple online service where Apple obviously knows what you bought. Saying that Apple is keeping track of what you do on their service is like saying that you didn’t know your doctor has your medical records and took notes of your interactions with them.
I think this can be explained by simple denial; Apple's reality distortion field, or some variation on "It is difficult to get a man to understand something, when his salary depends on his not understanding it." Maybe not salary, but a foundational world view, much like religion.
I mean, people on HN will argue that it's wrong to block ads, a point of view that only makes sense to me through the lens of the above quote.
But, yes, it's a big problem, because people who don't factor in the inherent biases of those making the arguments will take on those biases without the salary that makes them make sense. Is that like a Stand Alone Complex?
> I think this can be explained by simple denial; Apple's reality distortion field, or some variation on "It is difficult to get a man to understand something, when his salary depends on his not understanding it." Maybe not salary, but a foundational world view, much like religion.
My personal view, which I presume is the same as many others', is that these things keep being an "if there's smoke, there's fire" situation. Yes, this could be abused, but is it?
Meanwhile, in full "whataboutism" mode, I know Google does crappy shit with my data, I know Meta is full-blown flaming evil. I know Ad Tech has the entire world ablaze with privacy abuse.
Meanwhile, people keep pointing to smoke from Apple and screaming that I just can't see what they think they see ...
I want you to take that sentence and throw it away and instead have a mental paradigm shift.
"Where there is fuel there is risk".
One day when you have a lot of time, look up the USCSB (United States Chemical Safety Board) channel on YouTube and watch their decade's worth of very well done videos on deadly industrial disasters. People will ignore risk for years, accepting the danger because it's "always been that way"; they will turn off alarms because they are annoying; they will bypass safety controls because they slow the task down.
I don't care how dangerous FB/Google/whoever is; Apple is its own separate factory, capable of blowing up in its own spectacular fashion, and much like a gasoline refinery they are building up a massive amount of fuel that is at risk of a spark.
That's a good point of view. I'm quite glad that the EU is finally tackling this stuff seriously and would LOVE to see strict regulations about what data you can track (as little as possible) AND what you can share (nothing at all, preferably).
Content-based advertising should be good enough, if everyone has the same playing field.
Out of curiosity, what does Google do with your data which would be considered bad, ie. beyond the basics of ad targeting and improving their internal services?
Not that I can find in any hurry. It's also worth mentioning that occurrences are decreasing, and I'm not sure if that's because there are fewer people legitimately taking the ad-defensive stance, or whether there's a 'chilling effect' caused by (in my opinion) the increasing volume (in both senses of the word) of arguments against ads.
My memory also tells me that most of the pro-ad stances on HN have been cagey; justification within a certain set of privacy-respecting or customer-service-improvement ideals.