I am glad that the public backlash forced them to fix a deliberate BACKDOOR that they had introduced (by design) in the Network Extension framework that macOS Big Sur now forces all firewalls to use. (At least, they claim to have removed it.) But it is hard to trust them again, and I would prefer a firewall that uses its own kernel extension to manage the network rather than use Apple's API again. (Obviously that's going to be really hard with the changes they have made to the OS.)
I know many Apple fans see this as a positive move.
But let's not ignore the pattern of privacy violations and user data collection due to deliberate design, and the "apology" and "changes" that follow once CAUGHT. A few of these that immediately come to mind are:

- Apple selling user data to US government: https://www.theguardian.com/world/2013/jun/06/us-tech-giants...

- Apple iPhone 11 tracks user location even when location services are explicitly turned off by user (another BACKDOOR): https://www.silicon.co.uk/mobility/smartphones/apple-iphone-...

- Apple macOS tracks every app that you use: https://sneak.berlin/20201112/your-computer-isnt-yours/

- Apple introduces BACKDOOR in its API to allow Apple apps to bypass application firewalls: https://www.patreon.com/posts/hooray-no-more-46179028
(For those who want to diss me for the above, realise that Apple's newfound love for privacy doesn't mean shit without such public scrutiny and discussions. And if you want it to last, remain suspicious and VOCAL about any such possible violations.)
Apple has no love for privacy, nor has it ever. They are in a market position where their main competitors - primarily Google, but also Microsoft and Amazon - are highly dependent on revenue streams extracted by monetizing personal information.
Apple is in a position to cut that stream without affecting its bottom line, so it does it and claims privacy as a core value.
I won't look a gift horse in the mouth, but I have no doubt that the tables could turn at any time.
When Apple launched iOS 6, it was the first operating system to include per-app privacy controls around access to things like the microphone, camera, photos, etc. - controls we consider fundamental today. It did not mention this a single time in any of its PR or marketing. The first reference you find to it will be from Apple blogs that were surprised to stumble upon it in the iOS 6 beta. It took Android two more years to launch a similar feature.
Yes, Apple is doubling down on its competitive advantage here. But to claim it does not and never cared for privacy is just ignoring the facts and the history. It moved the industry forward.
The first except for BlackBerry. Before iOS or Android even existed, BlackBerry had granular per-app permissions.
I don't disagree with your argument though: of the modern mobile OSs, Apple moved towards the per-app model before everyone else. I just find it interesting when Apple or Android gets coverage/credit for a feature that has long existed but was forgotten or ignored.
Actually, BB10 had an even better feature: you could provide dummy data to wrapped Android applications. So you could fill the Android contact data with an empty contact list and the app would be none the wiser. This feature was never advertised.
I did not know this, thanks for teaching me something new! It goes at least back to BlackBerry OS 4, running on the BlackBerry Pearl, released in 2008 (but likely further back):
> You can set permissions that control how third-party applications on your BlackBerry device interact with the other applications on your device. For example, you can control whether third-party applications can access data or the Internet, make calls, or use Bluetooth® connections.
That's not true at all. Even early Nokia (Symbian) or BlackBerry phones had fine-grained permission prompts. As did early Android, actually, but they were deemed too annoying for users.
Apple does deserve credit for leading and influencing Android in this regard, but neither the concept nor the implementation is new.
Do you have a link to the Android thing? I'd be curious to learn more. I always thought early Android had transparency (showing what an app uses) but no control over changing it.
Android had a permission system before iOS implemented one, but it was just a prompt showing the full list of permissions at install time, so it was not worth much compared to the current model.
In that video, Steve mentions the fact that iOS had location permission prompts before iOS 6 – is that what you are referring to as incorrect? Because that is a good catch, permission prompts were present at least as early as iOS 4.2:
EDIT: Oh, I see, if you are referring to the PR or marketing portion, I think it is certainly clear that Apple had a pro-privacy stance, but that did not make its way into the company's _consumer_ marketing:
Having been a programmer through the 90s, and watched Microsoft (and Oracle, et al.) through their most malfeasant years, I'm very cautious about giving companies the benefit of the doubt. However, the fact that Apple is leaving hundreds of billions of dollars on the table by NOT monetizing their aggregated user data does seem to indicate that their will is strong here, and that "they" mean what "they" say about privacy. The kind of money they AREN'T making from this move would try any mortal's soul.
Isn't "it's in our financial interests right now" about as much "love" as you'll get for anything by a corporation? Saying "Apple has no love for privacy, they're only doing it because it sells" sounds moot to me, every company only does things because they sell.
By choosing which markets you operate in and which products you develop, you have a fair bit of influence over which things are in your "financial interest".
E.g. creating a company with a business model which benefits from taxing CO2 emissions (Tesla) is morally great. Whereas having a business model which benefits from cheap oil (VW) is less so. Product decisions (electric vs. fuel engines) have a large effect on your long-term financial interests.
If Apple knew it could make more money in, say, arms dealing, wouldn't it be obliged to pivot to serve the shareholders? I guess it's somewhat democratic, as shareholders could vote against it for moral reasons.
No. The common refrain of "but publicly-traded corporations have to maximize shareholder value" is a myth, not a law.
There are bits in the law about management having a fiduciary duty to shareholders, but that mainly means that they aren't supposed to be stuffing their own pockets at the expense of investors. There's a wide berth for management to decide what kinds of profit are and are not worth it.
It’s never just about “more money”. It’s about time-discounted, risk-adjusted “more money”. Doing morally repugnant things increases risk substantially. This is not the risk profile the owners signed up for.
This change may even depress the stock price directly as disgusted owners sell (this harms the ability of current shareholders to earn a return, as the loss is now but the future revenues are discounted).
The market is as moral as its shareholders, which is far less than perfection but a lot better than zero.
I'm all for pointing someone to the sources, but let's not instill the idea that consulting Wikipedia is doing substantially more diligent research than looking at memes.
In common speech, "love" is expected to last eternal, not just to the end of the quarter. I'm not really a romantic, but using the word "love" in a corporate context defiles it.
Only nominally. In practice you will have a board and executives and managers who are subject to the conflicts of interest inherent to the principal-agent relationship. This is a major problem of economics.
That's quite an extreme-end of capitalist way of looking at it.
Companies build a vision or image for how they behave and a lot of that is going to be driven by marketability.
For example, Microsoft has taken a very pro-developer stance since Satya Nadella took over. Not just because it's directly profitable to be pro-developer, but because it helps their long-term image, culture etc. This goes a long way to explaining a lot of their recent actions, like helping GitHub be available in Iran again and open sourcing large parts of C# / .NET.
So the question becomes: are Apple being pro-privacy because it's a long-term stance they want to take and make a basis for their company culture, because it's something their customers really want? Or are they taking the stance simply because it doesn't impact their own profitability right now, and would drop it if there was an obvious potential income stream?
> Not just because it's directly profitable to be pro-developer, but because it helps their long term image, culture etc.
And thus is indirectly profitable.
> That's quite an extreme-end of capitalist way of looking at it.
It's only extreme if you can show that companies routinely take the moral stance even when it impacts their short- or long-term profitability. Is Microsoft good now despite its best interests, just because it decided to take a moral stance?
> ironically, most of these companies are out of China
Of the three companies named:
- Google's user-facing services (search, email, app store, docs, ...) are blocked, but Google Ads (which are censored) and Android (which comes without any content that would require censorship) are still sold.
- Microsoft: I'm not aware of any of their products being unavailable. Windows is the dominant desktop operating system in China, and I'd be surprised if the app store wasn't censored. Bing search results are definitely censored (they tell you so at the bottom of the page).
- Amazon isn't selling much that could run afoul of censorship, except possibly books (remember when Amazon used to be an online bookstore?) but in China their market is mostly targeted at the niche of high-end imported goods. (Note the country-of-origin indicators on https://www.amazon.cn/ )
Worse than this, and to the point of this post: you can't offer a comms service in China unless it has a way to be snooped by the government. So some comms services like Skype have to offer a separate app just for China, which affects all messages sent and received by users of this version of the app, even if they come from a regular app.
I was telling my friends who moved from China and kept their iPhones to just buy a new one... just in case...
Google doesn't certify devices in the Chinese market which run Android. Android is open source. Devices made for the domestic Chinese market run versions of Android created by the manufacturers and lack Google apps and services.
The majority of Android phones manufactured in China are not intended for the domestic market but are exported, and do come with Google apps and services, which Google licenses to the manufacturers.
"Brain test game was deleted from the Chinese App Store".
These are just changes to the app catalog, what reason do we have to believe that a bunch of brain training apps being dropped or withdrawn is evidence of censorship? Has any of this been verified with the App publishers?
Apple is leaving a massive amount of money on the table by not monetizing user data or using it to serve ads. They are by far the most privacy focused of the large tech companies, though they're obviously not perfect.
The only way to do better than Apple is a full FOSS stack, and that comes with different challenges and is more of a hassle to maintain.
The PRISM revelations in particular made me realise that we can really only rely on Linux for security, since Apple, MS, Amazon and all the big tech companies are on board with cooperating with the NSA. If you've read the way e.g. the CIA installs snooping software on Macs and PCs, they hide the Mac version in your hidden EFI boot volume, even from the factory.
OK, so some Taiwanese network device manufacturers have poor default account practices, news at 11:00. I'm not seeing the CIA connection.
Devices like this are used by the government and military contractors as well, and as you can see such vulnerabilities are trivial to detect, so you can count on the opposition finding out about them and using them. This one was picked up days after the firmware release. The smoking gun would be government and military admins secretly being advised by the CIA to close these security loopholes, so the government is protected but everyone else isn't. IMHO that would get Snowdened almost immediately. There's no way they'd keep a lid on that; there would just be too many people involved.
As with a lot of this conspiracy theory stuff, it only makes sense if you don't think about it too much. Once you actually start thinking through the consequences and practicalities, it doesn't hold together.
Right, so they have firmware malware and tools for infiltrating it onto machines. That's not a surprise. The extraordinary claim that I challenged was that this is being installed on Apple computers at the factories. So far as I can tell, there is no evidence for it.
This is like someone claiming it will rain next week and when asked how they know, they say they can prove it rained last week. That’s irrelevant. Yes I know they have firmware attacks. Where does the claim they are putting it on machines in the factory come from? How many times do I need to ask the same question?
>"NightSkies 1.2" a "beacon/loader/implant tool" for the Apple iPhone. Noteworthy is that NightSkies had reached 1.2 by 2008, and is expressly designed to be physically installed onto factory fresh iPhones. i.e the CIA has been infecting the iPhone supply chain of its targets since at least 2008.
Factory fresh just means fresh from the factory, not necessarily in the factory. The attack targets phones in their manufactured state with the OS and vendor firmware installed. In other words it's not an attack that depends on end user software (Apps) being installed, or on user behaviour, or even on features of the mobile network.
By supply chain, when they say mail orders and other shipments, they just mean between the vendor and the customer. In this case the use of "supply chain" could be misunderstood; this is a post-factory attack which would be carried out in transit, probably at a US border.
We have seen that done before to shipments of devices such as computers and network gear that have been intercepted and hacked before delivery to a suspect, or a target organisation or country.
I don't think this can be reasonably construed as evidence for Apple conniving with the CIA. In fact I still don't think that would make any sense from a CIA perspective. The factories aren't even in the US. Apple employees aren't background checked or sworn agents, they're a potential security risk. Why involve them if you don't need to?
Alright then they probably aren't infected straight from the factory. However Apple is definitely collaborating with NSA as are other major US tech companies.
It isn't the dichotomy you set it up to be. macOS solved this without "breaking almost all software by default" using per-app, per-directory permissions for the file system, over and above the decades-old POSIX file modes model.
You're making excuses for the lack of security innovation on Linux workstations. They've fallen behind.
To be frank, Mac is not a model I would want to follow.
I am the sysadmin and owner of my machine, not Apple or some other organization. They have no business telling me what software I can and can't run, or what files that software can access.
> Beginning in macOS 10.14.5, software signed with a new Developer ID certificate and all new or updated kernel extensions must be notarized to run. Beginning in macOS 10.15, all software built after June 1, 2019, and distributed with Developer ID must be notarized
So, no. You need Apple's approval to be able to create software that can actually be run by end users, even if you do not distribute it through the App Store.
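For the curious, the Developer ID flow looked roughly like this around the Catalina/Big Sur era, from memory - the certificate name, bundle id and file names below are made up for illustration:

```
# sign with a Developer ID certificate (only issued to paid developer accounts)
codesign --sign "Developer ID Application: Example Corp (TEAMID)" \
         --options runtime MyApp.app

# upload to Apple's notarization service; this step requires valid account credentials
xcrun altool --notarize-app --primary-bundle-id "com.example.myapp" \
             --username "dev@example.com" --password "@keychain:AC_PASSWORD" \
             --file MyApp.zip

# staple Apple's ticket to the app so Gatekeeper will accept it offline
xcrun stapler staple MyApp.app
```

The middle step round-trips through Apple's servers every time, which is the point: no account in good standing, no notarization.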
How can you notarize your software when Apple has suspended your developer account? Would you not say that notarization requires a developer account, which in turn requires Apple's approval?
Via CLI you can, but GUI apps connect to your X server session, and then the fun begins - any application you allow to connect can essentially capture your keyboard, mouse, clipboard and a ton of other fun things, as there is no sandboxing applied between them. It's inherent in the design of the X protocol.
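You don't even need to write any code to see this; stock X11 tools will happily log every keystroke for any unprivileged client (the device id below is hypothetical - list yours first):

```
xinput list      # find your keyboard's device id
xinput test 11   # dumps every key press/release, no permission prompt,
                 # even while other windows have focus
```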
There are solutions that are intended to force the sandboxing by opening a new X server for every application, e.g. Firejail [0], but that comes with another set of interoperability problems.
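If I remember the flag correctly, it's a one-liner, e.g.:

```
# run the browser under its own nested Xephyr X server
# so it can't snoop the main session
firejail --x11=xephyr firefox
```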
Wayland was supposed to address some of these concerns, but it will only do so for applications that natively speak the Wayland protocol, not the ones that connect through the X protocol via XWayland.
XWayland is essentially a translation layer consisting of an X server and a Wayland client [0]. Therefore it has all the same problems a normal X server has, which they do acknowledge:
> A Wayland compositor usually spawns only one Xwayland instance. This is because many X11 applications assume they can communicate with other X11 applications through the X server, and this requires a shared X server instance. This also means that Xwayland does not protect nor isolate X11 clients from each other, unless the Wayland compositor specifically chooses to break the X11 client intercommunications by spawning application specific Xwayland instances. X11 clients are naturally isolated from Wayland clients.
I use QubesOS, but it comes with its own set of problems as well.
That does not sound different from what Windows does. By default all programs running under the same user can access all windows of other applications (except UAC-elevated ones). It's a relic from when OLE and the clipboard in Windows 3 were (very simplified) just pointers to RAM.
The only reason it is worse with X11 is that it is an inherently networked protocol, so the same statements also apply to any remote connections you might allow to your X server. It also makes it somewhat easier to capture Xkb / Xinput events purely through API, without need for any elevation or excessive polling of the devices ("it just works").
That includes any systems you might have SSHed into with X forwarding enabled, as it automatically extends the trust there. Yes, your ssh client might try to enable the X SECURITY extension (which clamps access to just the current window), but it is disabled by default or bypassed anyway by users, as that extension is known to crash quite a few programs.
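For the unfamiliar, the trusted/untrusted split is the -X vs. -Y flag; the catch is that some distros (Debian, at least, last I checked) ship ForwardX11Trusted yes in their default ssh_config, which quietly makes -X behave like -Y:

```
ssh -X remote-host   # "untrusted": SECURITY extension restrictions apply to remote clients
ssh -Y remote-host   # "trusted": remote clients get full access to your local X server
```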
Both are a product of their time when the prevailing approach was to trust the programs you run.
It’s inaccurate to say “Apple selling user data to US government”. That’s not what the article claims (the word “sell” doesn’t even appear in the text), and there are in fact many consumer data brokers who really do sell data to law enforcement.
I don't think it's inaccurate; the IC pays the data providers (presumably for implementation/overhead) for receiving the FAA702 (PRISM/FISA) data.
Which data is picked by the US government, and no warrant is required. Apple provided data on 30,000+ users to the US government without a warrant in 2019, per their own transparency report.
If they received money for the program, they are indeed "selling user data to [the] US government".
A reimbursement for effort/overhead is not the same as selling for profit. Again, there *really are companies who sell consumer data to law enforcement for profit*, so it's important to use the correct language and make the appropriate distinction. Do I like that Apple does that? No. Do I think the actual policy conversation is best served by accuracy in language? Yes.
(That tweet has been deleted by the Apple developer).
Before macOS Big Sur / Catalina, many of these application firewalls - Lulu, Little Snitch, HandsOff, TripMode, RadioSilence etc. - all used their own kernel extensions to effectively monitor and block any processes from connecting to the internet.
Firewalls are system security software. And naturally Apple would prefer to oversee this and have it built into their OS. Apple also wants to discourage kernel extensions on macOS (they have some good reasons - a poorly designed kernel extension can make the OS unstable - but mostly it's about feature control for Apple).
So they informed all such firewall app developers that their individual kernel extensions will no longer be allowed, and Apple had instead created an OS API specifically for their use case. (They described the features it would have and invited them to give their feedback). And so all application firewalls were forced to update their apps and use this OS API.
But this API had an undisclosed, built-in list of Apple-approved applications that no firewall was allowed to block. Someone created that list. Someone added that list to the system, and coded the API to specifically give them special privileges to bypass any application firewall.
Bugs are accidental. Backdoors like these are intentional.
(You can however take exception to the usage of "Backdoor" here - perhaps from Apple's perspective it was a good design decision, as many of these services go wacko, and sometimes even freeze your system, when they aren't allowed to do what they are coded to do, like performing some operation over the internet. I've often seen CPU spikes and slowdowns when you block some of these services.)
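To make the mechanics concrete: a firewall built on this API is, roughly, a class like the sketch below (paraphrased from memory of the NetworkExtension documentation; the blocked bundle id is made up). Every flow is supposed to be funneled into handleNewFlow for a verdict - except that flows from binaries on the undisclosed ContentFilterExclusionList were routed around the filter and never showed up there at all:

```swift
import NetworkExtension

class FilterDataProvider: NEFilterDataProvider {
    // Each new connection on the system should land here so the
    // firewall can decide its fate.
    override func handleNewFlow(_ flow: NEFilterFlow) -> NEFilterNewFlowVerdict {
        let app = flow.sourceAppIdentifier ?? "unknown"
        NSLog("new flow from %@", app)
        // hypothetical rule: drop one chatty app, allow everything else
        return app == "com.example.chattyapp" ? .drop() : .allow()
        // Excluded Apple processes (the App Store among them) never
        // reached this callback, so no verdict here could touch them.
    }
}
```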
I agree, this is 90% likely malicious. The non-malicious usage I can imagine is that for debugging the firewall you don't want to lock your other services out of it in case something goes wrong (or as you said, for reliability issues)
My fear is that Apple will now make the design decision to make these services more unreliable if blocked. Like I experienced, others too have noticed similar behaviour:
> It’s worth noting that Big Sur and its predecessors are built to assume that they can talk to Apple at any time, but when we don’t allow it, a few unwanted side effects pop up. For example, the keyboard sometimes takes longer to wake up from sleep mode. Or, in certain situations, the Mullvad app takes longer to detect that the computer is online.
(Of course, as a developer, I can sympathize with the Apple developers - when you design a product to use the internet, you don't really think hard about all the use cases where internet access is deliberately denied.)
> when you design a product to use the internet, you don't really think hard about all kinds of use cases where internet access is deliberately denied
Why wouldn't you, though? That seems like a pretty big oversight. Lazy at best, negligent at worst.
Not everyone in the world has constant internet access, and it seems ridiculous to design an _operating system_ with that assumption.
I like to take an eBook reader to the park, for example. Prior to owning that device, I took a laptop when learning a new programming language. If that laptop had been running Big Sur, I'd see all these same issues based on Apple's un-thought-out "design decisions".
It's not like these things fail to work without internet access. Obviously, the vast majority of people using the OS are going to have internet access, so designing behaviour around internet access and handling the case where you can't connect (weak connection, no connection, etc.) is accounted for. I don't see how anyone can call this behavior an oversight when it works exactly as intended.
If they claim "here's the API you can use to control network access" but can then exempt arbitrary apps from it, that's the definition of backdoor access to the device.
A "back door" is any kind of mechanism that was added by the vendor to circumvent security mechanisms.
Back doors are typically not disclosed to the user, and can't be turned off. So for example an automatic software update mechanism isn't a back door, as the user is aware of it and can typically turn it off if they are concerned about security.
An undisclosed mechanism that allows Apple apps to circumvent firewalls does very much fit the description of a back door.
Intent doesn't matter with regards to back doors. Most back doors are not made with malicious intent, or at least the vendors usually claim that they only had good intentions for the back door. (Eg. see the recent reports where a router manufacturer had a secret password that they claimed was only used for software updates)
The danger about back doors is that malicious software can use the back doors to circumvent the security measures, just like Patrick Wardle demonstrated that it was possible to use Apple's content filter exclusion to circumvent firewalls.
They can say whatever they like; it's another story whether they've got any credibility. It was quite obvious it was very much a deliberate action (just look at the naming itself).
I will not assume anything. When they roll out an official update and people do proper testing (Wireshark etc.), maybe I will update. But for me the romantic days of "trust us, we care for users' privacy" are over. Period.
I read Apple's actions, not intents. And reading the actions, for me, says: "macOS is a valuable part of the vertical integration map of Apple services". When I buy something, I own something. If the idea of ownership is problematic for Apple, they must state it openly and rename all the hardware store buttons from "Buy" to "Rent".
Keep in mind, pf still worked as intended. With the caveat that macOS doesn't work properly if all traffic is dropped - imposing 30-second timeouts before publishing a default route to the routing table, and throwing a hissy fit in other ways. But, at least on macOS, there has always been a way to block all traffic.
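For anyone who wants to try, a minimal sketch of such a pf.conf (the DNS rule is just an example of punching a hole back open):

```
# deny everything by default
set block-policy drop
block all
# example: allow outbound DNS only
pass out proto { tcp, udp } to any port 53
```

Load it with "sudo pfctl -e -f /etc/pf.conf" - and expect the 30-second sulk described above.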
While I agree that one should remain suspicious and be vocal about privacy violations and security issues, I find your attitude of continuing to attack Apple inappropriate.
Apple's competitors Google and Microsoft, which control the great majority of OS installs for both mobile and desktop, don't even pretend to care about privacy. I have collected over the years reports about dozens of underhanded tactics they use to manipulate users into sharing data, when they're not downright forcing them to do it.
Currently Google is showing in the EU a modal pop-up asking users to either accept being tracked or fuck off into a labyrinth of "See more" and "Other options", which obviously violates the GDPR. They have already been fined five times for GDPR violations.
Microsoft have told their non-enterprise customers to bugger off and learn to live with telemetry. They're actively working around people blocking telemetry and they're being investigated for these practices.
How about a thank you that at least someone at Apple listens to their customers?
> I find your attitude of continuing to attack Apple inappropriate.
I do so because I am an Apple user - this is being typed on a Mac mini. I also own other Apple hardware.
I also advocated, quite successfully, for my family & friends to switch from Android to Apple (I am the IT guy in my circle). I did so because I would like to believe the commitment to privacy they have publicly stated. (Tim Cook being gay adds to that trust, because he understands that privacy is not just about hiding secrets but about protecting ourselves from political persecution by those who do not like some part of our identity - whether regional, gender, political, cultural, religious, sexual etc.)
It doesn't mean I trust them blindly or completely or will allow them to screw my customer rights (like right to repair, and OWN my device). Would you?
Your CV or Apple credentials are not relevant. If one considers privacy important, as you seem to, then one should engage with companies which at least try to behave in a privacy-friendly way, instead of typing BACKDOOR in all caps several times and painting those companies in a bad light while not recognizing any of their contributions to improving the privacy of their customers.
And here are those contributions spelled out for you: Apple is the only company preventing Google from having the private information of all smartphone users on the planet on their servers.
Maybe not to you. It should be for Apple, if they actually care about their users / customers.
> Apple is the only company preventing Google from having the private information of all smartphone users ...
No, it isn't. There are other worthy contenders to both iOS and Android, like Sailfish OS. (In fact, using a Sailfish OS mobile phone actually protects my data from both Apple and Google - it's a double win for me.)
And unlike you, my idea of privacy isn't trusting one corporation over another, but ensuring that no corporation has access to my personal data in the first place - I absolutely do not want Apple to have access to any of my data. (And whether you like it or not, until Apple does precisely that, I will keep criticising it.)
As a queer person myself, I think your trust in the "gay experience" of rich guys is dangerous.
We just had this Szajer scandal, where a powerful, outspoken homophobe was caught in a gay orgy.
The gay experience (shame, rejection and discrimination) also comes with an increased chance of "co-morbid" personality defects, which may be more pronounced with exceptional wealth and status.
Queer solidarity by gay men is not a given anymore.
The fact that their competitors are as bad or worse in this regard does not make Apple saints - and this has all the hallmarks of an intentional addition to position Apple apps differently from the others, which is a classic Apple move.
A compromise in security and privacy had clearly been deemed worthwhile by someone at Apple before the stink was raised.
This kind of black and white thinking is very impractical and self-defeating, except maybe for RMS, to remind us what we should strive for.
For most of us, the real world decision is to either work with a company which is actively working on undermining privacy or with one which is trying to improve things.
In the real world, Apple still gets to have their business and people work with them, but enough stink is raised, both publicly and privately, to get them to change their decision. Which they have evidently done here, so: working as intended. Reputation damage is a thing if it involves conversations with other F100 companies.
This particular debacle is one of the reasons why $CurrentCorpo I am occasionally working with decided to skip Big Sur until much later in the lifecycle - not the only one, though.