Native binaries. They even come with the additional benefit that I can firewall them off individually, so I can selectively allow telemetry collection if I'm OK with it.
It's funny to see this comment, because any time someone complains about the Zoom desktop app, there's a comment bragging about how they only use Zoom through their web browser.
I'm surprised that anyone claiming to be security- or privacy-minded would prefer a native desktop app to a website running in a sandboxed web browser. Even if you build the code yourself, you're probably safer with the web app. At least in the browser, you can monitor and block any connections from the web app. Good luck doing that for a native binary without inspecting the source and build process for underhanded code exfiltrating data from your machine through some unknown number of obfuscation techniques.
Amusingly enough, the OP article is about telemetry of "frontend tooling," which does not refer to "web apps," but to "native binaries for building web apps."
> I'm surprised that anyone claiming to be security- or privacy-minded would prefer a native desktop app to a website running in a sandboxed web browser.
That shouldn't be so surprising, really. It's a question of what threats you are the most concerned about. That decision is pretty individual.
> Good luck doing that for a native binary without inspecting the source [...]
I don't have to do all of that. I firewall off all outgoing traffic by default. If a binary is trying to exfiltrate data, it won't get past the firewall. That's hard to do in a web context.
> which does not refer to "web apps," but to "native binaries for building web apps."
Indeed so! I'm not saying that just using native binaries all by themselves is sufficient. I'm saying that I have more tools available to mitigate the problem when it's a native binary.
Just look at the impossibility of effectively stopping browser fingerprinting for an example of the difference between the two things.
At what level? Unless you're running every native binary on its own hardware, or maybe within a VM on an isolated VLAN, how can you be so confident that your firewalling method is less leaky than the battle-tested sandboxing of Chromium or WebKit?
Also, why not both? The most secure option might be running a web app in an isolated Chromium process, with a Chromium extension allowlisting outbound connections, and then also firewalling the Chromium process itself at the operating system level.
I run exclusively Linux, and on each machine I've set up, all outbound traffic is dropped by default. I whitelist specific applications as needed. I do this with iptables rules that match on --gid-owner, changing the group ownership of whitelisted applications to a special group that those rules catch.
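For concreteness, a minimal sketch of that kind of setup might look like this (the group name `netallow` and the binary path are illustrative, not anything specific to my machines):

```shell
# Dedicated group for applications allowed to talk to the network.
groupadd netallow

# Baseline: allow loopback and replies to already-established connections.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Let processes running with the whitelisted group through.
iptables -A OUTPUT -m owner --gid-owner netallow -j ACCEPT

# Log and drop everything else.
iptables -A OUTPUT -j LOG --log-prefix "OUTDROP: "
iptables -A OUTPUT -j DROP

# Whitelist a binary: give it the special group and set the setgid bit,
# so its processes run with that group and match the owner rule above.
chgrp netallow /usr/bin/some-app
chmod g+s /usr/bin/some-app
```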
> how can you be so confident that your firewalling method is less leaky than the battle-tested sandboxing of Chromium or WebKit?
I can't be 100% certain, of course -- although that's equally true of Chromium and WebKit. But I have a monitoring system that checks my firewall logs to catch anything suspicious.
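As a sketch of what that monitoring can look like, assuming dropped outbound packets are written to the kernel log with a prefix such as "OUTDROP: " (the prefix is an assumption, set by an iptables LOG rule):

```shell
# Hypothetical sketch: summarize the destinations of outbound packets the
# firewall dropped in the last hour. Assumes drops are logged with the
# "OUTDROP: " prefix; adjust the prefix and time window to taste.
journalctl -k --since "1 hour ago" \
  | grep 'OUTDROP: ' \
  | awk '{for (i = 1; i <= NF; i++) if ($i ~ /^DST=/) print $i}' \
  | sort | uniq -c | sort -rn
```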
> The most secure option might be running a web app in an isolated Chromium process
Because fingerprinting. I can't think of any way to stop fingerprinting. The best I can do (and I do) is to disallow JS from running, but that still leaves many possible signals.
> Because fingerprinting. I can't think of any way to stop fingerprinting. The best I can do (and I do) is to disallow JS from running, but that still leaves many possible signals.
This is even worse for native apps. Web apps resort to fingerprinting precisely because they don't have access to low-level native information. A native app doesn't need to fingerprint at all: it can just read device IDs directly, which is far more reliable than any fingerprinting method.
Unfortunately, fewer and fewer apps nowadays can work in an exclusively offline mode.
There is no way to distinguish between legitimate traffic and "telemetry" traffic.
How do you know the app isn't using a side channel to exfiltrate data through its normal mechanism of operation? Take Zoom: I expect it to require a significant amount of bandwidth to operate. How am I to know that if remote call X fails, it won't just route the data through the video path?
Not sure how effective this is unless you're exclusively talking about offline binaries that are rarely updated, coupled with an external firewall to mitigate workarounds.
I'm not sure what you mean by "offline binaries" here. Do you mean applications that don't need to talk over the internet? If so, then yes, that's all I can effectively cover. I'm extremely cautious about which applications I'll use that require talking over the network.
I in no way claim that my approach is airtight. It's purely a "best effort" sort of thing. But it's far better than nothing. I'm better off for it even if it doesn't stop everything.