> If someone has access to the hardware you have lost already.
Tell that to my Chromebook that you can't crack. Or a Macbook. Or your phone.
Or even a UEFI device absent the occasional vulnerability like this. There are vulnerabilities, but they're comparatively rare and they get patched.
In fact this is simply wrong. Defense of systems against attackers with physical access is a mostly solved problem, and secure boot is the answer. It is not (and never will be) perfect, but it does work.
> Tell that to my Chromebook that you can't crack.
Can I have it for a week or two, then send it back?
Chromebooks are quite robust against remote attacks, and they're fairly robust against local physical attacks, but "Put an external interface on the NOR SPI flash and put whatever you want there" defeats just about everything they do with secure boot, because you can put your own code there instead. Or, on at least some devices, just remove the write protect screw and run some incantations[0].
If you have physical access, very few systems are designed to remain trustworthy. Even if you have a ROM root of trust somewhere, if it's on the board it can be desoldered and replaced with a different one (and I'm not aware of any hardware that does more than "write protect regions of the SPI flash" - it can be done, but it's certainly not common).
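The write-protect scheme being described can be sketched as a toy model (the class and field names here are mine, not any real chip's interface): a protected range rejects writes only while the hardware WP pin is asserted, so anyone who can pull the screw or lift the pin can rewrite the "read only" region.

```python
# Toy model of NOR SPI flash write protection. Illustrative only: real
# chips expose this via status-register bits (SRP/BP), not a Python API.

class SpiFlash:
    def __init__(self, size):
        self.data = bytearray(size)
        self.wp_pin_asserted = True         # hardware WP# pin (the "screw")
        self.protected_range = (0, 0x1000)  # the "read only" boot region

    def write(self, addr, payload):
        lo, hi = self.protected_range
        overlaps = not (addr + len(payload) <= lo or addr >= hi)
        if overlaps and self.wp_pin_asserted:
            raise PermissionError("write blocked by hardware write protect")
        self.data[addr:addr + len(payload)] = payload

flash = SpiFlash(0x2000)
try:
    flash.write(0x0, b"evil")      # blocked while WP pin is asserted
except PermissionError:
    pass

flash.wp_pin_asserted = False      # "remove the write protect screw"
flash.write(0x0, b"evil")          # now the protected region is writable
assert bytes(flash.data[:4]) == b"evil"
```

The point of the model: the protection lives entirely in state an attacker with the board on their bench can change.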
Even the TPM, if it's a discrete physical device, can be physically decapsulated and manipulated, or have its data read out.
> Chromebooks are quite robust against remote attacks, and they're fairly robust against local physical attacks, but "Put an external interface on the NOR SPI flash and put whatever you want there" defeats just about everything they do with secure boot, because you can put your own code there instead
This hasn't been true for a decade or more. Flash-resident boot firmware is validated by an on-chip ROM in the modern world (not just on Chromebooks, everywhere). You can flash the chip with your JTAG gadget, sure, but if it doesn't have a signature that verifies, it won't do anything but brick your board.
No, the obvious holes have long since been plugged. The design is secure. The implementation may have holes, but on the whole you can't break into an arbitrary box. You need to get lucky with a crack like the one in the linked article.
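The chain described above can be modeled in a few lines. This is a toy under loud assumptions: real boot ROMs verify an asymmetric signature against a public-key hash burned into fuses; an HMAC stands in for that here purely to keep the sketch self-contained.

```python
import hashlib
import hmac

# Toy verified boot: the immutable on-chip ROM holds a secret (standing in
# for a fused public-key hash) and refuses to run any flash image whose
# signature does not verify.

FUSED_KEY = b"burned-into-silicon-at-manufacture"

def sign_image(image: bytes) -> bytes:
    # Stand-in for the vendor's signing step.
    return hmac.new(FUSED_KEY, image, hashlib.sha256).digest()

def boot_rom(flash_image: bytes, signature: bytes) -> str:
    if hmac.compare_digest(sign_image(flash_image), signature):
        return "booted"
    return "bricked"   # ROM refuses to execute unverified code

official = b"vendor firmware v1.2"
assert boot_rom(official, sign_image(official)) == "booted"

# Attacker reflashes the chip via a JTAG gadget or SPI clip, but cannot
# produce a valid signature for their own image:
assert boot_rom(b"attacker firmware", sign_image(official)) == "bricked"
```

This is why "put whatever you want in the NOR flash" stopped working: the root of trust moved off the replaceable chip and into the SoC.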
> Note that even in case of the devices protected by the SE, opening up the device and disconnecting the battery would still disable write protection.
Unless I'm missing something, the "read only" region is simply a normally write-protected region of the flash chip, and with physical access there are a range of ways to rewrite that region.
Who is cracking anything? The problem with physical access is that at the end of the day your security is only as good as the flat flex cable transporting your key presses.
This is not some science fiction scenario; look at the "addin boards" they found in CryptoPhones (you know that thing was using secure boot!):
Nobody cares to exploit or modify the software if at the end of the day what you are trying to protect is running across a PCB trace and they have physical access.
Very interesting link, though I'm very disappointed it doesn't include speculation on when/where/how the bug was introduced to the phone. Also, I'm pretty surprised I hadn't come across the info that Wikileaks was bugged. Thanks for sharing.
Mad respect for whoever designed that; for a tailor-made, small-series design it is an incredible piece of work. The component density alone is probably some kind of record. Note the Spartan 6.
That literally seems like it taps the microphone wire to record raw audio signals.
Brilliant, but not a software or hardware issue. (Although actually having the device brick itself if it is opened up would have prevented the bug from being inserted).
Likewise secure PIN pads are easily "defeated" by a camera.
Plenty of TPM devices are encased in epoxy and designed to self destruct if tampered with. And lots of modern day devices (iPhones, game consoles) have stood up to years of attempts to exfiltrate their secrets.
Workarounds are possible, but the industry has, for better or for worse, figured out how to make secure secret stores.
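The "secure secret store" pattern can be sketched as a measured-boot toy (simplified, and all names are mine: real TPMs extend PCRs with component hashes and seal keys to a PCR policy; XOR stands in for real encryption here):

```python
import hashlib

# Toy TPM-style sealing: the secret is encrypted under a key derived from
# the measured boot state, so tampering with any measured component yields
# a different PCR value and the unseal produces garbage.

def extend(pcr: bytes, component: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot(components) -> bytes:
    pcr = b"\x00" * 32
    for c in components:
        pcr = extend(pcr, c)
    return pcr

def seal(secret: bytes, pcr: bytes) -> bytes:
    key = hashlib.sha256(b"seal" + pcr).digest()
    return bytes(a ^ b for a, b in zip(secret, key))  # toy XOR "encryption"

def unseal(blob: bytes, pcr: bytes) -> bytes:
    return seal(blob, pcr)  # XOR is its own inverse

good_boot = [b"bootloader v1", b"kernel v5"]
blob = seal(b"disk key", measure_boot(good_boot))

# Untampered boot recovers the secret:
assert unseal(blob, measure_boot(good_boot)) == b"disk key"

# A modified bootloader changes the measurement, so the unseal fails:
tampered = [b"evil bootloader", b"kernel v5"]
assert unseal(blob, measure_boot(tampered)) != b"disk key"
```

The design choice worth noting is that extend() is one-way and order-sensitive, so malicious code can't "un-measure" itself after running.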
There are a lot of companies that are paid big bucks by nation states to find vulnerabilities and not make them public - NSO, for example. So yes, Apple only fixes the bugs that people or companies find and make public, but not for the reason you are implying.
This might be more true if we didn't have counterexamples. It's much easier to suborn a PC than a Chromebook or Apple product, and while that's certainly not perfect, nobody should be complacent about their enterprise laptops being a softer target than an AppleTV.
None of which alters the fact that there have been multiple reported attack vectors on both Apple and Google products in the last 2 months. They can be and are hacked daily.
Yes, but note that this attack works against a locked or powered off device. I’m not saying it’s perfect or that we can stop replacing C code, only that it’s safe to buy a used Apple or ChromeOS device in a way which is not true of a PC.