
> It's fun to complain about the good ol' days, but I'd rather face the world as it is and find the joy in it.

This is a manipulative combination of condescension, gaslighting and emotionalization.

"It's fun to complain" trivializes and dismisses a valid observation about the content being submitted as self-indulgent whining.

"I'd rather face the world" implies that people who want to see carefully constructed projects and human-written articles about them are refusing to face the world, i.e. delusional.

"Find the joy in it" reduces the whole discussion to the question of self-imposed mindset, as if there is no possible rational reason to be unhappy about what's going on.


_Nobody_ has the right take. Believe it or not, being seemingly laissez-faire about something can be a well evaluated and rigorous position. I highly doubt that OP doesn't care about the potential negative ramifications of AI, and it's frankly disingenuous and confusing to see every clause interpreted in the worst way possible.

Each clause you've highlighted has a nugget of truth, but that nugget is not inherently negative; it's just a different perspective which you aren't picking up on.


The problem with all these discussions about banning stuff is that privacy is always on the back foot. It's by design. People who want to surveil and manipulate us are actively investigating new ways of doing it, they get paid for it and they risk nothing in the long run. All of these discussions about specifics are just reactions. They aren't even reactions to the surveillance itself, but rather to a discovery by someone that a new surveillance machine has been constructed and launched.

So the current feedback process involves: construction → exploitation → reporting → public awareness → legislation. This is too slow. Moreover, operating in this environment is exhausting.

We need a different feedback loop altogether. I'm not sure which one would work best, but something different needs to be considered.


Yeah, abuse of privacy should be the crime, the same way theft is. How exactly the crime is committed shouldn't matter. Companies can have every right to make a compelling argument that what they did was not an abuse of privacy when they are defending themselves in court.

And critically, it is not someone becoming aware of private information that is the abuse of privacy, it is exploiting that private information which is the abuse. There may be countless legitimate technical reasons you need to collect data, but there can not possibly be a technical justification for selling it.


A culture that values privacy, out of respect, necessity and/or fear, has the potential to sabotage each step of that process even if the process itself never changes.

There was a point, at least in my bubble, where there was a general sense that government surveillance is bad (except against those people). I think coming out of the Cold War, then 9/11, followed by propaganda obfuscating the increase, purpose and prevalence of private surveillance, took us from "no, we aren't Stalinist Russia" to "I don't care, I have nothing to hide" to just "I don't care" when it comes to the topic of surveillance at all.

Unfortunately, it will take great shocks to instill it so the next generation can learn from the suffering of the previous, and then forget it when privacy is taken for granted again.


>For people who support this kind of ban, I'd ask if you would support a similar ban on new factories for, say, car parts.

If car parts factories produced nothing, employed no one and were made with equipment that will get outdated in a couple of years... Oh, gee, I dunno, it's a tough one.


I still hope that one of these days people in general will realize that executable signing and SecureBoot are specifically designed for controlling what a normal person can run, rather than for anything resembling real security. The premises of either of those "mitigations" make absolutely no sense for personal computers.


I strongly disagree on the Secure Boot front. It's necessary for FDE to have any sort of practical security, it reduces malicious/vulnerable driver abuse (making it nontrivial), bootkits are a security nightmare and would otherwise be much more common in malware typical users encounter, and ultimately the user can control their secure boot setup and enroll their own keys if they wish.

Does that mean that Microsoft doesn't also use it as a form of control? Of course not. But conflating "Secure Boot can be used for platform control" with "Secure Boot provides no security" is a non-sequitur.


Full disk encryption protects from somebody yanking a hard drive from a running server (actually happens) or stealing a laptop. Calling it useless because it doesn't match your threat model... I hate today's security people; they can't threat model for shit.


> Full disk encryption protects from somebody yanking a hard drive from a running server (actually happens) or stealing a laptop.

Both of these are super easy to solve without secure boot: The device uses FDE and the key is provided over the network during boot, in the laptop case after the user provides a password. Doing it this way is significantly more secure than using a TPM because the network can stop providing the key as soon as the device is stolen. And the key was never in non-volatile storage anywhere on the device, so it can't be extracted from a powered-off device even with physical access and specialized equipment.
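As a rough illustration of the scheme described here (the class, method names, and revocation logic are hypothetical, not any particular product), the escrow side might look like:

```python
import hashlib
import hmac
import secrets

class KeyEscrowServer:
    """Toy sketch: holds the volume key off-device; can revoke it the
    moment a device is reported stolen, so the key never needs to live
    in the device's non-volatile storage."""

    def __init__(self):
        self.keys = {}       # device_id -> (volume_key, password_hash)
        self.revoked = set()

    def enroll(self, device_id, password):
        volume_key = secrets.token_bytes(32)
        # A real system would use a slow KDF (argon2/scrypt), not bare SHA-256.
        pw_hash = hashlib.sha256(password.encode()).digest()
        self.keys[device_id] = (volume_key, pw_hash)
        return volume_key  # used once to encrypt the disk, then discarded by the device

    def revoke(self, device_id):
        self.revoked.add(device_id)

    def request_key(self, device_id, password):
        if device_id in self.revoked:
            return None  # stolen device: the key is simply never handed out again
        volume_key, pw_hash = self.keys[device_id]
        if not hmac.compare_digest(pw_hash, hashlib.sha256(password.encode()).digest()):
            return None
        return volume_key

server = KeyEscrowServer()
k = server.enroll("laptop-1", "correct horse")
assert server.request_key("laptop-1", "correct horse") == k
server.revoke("laptop-1")  # device reported stolen
assert server.request_key("laptop-1", "correct horse") is None
```

The point of the sketch is only the revocation property: after `revoke()`, no amount of physical access to the powered-off device yields the key, because it was never stored there.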


> The device uses FDE and the key is provided over the network during boot, in the laptop case after the user provides a password.

Sounds nice on paper, has issues in practice:

1. no internet (e.g. something like Iran)? Your device is effectively bricked.

2. heavily monitored internet (e.g. China, USA)? It's probably easy enough for the government to snoop your connection metadata and seize the physical server.

3. no security at all against hardware implants / base firmware modification. Secure Boot can cryptographically prove to the OS that your BIOS, your ACPI tables and your bootloader didn't get manipulated.


> no internet (e.g. something like Iran)? Your device is effectively bricked.

If your threat model is Iran and you want the device to boot with no internet then you memorize the long passphrase.

> heavily monitored internet (e.g. China, USA)? It's probably easy enough for the government to snoop your connection metadata and seize the physical server.

The server doesn't have to be in their jurisdiction. It can also use FDE itself and then the key for that is stored offline in an undisclosed location.

> no security at all against hardware implants / base firmware modification. Secure Boot can cryptographically prove to the OS that your BIOS, your ACPI tables and your bootloader didn't get manipulated.

If your BIOS or bootloader is compromised then so is your OS.


> If your threat model is Iran

Well... they wouldn't be the first ones to black out the Internet either. And I'm not just talking about threats specific to oneself here because that is a much different threat model, but the effects of being collateral damage as well. Say, your country's leader says something that makes the US President cry - who's to say he doesn't order SpaceX to disable Starlink for your country? Or that Russia decides to invade yet another country and disables internet satellites [1]?

And it doesn't have to be politically related either, say that a natural disaster in your area takes out everything smarter than a toaster for days if not weeks [2].

> If your BIOS or bootloader is compromised then so is your OS.

well, that's the point of the TPM design and Secure Boot: that is no longer true. The OS can verify everything executed prior to its startup back to a trusted root. You'd need 0-day exploits - while these exist, including unpatchable hardware issues (iOS checkm8 [3]), they are incredibly rare and expensive.

[1] https://en.wikipedia.org/wiki/Viasat_hack

[2] https://www.telekom.com/de/blog/netz/artikel/lost-place-und-...

[3] https://theapplewiki.com/wiki/Checkm8_Exploit
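The "verify back to a trusted root" step works by hash-chaining measurements of each boot stage. A toy sketch of the TPM's PCR-extend idea (simplified; not a real TPM interface, and the stage names are placeholders):

```python
import hashlib

def extend(pcr, component):
    # TPM-style PCR extend: pcr' = H(pcr || H(component)). The register
    # can only accumulate measurements, never be set directly, so the
    # final value commits to the entire sequence of boot stages.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# Measure the expected chain: firmware -> bootloader -> kernel.
pcr = bytes(32)
for stage in [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]:
    pcr = extend(pcr, stage)
expected = pcr

# An attacker swaps in a backdoored bootloader; the chain diverges.
pcr = bytes(32)
for stage in [b"firmware-v1", b"evil-bootloader", b"kernel-v1"]:
    pcr = extend(pcr, stage)

assert pcr != expected  # a key sealed to `expected` would not be released
```

This is why "return true;" in a tampered stage doesn't help the attacker in the measured-boot model: the secret is sealed to the expected register value, and a modified chain simply never produces it.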


> Say, your country's leader says something that makes the US President cry - who's to say he doesn't order SpaceX to disable Starlink for your country?

Then you tether to your phone or visit the local library or coffee shop and use the WiFi, or call into the system using an acoustic coupler on an analog phone line or find a radio or build a telegraph or stand on a tall hill and use flag semaphore in your country that has zero cell towers or libraries, because you only have to transfer a few hundred bytes of protocol overhead and 32 bytes of actual data.

At which point you could unlock your laptop, assuming it wasn't already on when you lost internet, but it still wouldn't have internet.

> The OS can verify everything being executed prior to its startup back to a trusted root.

Code that asks for the hashes and verifies them can do that, but that part of your OS was replaced with "return true;" by the attacker's compromised firmware.


The boot verification code wasn't replaced, because it sits in the encrypted partition.


That's premised on the attacker never having write access to the encrypted partition, which is the thing storing the FDE key on a remote system or removable media does better than a TPM. If the key is in a TPM, they can extract it using a TPM vulnerability or specialized equipment. Or boot up the system and unlock the partition by running the original signed boot chain, giving the attacker the opportunity to compromise the now-running OS using DMA attacks, cold-boot attacks, etc. Or they can stick it in a drawer without network access to receive updates until someone publishes a relevant vulnerability in the version of the OS that was on it when it was stolen.

Notice that if they can modify/replace the device without you noticing then they can leave you one that displays the same unlock screen as the original but sends any credentials you enter to the attacker. Once they've had physical access to the device you can't trust it. The main advantage of FDE is that they can't read what was on a powered off device they blatantly steal, and then the last thing you want is for the FDE key to be somewhere on the device that they could potentially extract instead of on a remote system or removable media that they don't have access to.


they said network, not internet :)


> the device uses FDE and the key is provided over the network during boot

An example of such an implementation, since well before TPMs were commonplace: https://www.recompile.se/mandos


I (the commenter you responded to) am a security engineer by trade and I'm arguing that SB is useful. I'm not sure if the parent commenter is or isn't a security person but my interactions with other people in the security field have given me the impression that most of them think it's good, too.

So I'm a little confused about the "can't threat model for shit" part; I think these sorts of attacks are definitely within most security folks' threat models, haha


Security professionals wanting to have security solutions they can sell to people doesn't mean that those people actually need or benefit from those solutions. Security professionals tend to vastly overestimate the threat models relevant for regular people and have no concern for anything other than so-called security.


>It's necessary for FDE to have any sort of practical security

why? do you mean because evil maid attacks exist? anyone that cared enough about that specific vector just put their bootloader on removable media. FDE wasn't somehow enabled by secure boot.

>bootkits are a security nightmare and would otherwise be much more common in malware

why weren't they more common before?

serious question. Back in the 90s viruses were huge business, BIOS was about as unprotected as it would ever possibly be, and lots of chips came with extra unused memory. We still barely ever saw that kind of malware.


> anyone that cared enough about that specific vector just put their bootloader on removable media. FDE wasn't somehow enabled by secure boot.

Sure, but an attacker could still overwrite your kernel which your untouched bootloader would then happily run. With SB at least in theory you have a way to validate the entire boot chain.

> why weren't they more common before?

Because security of the rest of the system was not at the point where they made sense. CIH could wipe system firmware and physically brick your PC - why write a bootkit then? Malware then was also less financially motivated.

When malware moved from notoriety-driven to financially-driven in the 2000s, bootkits did become more common with things like Mebroot & TDL/Alureon. More recently, still before Secure Boot was widespread, we had things like the Classic Shell/Audacity trojan which overwrote your MBR: https://www.youtube.com/watch?v=DD9CvHVU7B4 and Petya ransomware. With SB this is an attack vector that has been largely rendered useless.

It's also a lot more difficult to write a malicious bootloader than it is to write a usermode app that runs itself at startup and pings a C2 or whatever.


> Sure, but an attacker could still overwrite your kernel which your untouched bootloader would then happily run.

Except that it's on the encrypted partition and the attacker doesn't have the key to unlock it since that's on the removable media with the boot loader.

They could write garbage to it, but then it's just going to crash, and if all they want is to destroy the data they could just use a hammer.


The attacker does this when the drive is already unlocked & the OS is running.

Backdooring your kernel is much, much more difficult to recover from than a typical user-mode malware infection.


> The attacker does this when the drive is already unlocked & the OS is running.

But then you're screwed regardless. They could extract the FDE key from memory, re-encrypt the unlocked drive with a new one, disable secureboot and replace the kernel with one that doesn't care about it, copy all the data to another machine of the same model with compromised firmware, etc.


> serious question. Back in the 90s viruses were huge business,

No, they were not. They were toys written for fun and/or mischief. The virus authors did not receive any monetary reward from writing them, so they were not even a _business_. So they were the work of individuals, not large teams.

The turning point was Bitcoin. Suddenly it provided all those nice new business models that can be scaled up: mining, stealing cryptowallets, ransomware, etc.


Malware was absolutely used to sell botnet access in the 90s, millions of Windows machines were used for DDoS and as anonymous proxies


The '90s was a bit too soon for that. Most people using the Internet then were still on dialup, to the extent they were connected at all. There weren't that many DDoSes yet. Even the Trin00 DDoS in 1999 only involved 114 machines.


DDoS for sale were not a big thing until Bitcoin. You couldn't transfer meaningful amounts anonymously.

And no, lol. There were no million-machine botnets in the '90s. You could DDoS entire countries with a few dozen computers; Slammer did that accidentally with Korea.


Secure Boot provides no useful security for an individual user on the machine they own, and as such should be disabled by default.

If you want to enable it for enterprise/business situations, that's fine, but one should be clear about that. Otherwise you get the exact Microsoft situation you mentioned and also no one knows about it.


So everyday users should be vulnerable to bootkits and kernel-mode malware...why, exactly? That is useful security. The fact that people do not pursue this type of malware very frequently is an effect of SB proliferation. If it were not the default then these attacks would be more popular.


Everyday users care most about the files in their home directory (or cloud services these days). The OS kernel and ring 0 aren't any more important to them than that.


Ooh, I like this argument a lot. Right now I'm thinking a good analogy is, you live in a gated community, but the locks on your house and your ring camera are fine -- but your overly annoying gate system makes it hard for people or deliveries to get to you etc.


This is a tiresome argument that is based on a pile of unstated and rather shaky assumptions, ignores the very concept of opportunity costs and does not consider alternative solutions to the problems you seem to consider so important.

For starters, UEFI Secure Boot is actually rather bad at protecting users from bootkits or kernel-mode malware or anything, really. You can search this very website to get a giant list of bypasses and news about leaked vendor keys. Not to mention the fact that the CrowdStrike Falcon incident clearly demonstrated that Microsoft is more than happy to sign utterly insecure garbage.

Also, the issues with boot malware and kernel verification could be solved in many other ways, many of which are much more sensible or elegant. For example, by storing the bootloader and its keys on a physically separate read-only medium.

The issues with UEFI Secure Boot are actually the main point of the system, just like the issues with Windows executable signing are the whole point of that system.


[flagged]


Citation for what? The existence of bootkits?

Petya/NotPetya, Alureon, Carberp/Rovnix, Gapz, LoJax (firmware rootkit!).

All of these attacks would be thwarted by SB (and in Petya's case, simply having UEFI enabled at all, since that was only for BIOS machines)


No. The existence of actually dangerous bootkits in relation to ease of use of UEFI, ease of prevention, likelihood and magnitude of harm of said bootkits and adverse secondary problems when UEFI is used.


You're arguing for not wearing seatbelts because no evidence that anyone has ever been saved by wearing one has been presented. That's just stupid: it refutes ubiquitously understood data and facts.

SecureBoot ensures a valid, signed OS is installed and that the boot process generally hasn't been completely compromised in a difficult-to-mitigate manner. It provides a specific guarantee rather than universal security. Talking about "many vectors" has nothing to do with SecureBoot or boot-time malware.


No, you're arguing for wearing a spacesuit when riding your bicycle.


Instead of proprietary SecureBoot controlled by megacorps, you can use TPM with Heads based entirely on FLOSS with a hardware key like Librem Key. Works for me and protects from the Evil Maid attack.


You can also use SB with your own keys (or even just hashes)...just because Microsoft is the default included with most commercially sold PCs—since most people use Windows on their PCs—doesn't mean SB is controlled by them. You can remove their signing cert entirely if you want. I have done this and used my own.

Plus they signed the shim loader for Linux anyways so they almost immediately gave up any "control" they might have had through SB.


Won't removing the Microsoft key prevent UEFI option ROMs from PCIe cards from loading when Secure Boot is enabled?

Is it even possible to install firmware containing an oprom resigned with a custom key onto, say, a modern Nvidia GPU, without the entire firmware bundle being signed by Nvidia's own key?


Anything that restricts user freedom is entirely bad, even if it's at the expense of security.


But...it doesn't restrict user freedom. If the user wishes to do so, they can disable SB.


And will then be locked out from an increasing number of applications, media, and eventually even websites.


I run Linux with Secure Boot and I don't feel locked out of any media, applications, or websites.

My mom uses Secure Boot with Windows and doesn't know or care that it's enabled at all.


The OP is describing the status quo on mobile phones and tablets. On mobile Secure Boot, and systems like it, are used to lock out the user. If the boot path integrity is altered, some apps won't work or will provide degraded experiences.

What's happening in the article is what has already happened on mobile: the OS requires vendor signing to run anything, and the vendor locks out 3rd party drivers from their OS entirely.

It's yet another step towards desktop computing converging with mobile when it comes to software/firmware/boot/etc integrity attestation, app distribution and signing, and the ability to use your own bootloader and system drivers. When Secure Boot was first rolled out on laptops, Microsoft used it to lock the user out of the boot process before it was adapted to let users register their own keys. It can always be used for its original purpose again, the way it's currently used on mobile.


They shouldn't _have_ to do anything. The point is that no demands should be placed upon users.

Same problem with age gating. It's fine, as long as zero additional demands are placed upon users.


Freedom from the consequences of malware is more valuable than the low cost of turning SecureBoot off if you don’t want it.

We shouldn’t need the hassle of locks on our home and car doors, but we understand they are probably worthwhile for most people.


Do you lock your house or car and permanently handover the keys to some stranger, who you then have to depend on always to lock or unlock it for you?


No? I have locks on my house and car that I have the keys for. That's an argument _for_ secure boot.


It is absolutely not.

It's a decent one for "locks on an apartment building that someone else owns."

But no, purchasing a house ought not include by default "a set of locks that you must work around, permission-wise."


Funnily enough, when you buy a house, the first task is to change all the locks.

Y’know, for security.


Sure. Now, of the people who buy houses -- how many of them would find this a difficult or onerous task?

And then, do computers.

Apples and oranges here, for this point.


Cost me $500 recently. Not difficult, but costly.

Sorry dwattttt, I’m unable to verify your identity and your keys are disabled. If you have an issue, please fax a copy of your DUNS number.


You don't have the ability to revoke my keys on this machine, that's the point. Not even MS could do that, because these are _my_ keys. The alternative proposed here is no keys at all.


What's the improved security argument for terminating VeraCrypt's account though? SB does have clear benefits but what is unclear is the motivation for the account termination.

What's the likelihood that this account ban provides zero security benefit to users and was instead a requirement from the gov because Veracrypt was too hard to crack/bypass.


Are the demands that users become experts in providing their own security against more advanced actors not significantly worse? The control part is unfortunate, but the defaults should make it so users can focus on sharing pictures of cats without fear or need for advanced cybersecurity knowledge.


Users who care enough to do so can enrol their own keys using the extremely well documented process to do that.

Users who don’t care about the runtime integrity of their machine can just turn it off.

Both options are so easy that you could’ve learned how to do them on your machine in the time that you spent posting misinformation in this thread.


So like banks requiring you to have a PIN on your ATM card, even if you don’t want one… that’s bad? Seatbelt laws are bad?


I don't know about executable signing, but in the embedded world SecureBoot is also used to serve the customer; that is, to provide guarantees to the customer that the firmware of the device they receive has not been tampered with at some point in the supply chain.


Computers should abide by their owners. Any computer not doing that is broken.


It's a simple solution to enable in law: force manufacturers to allow owners of a computer to put any signing key in the BIOS.

We need this law. Once we have it, consumers can get the maximum benefit of secure boot without losing control.


But that's how it already works.

If you install Windows first, Microsoft takes control (but it graciously allows Linux distros to use their key). If you install Linux first, you take control.

It's perfectly possible for you to maintain your own fully-secure trust chain, including a TPM setup which E.G. lets you keep a 4-digit pin while keeping your system secure against brute force attacks. You can't do that with the 1990s "encryption is all you need" style of system security.
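A toy model of that PIN-plus-TPM point (the class, the lockout limit, and all names below are illustrative, not any specific TPM's policy): the TPM enforces an attempt counter in hardware, so a 4-digit PIN that would fall instantly to offline brute force survives because the attacker only gets a handful of guesses.

```python
class TpmSim:
    """Illustrative stand-in for a TPM's dictionary-attack lockout."""
    MAX_TRIES = 5  # placeholder limit; real TPMs have configurable lockout policies

    def __init__(self, pin, sealed_key):
        self._pin = pin
        self._key = sealed_key
        self.failures = 0

    def unseal(self, pin_attempt):
        if self.failures >= self.MAX_TRIES:
            raise RuntimeError("TPM in lockout; wait for the lockout timer")
        if pin_attempt != self._pin:
            self.failures += 1
            return None
        self.failures = 0  # correct PIN resets the counter
        return self._key

tpm = TpmSim(pin="4821", sealed_key=b"disk-key")

# Brute force stalls almost immediately: 5 wrong guesses out of 10,000 PINs.
for guess in ("0000", "0001", "0002", "0003", "0004"):
    assert tpm.unseal(guess) is None
try:
    tpm.unseal("0005")
except RuntimeError:
    pass  # locked out before the search space is meaningfully covered
```

With a bare "encryption is all you need" setup there is no such counter: the attacker copies the disk and grinds through all 10,000 PINs offline in milliseconds.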


It's funny, but I just encountered this for the first time the other day - feels like I had to do a lot of digging to find out how to do this so that I could add my LUKS key to my TPM... really felt like it took some doing on the HP all-in-one that I was trying to put debian on... maybe because it was debian being debian


Not really. There are many laptops where you cannot really get rid of the Microsoft key and also cannot put in your own key.

Most embedded processors sadly don't have a BIOS, and the signing key is permanently burned into the processor via eFUSEs.


Yes, BIOS is really a PC-thing, AFAIK. Embedded processors have "bootloaders" which often serve a similar purpose of performing the minimal viable hardware initializations in order to load the OS kernel.


> It's a simple solution to enable in law: force manufacturers to allow owners of a computer to put any signing key in the BIOS.

...it's already allowed. The problem is that this isn't the default, but opt in that you need quite a lot of knowledge to set up


I have set it up on the worst laptops, but there are laptops like the HP x360 which don't allow modification at all.

I make the analogy with a company because, on that front, ownership seems to matter a lot in the Western world. It's as if a business had to accept unfaithful management appointed by another company it's a customer of, as a condition of using that company's products. Worse, said provider is also a provider for every other business, and its products are not interoperable. How long before courts jump in to prevent this and give control back to the business owner?


This gets tricky. If I click on a link intending to view a picture of a cat, but instead it installs ransomware, is that abiding by its owner or not? It did what I told it to do, but not at all what I wanted.


We don't need to get philosophical here. You (the admin) can require you (the user) to input a password to signify to you (the admin) to install the ransomware when a link is clicked. That way no control is lost.


What if the cat pictures are an app too? The computer can't require a password specifically for ransomware, just for software in general. The UI flow for cat pictures apps and ransomware will be identical.


A computer that can run arbitrary programs can necessarily run malicious ones. Useful operations are often dangerous, and a completely safe computer isn't very useful.

Some sandboxing and a little friction to reduce mistakes is usually wise, but a general-purpose computer that can't be broken through sufficiently determined misuse by its owner is broken as designed.


If you connect your computer to the Internet, it can get hacked. If you leave it logged in unattended or don't use authentication, someone else can use it without your permission.

This isn't rocket science and it has nothing to do with artificially locking down a computer to serve the vendor instead of the owner.

Edit: I'd like to add that no amount of extra warranty from the vendors are going to cover the risk of a malware infection.


The ransomware can encrypt the files in your home directory just as well with secure boot enabled.

This is just another example of how secure boot provides zero additional security for the threat models normal users face.



And what if that customer wants to run their own firmware, say after the manufacturer goes out of business? "Security" in this case conveniently prevents that.


Well, that's a different market. What I say is that there are markets in which customers wants to be sure that the firmware is from "us".

And those markets are certainly not IoT gizmos, which I suspect induce some knee-jerk reactions, and I understand that because I'm a consumer too.

But big/serious customers actually look at the financial health of the company they buy from, and would certainly not consider running their own firmware on someone else's product; they buy off-the-shelf products because it's not their domain of expertise (software development and/or whatever the device does), most of the time.


you click the box to turn off secure boot


And how do you do that on some locked down embedded device? Say, a thermostat for instance.


...and then some essential software you need to run detects that and refuses to run. See where the problem is here?


It does no such thing if you enrol your own keys using the extremely well documented process to do that.


It's fair to think of secure boot in only the PC context but the model very much extends to phones. It seems ridiculous to me that to use a coupon for a big mac I have to compromise on what features my phone can run (either by turning on secure boot and limiting myself to stock os or limiting myself to the features and pricing of the 1 or 2 phones that allow re-locking).


And the PC situation is only a leftover due to historical circumstances that will be "corrected" in due time. Microsoft already tried this once with their ARM devices.


Where is this "extremely well documented process" to enroll new signing keys on an embedded device? I don't see one for any of these embedded processors with secure boot.

https://pip-assets.raspberrypi.com/categories/1214-rp2350/do...

https://documentation.espressif.com/esp32_technical_referenc...

https://docs.amd.com/v/u/en-US/ug1085-zynq-ultrascale-trm


Tradeoffs. Which is more likely here?

1. A customer wants to run their own firmware, or

2. Someone malicious close to the customer, an angry ex, tampers with their device, and uses the lack of Secure Boot to modify the OS to hide all trace of a tracker's existence, or

3. A malicious piece of firmware uses the lack of Secure Boot to modify the boot partition to ensure the malware loads before the OS, thereby permanently disabling all ability for the system to repair itself from within itself

Apple uses #2 and #3 in their own arguments. If your Mac gets hacked, that's bad. If your iPhone gets hacked, that's your life, and your precise location, at all times.


1. P(someone wants to run their own firmware)

2. P(someone wants to run their own firmware) * P(this person is malicious) * P(this person implants this firmware on someone else’s computer)

3. The firmware doesn’t install itself

Yeah I think 2 and 3 is vastly less likely and strictly lower than 1.


As an embedded programmer in my former life, the number of customers that had the capability of running their own firmware, let alone the number that actually would, rapidly approaches zero. Like it or not, what customers bought was an appliance, not a general purpose computer.

(Even if, in some cases, it was just a custom-built SBC running BusyBox, customers still aren't going to go digging through a custom network stack).


The customers don't have to install the firmware themselves, they can have a friend do it or pay a repair shop. You know, just like they can with non-computerized tools that they don't fully understand.


I’m not talking about your buddy’s Android phone, the context was embedded systems with firmware you’re not going to find on xda developers. A “friend” isn’t going to know jack shit about installing firmware on an industrial control.


This guy thinks that if you rephrase an argument but put some symbols around it you’ve refuted it statistically.

P(robably not)


The argument is that P(customer wants to run their own firmware) cancels out, and 2 and 3 are just the raw probability of you being on the receiving end of an evil maid attack. If you think that probability is high, a locked bootloader won’t save you.


Very neat, but 1) is not really P(customer wants to run their own firmware), but P(customer wants to run their own firmware on their own device).

So the first terms in 1) and 2) are NOT the same, and it is quite conceivable that the probability of 2) is indeed higher than that of 1) (which your pseudo-statistical argument aimed, unsuccessfully, to refute).


As if the monetary gain of 2 and 3 never entered the picture. Malicious actors want 2 and 3 to make money off you! No one can make reasonable amounts of money off 1.


I encourage you to re-evaluate this. How many devices do you own (or have you owned) that have a microcontroller? (This includes all your appliances, your clocks, and many other things you own which use electricity.) How many of these have you reflashed with custom firmware?

Imagine any of your friends, family, or colleagues (including non-programmers/hackers/embedded engineers). What would their answers be?


I would reflash almost all my appliances if I could do so easily since they all come with non-optimal behavior for me.


On Android, according to the Coalition Against Stalkerware, over 1 million people every year fall victim to spyware deliberately installed on an unlocked device by someone close to them.

#2 is WAY more likely than #1. And that's on Android which still has some protections even with a sideloaded APK (deeply nested, but still detectable if you look at the right settings panels).

As for #3; the point is that it's a virus. You start with a webkit bug, you get into kernel from there (sometimes happens); but this time, instead of a software update fixing it, your device is owned forever. Literally cannot be trusted again without a full DFU wipe.


And where are the stats for people running their own firmware and are not running stalkerware for comparison? You don’t need firmware access to install malware on Android, so how many of stalkerware victims actually would have been saved by a locked bootloader?


The entirety of GrapheneOS is about 200K downloads per update. Malicious use therefore is roughly 5-1.

> You don’t need firmware access to install malware on Android, so how many of stalkerware victims actually would have been saved by a locked bootloader?

With a locked bootloader, the underlying OS is intact, meaning that the privileges of the spyware (if you look in the right settings panel) can easily be detected, revoked, and removed. If the OS could be tampered with, you bet your wallet the spyware would immediately patch the settings system, and the OS as a whole, to hide all traces.


LineageOS alone has around 4 million active users. So malicious use is at most 1:4, not 5:1.


Assuming that we accept your premise that the most popular custom firmware for Android is stalkerware (I don’t). This is of course, a firmware level malware, which of course acts as a rootkit and is fully undetectable. How did the coalition against stalkerware, pray tell, manage to detect such an undetectable firmware level rootkit on over 1 million Android devices?


> The entirety of GrapheneOS is about 200K downloads per update. Malicious use therefore is roughly 5-1.

Can you stop this bad faith bullshit please? "Stalkerware" is an app, not an alternate operating system, according to your own source. You're comparing the number of malicious app installs to the number of installs of a single 3rd party Android OS which is rather niche to begin with.

You don't need to install an alternate operating system to stalk someone. And in fact that's nearly impossible to do without the owner noticing because the act of unlocking the bootloader has always wiped the device.

> The Coalition Against Stalkerware defines stalkerware as software, made available directly to individuals, that enables a remote user to monitor the activities on another user’s device without that user’s consent and without explicit, persistent notification to that user in a manner that may facilitate intimate partner surveillance, harassment, abuse, stalking, and/or violence. Note: we do not consider the device user has given consent when apps merely require physical access to the device, unlocking the device, or logging in with the username and password in order to install the app.

> Some people refer to stalkerware as ‘spouseware’ or ‘creepware’, while the term stalkerware is also sometimes used colloquially to refer to any app or program that does or is perceived to invade one’s privacy; we believe a clear and narrow definition is important given stalkerware’s use in situations of intimate partner abuse. We also note that legitimate apps and other kinds of technology can and often do play a role in such situations.

- https://stopstalkerware.org/information-for-media/


This assumes a high level of technical skill and effort on the part of the stalkerware author, and ignores the unlocked bootloader scare screen most devices display.

If someone brought me a device they suspected was compromised and it had an unlocked bootloader and they didn't know what an unlocked bootloader, custom ROM, or root was, I'd assume a high probability the OS is malicious.


> And that's on Android which still has some protections even with a sideloaded APK (deeply nested, but still detectable if you look at the right settings panels).

Exactly, secure boot advocates once again completely miss that it doesn't protect against any real threat models.


Clearly you’ve never met my exes (or a past employer). Not even being sarcastic this time.


You expect that stuff to happen with three-letter agencies.


Sorry, I have no idea what you are trying to say.


It happens with three-letter agencies like the NSA and CIA, which keep close tabs on current and former employees.

> 2. Someone malicious close to the customer, an angry ex, tampers with their device, and uses the lack of Secure Boot to modify the OS to hide all trace of a tracker's existence, or

Lol security people are out of their mind if they think that's actually a relevant concern.

> 3. A malicious piece of firmware uses the lack of Secure Boot to modify the boot partition to ensure the malware loads before the OS, thereby permanently disabling all ability for the system to repair itself from within itself

Oh no so now the malware can only permanently encrypt all the users files and permanently leak their secrets. But hey at least the user can repair the operating system instead of having to reinstall it. And in practice they can't even be sure about that because computers are simply too complex.


#2 and #3 are fearmongering arguments and total horseshit, excuse the strong language.

Should either of those things happen the bootloader puts up a big bright flashing yellow warning screen saying "Someone hacked your device!"

I use a Pixel device and run GrapheneOS, the bootloader always pauses for ~5 seconds to warn me that the OS is not official.


Yes. They're making the point that your flashing yellow warning is a good thing, and that it's helpful to the customer that a mechanism is in place to prevent it from being disabled by an attacker.


No, they've presented a nonsense argument which Apple uses to ban all unofficial software and firmware as if it had some merit.


Then that customer shouldn't buy a device that doesn't allow for their use case. Exercise some personal agency. Sheesh.


What happens when there are no more devices that allow for that use case? This is already pretty much the case for phones, it's only a matter of time until Microsoft catches up.


There are still phones not obeying the megacorps. Sent from my Librem 5.


Does your Librem 5 run banking apps, though?


Waydroid lets you run Android apps that don't require SafetyNet. If your bank forces you into the duopoly with no workaround, that's a good reason to switch.


And you only have that option as long as people oppose that secure boot enabled dystopia.


I don't know about executable signing, but in the embedded world SecureBoot is also used to serve the PRODUCER; id est provide guarantees to the PRODUCER that the firmware of the device they SELL has not been tampered with at some point in the PROFIT chain.


In my case a firmware provider went out of business, and on one particular device the firmware gets stuck in an endless boot loop. It tries to calibrate some LEDs, but forgets to round some differences, so it can never converge to a proper calibration.

The device is bricked, the firmware is secured with a signing key, and building a replacement device is pretty hard; the current one needed 10 years of development. I'm waiting either to patch the firmware by finding the problematic byte (if it's patchable; a proper round() fix needs much more) or for the original dev to be willing to release an update on his own. BTW, Claude Opus has gotten much better than Ghidra lately. It's perfect.

I see the value of protected firmware updates, but business has to survive also.


Frankly: that's stupid. In case you didn't figure it out, I work in the field, and I can tell you that this was not the mindset at the places where I worked.


> id est provide guarantees to the customer that the firmware of the device they receive has not been tampered with

The firmware of the device being a binary blob for the most part... Not like I trust it to begin with.

Whereas my open source Linux distribution requires me to disable SecureBoot.

What a world.


You can set up custom SecureBoot keys on your firmware and configure Linux to boot using it.

There's also plenty of folks combining this with TPM and boot measurements.

The ugly part of SecureBoot is that all hardware comes with MS's keys, and lots of software assume that you'll want MS in charge of your hardware security, but SecureBoot _can_ be used to serve the user.

Obviously there's hardware that's the exception to this, and I totally share your dislike of it.
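As a sketch of what enrolling your own keys can look like on a UEFI machine whose firmware is in Setup Mode, using the `sbctl` helper (the commands are illustrative; the kernel and bootloader paths below are Arch-style examples, and the exact flow depends on your firmware and distro):

```shell
# Generate your own Platform Key, KEK, and db signing keys
sbctl create-keys

# Enroll them in the firmware, optionally alongside Microsoft's keys
# so option ROMs and a dual-boot Windows install keep working
sbctl enroll-keys --microsoft

# Sign your kernel and bootloader so the firmware will accept them
sbctl sign -s /boot/vmlinuz-linux
sbctl sign -s /boot/EFI/systemd/systemd-bootx64.efi

# Verify that Secure Boot is active and everything is signed
sbctl status
```

The `-s` flag records each file in sbctl's database so it gets re-signed automatically on updates.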


> You can set up custom SecureBoot keys on your firmware and configure Linux to boot using it.

Right, but as engineers, we should resist the temptation to equate _possible_ with _practical_.

The mere fact that even the most business-oriented Linux distributions have issues playing along with SecureBoot is worrying. Essentially, SB has become a Windows-only technology.

The promised usefulness of SB is even muddier. I would argue that the chances of being a victim of firmware tampering are pretty thin compared to other attack vectors, yet somehow we all ended up with SB, and its most significant achievement is training people that disabling it is totally fine.


+1

An unsigned hash is plenty of protection against tampering. The supply chain and any secret sauce that went into that firmware is just trust. Trust that the blob is well intentioned, trust that you downloaded it from the right URL, checked the right SHA, trust that the organization running the URL is sanctioned to do so by Microsoft...

Once all of that trust for every piece of software is concentrated in one organization (Microsoft, Apple or Google), it has become totally meaningless.
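For what it's worth, plain checksum verification with coreutils needs no central signer at all, only a trustworthy place to publish the digest (the file names here are made up for illustration):

```shell
# Record the digest of a blob at the point you trust it
printf 'firmware-v1.0' > firmware.bin
sha256sum firmware.bin > SHA256SUMS

# Later, anyone holding SHA256SUMS can check the blob for tampering
# without trusting any central signer; prints "firmware.bin: OK"
sha256sum -c SHA256SUMS
```

The trust question then shifts entirely to how you obtained SHA256SUMS, which is exactly the point above.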


It's to serve the regulators. The Radio Equipment Directive essentially requires the use of secure boot for new devices.


I happen to like knowing that my mobile device did not have a ring 0 backdoor installed before it left the factory in Asia. SecureBoot gives me that confidence.


No it doesn't? The factory programs in the secure boot public keys


The public keys are provided by the developer. Google, or Apple, for example. It's how they know that nothing was tampered with before it left the factory.


"Nothing has been tampered with" doesn't mean there's no factory backdoor; it only means the device is the same as when it left the factory, nothing more.


Apple or Google know what the cryptographic signature of the boot should be. They provide the keys. It's how they know that "factory reset" does not include covert code installed by the factory. That's what we're talking about.


This is true for phones but not for IoT in general.


Well, unless the govt tells MS to tamper with it.


If only people didn't install Ask Jeeves toolbars all over the place and then asked their grandson during vacations to clean their computer.


Geez, this brings back memories.

At one time at our university we had desktop table dancers installed everywhere. It was kind of funny when one turned up just as a student wanted to defend their work in a lab.


Hey I made some good money from that as a kid. And some of the malware that people ended up with was also fairly visually pleasing to a teenager.


> I still hope that one of these days people in general will realize that executable signing and SecureBoot are specifically designed for controlling what a normal person can run, rather than for anything resembling real security

For home/business users I'd agree. But in embedded and money-handling systems it's a life-saver and a really important technology.


If by "really important technology" you mean it lets companies save a bit on fraud-related expenses, then sure. But the world worked just fine with much simpler solutions, because, secure boot or not, we have plenty of ways to discourage most people from committing crimes.


Videogames are increasingly demanding secure boot.


A few competitive online games do, but most don't. That's why nowadays so many games run great on Linux.


Executable signing is also designed to make easy money from selling certificates


Apple is also somewhat responsible for the attitude shift with the introduction of iOS. 20-25 years ago a locked down bootloader and only permitting signed code would have been seen by techies as dystopian. It's now quite normalized. They say it's about security but it's always been about control.

Stallman tried to warn us with "tivoization".


This is like saying you shouldn't vaccinate your kids because no one gets polio anymore


We don't just pump our kids with every vaccine ever developed "just in case" either. Instead we weigh actual risk against possible side effects, a concept most security people seem unable to grasp.


> use less RAM.

The same accounts that defended and promoted LLM use just a few weeks ago are now telling RPi users to use less RAM.


RPis are used in a lot of embedded devices, from industrial IoT to music keyboards. You can't easily use refurbished laptops for those[1].

--

[1] Korg Kronos with its crazy Intel Atom based architecture notwithstanding.


The current strategy of the AI hype machine is to exhaust people's reserves of attention by presenting a never-ending stream of hard-to-verify "positive" claims. It's a Gish gallop done at Internet scale, with an endless parade of tech influencers, proxy "journalists" and low-value accounts. The whole strategy aims for saturation and demoralized acceptance.

It's no surprise that people readjust their immediate reactions by expressing hostility and skepticism about anything AI-related without spending much time on analysis. In fact, it's an entirely rational response.

Complaining about it without acknowledging the larger picture is disingenuous.

In this particular case, using the term "machine learning" would likely avoid the immediate negative reaction.


It feels related to “it’s easier to argue with a smart person than an idiot.”

It’s really exhausting to feel negative all the time when faced with the cavalcade of terribly weak claims.


The Gaussian Processes underpinning this work are hardly a product of the 'AI Hype Machine' - they've been around for decades, have strong statistical underpinnings, and are being widely explored for experimental design across many disciplines. Reflexive and poorly-informed backlash to any variety of machine learning is no more productive than blindly hyping up LLMs.


Meta Platforms, Inc featuring this technology with a title announcing “AI for American-produced cement and concrete” is, on the other hand, 1000% a product of the AI Hype Machine.


Sure, it's clearly marketing. I think a private company pursuing marketing via open research with open source code (including datasets) is a good trade. A hypey blogpost + research is better than no blogpost and no research.


Stop blaming random stuff for your own shortcomings.


Written like someone who hasn't used AI since the great paradigm shift of December 2025.


Was that the one immediately after the great paradigm shift of November 2025, and before the great paradigm shift of January 2026? I think I remember it.


There was no such paradigm shift. LLMs still suck just as much as they did before, in the exact same ways they did before. In 6 months you'll be trying to BS us about the "great paradigm shift of summer 2026".


> weaponisation of FOMO

This is an excellent characterization of the kind of marketing tactic I see all over social media right now and find absolutely disgusting.

The keyword here is fear. Despite the faux-positive veneer, the messaging around certain technologies (especially GenAI) is clearly designed to induce anxiety and fear, rather than inspire genuine optimism or pique curiosity. This is significant, because fear is one of the most powerful tools for shutting down rational thinking.

The subliminal (although not very subtle) message there is something very primitive. "If you don't join our group, you will soon starve to death." This is radically different from how most transformative technologies were promoted in the past.


It seems that the emotional rhetorical range in general has been stunted to just fear. Politicians seem to be the worst at it. They used to be able to give actually inspiring speeches. Now they just mash the fear button for everything to get what they want and then wonder why there are problems of despair in society.


I think AI is not quite the same as crypto when it comes to FOMO. At the peak of the craze you could not write on HN that 'crypto is nonsense' unless you wanted to be modded down to oblivion, to be shadow banned forever. I exaggerate, but not much.

With AI people are able to say 'this is nonsense' without people getting the pitchforks out.

As for myself, I don't have the bandwidth to learn how to do clever things with AI. I know you just have to write a prompt and it all happens by magic, but I have been burned quite badly.

First off, my elderly father got tricked out of all of his money and my mother's savings, which were intended for my niece, when she comes of age. It was an AI chatbot that did the deed. So no inheritance for me, cheers AI, didn't need it anyway!

Then there was the time I wanted to tidy up the fonts list on my Ubuntu computer. I just wanted to remove Urdu, Hebrew and however many other fonts that don't have any use for me. So I asked Google and just copied and pasted the Gemini suggestion. Gemini specified command line options so that you could not review the changes, but the text said 'use this as you can review changes'. I thought the '-y' looked off, but I just wanted to do some drawing and was not really thinking. So I typed in the AI suggestion. It then began to remove all the fonts and the window manager, and the apps. It might as well have suggested 'sudo rm -fr /'.

This was my wakeup call. I am sure an AI evangelist could blame me for being stupid, which I freely admit to. However, as a clueless idiot, I have been copying and pasting from Stack Overflow for aeons, to never be tricked into destroying all my work.

My compromise is to allow some fun with cat pictures, featuring my uncle's cat, with Google Banana. This allows me to have a toe in the water.

Recently I went on a course with lots of people with few of them being great intellects. I was amazed at how popular AI was with people that have no background in coding. They have collectively outsourced their critical thinking to AI.

I did not feel the FOMO. However, I am old enough to remember when Word came out. I was at university at the time and some of my coursemates were using it. I had genuine FOMO then. What is this Word tool? I was intimidated that I had this to learn on top of my studies. In time I did fire up Word, to find that there was nothing to learn of note, apart from 'styles', which few use to this day, preferring to highlight text and making it bold or biglier. I haven't used a word processor in decades, however, it was a useful tool for a long time.

Looking back, I could have skipped learning how to use a word processor, to stick to vi, latex and ghostscript until email became the way. But, for its time, it was the tool. AI is a bit like that, for some disciplines, you can choose to do it the hard way, using your own brain, or use the new tools. However, I have been badly burned, so I am waiting it out.


Small Web, Indie Web and Gemini are terminally missing the point. The web in the 90s was an ecosystem that attracted people because of experimentation with the medium, diversity of content and certain free-spirited social defaults. It also attracted attention because it was a new, exciting and rapidly expanding phenomenon. To create something equivalent right now you would need to capture those properties, rather than try to revive old visual styles and technology.

For a while I hoped that VR would become the new World Wide Web, but it was successfully torpedoed by the Metaverse initiative.


There's an element of nostalgia, certainly but it's also a reaction to the overwhelmingly commercial web. Why not build something instead of scrolling through brief videos interspersed with more and more ads that follow you everywhere?

Large companies have helped build the web but they've done at least as much, if not more, to help kill it.


The small web can be a lot of things, but IMO it gets too overrun by the ideologically zealous. One does not have to believe in primitive anarchism to enjoy camping, for example. In general it seems any niche idea on the internet is like candle flame to zealous moths.


Ideological zealots are more or less the only people who hate the modern web so much that they want to quarantine themselves within an entirely different and functionally limited protocol or ecosystem. Everyone else is fine discussing camping in Facebook groups and on Reddit and wherever, maybe just using an ad blocker.


I don't think there's anything terribly modern about a collection of large companies trying to present themselves as the entirety of a given thing (the internet in this case).


I don't think any social media platform has ever actually tried to present themselves as the entirety of the internet.

I don't think anyone actually believes social media platforms comprise the entirety of the internet, either.

But that isn't really what people tend to complain about when they complain about the "modern" web. Mostly it's the complexity of websites and the presence of advertising and JavaScript, the homogeneity of frameworks versus the "quirkiness" of hand-coded HTML, the consolidation of content into platforms (versus, again, hand-coded HTML), and the fact that the web no longer entirely consists of people like themselves. And now AI, of course.

And the fact that every single alt-web is more restrictive than the web, almost universally antithetical to "design" or "creativity" as opposed to pure hypertext, and seems meant to appeal only to the strictly technical mind, bears that out.


It's about capturing the noncommerciality, not the experimentation. Most of the small web sites are just blogs, a solved problem by now, but there's interesting content in many of them.


Which is exactly the point of Gemini.


I'm a dinosaur who bemoans the loss of whatever-it-was we had prior to the mass exploitation and saturation of the web today, so I feel it's my duty to check out Gemini and stop complaining. I'm prepared to trade ease of use or some modern functionality for better content and less of what the internet has become.


Not quite. I think Gemini has deliberately gone for a "text only" philosophy, which I think is very constraining.

The early web had a lot going on and allowed for a lot of creative experimentation which really caught the eye and the imagination.

Gemini seems designed to only allow long-form text content. You can't even have a table, let alone inline images, which makes it very limited even for dry scientific research papers, which I think would otherwise be an excellent use case for Gemini. But it seems this sort of thing is a deliberate design/philosophical decision by the authors, which is a shame. They could have supported full Markdown, but they chose not to (ostensibly to ease client implementation, but there are a squillion Markdown libraries, so that assertion doesn't hold water for me).

It's their protocol so they can do what they want with it, but it's why I think Gemini as a protocol is a dead end unless all you want to do is write essays (with no images or tables or inline links or table of contents or MathML or SVG diagrams or anything else you can think of in Markdown). It's a shame, as I think the client-cert stuff for auth is interesting.


It’s tough but one of the tenets of Gemini is that a lone programmer can write their own client in a spirited afternoon/weekend. Markdown is just a little too hard to clear the bar. Already there was much bellyaching on the mailing list about forcing dependence on SSL libraries; suggesting people rely on more libraries would have been a non-starter

Note that the Gemini protocol is just a way of moving bytes around; nothing stops you from sending Markdown if you want (and at least some clients will render it - same with inline images).


Didn't the creator of the protocol go on a rant when someone made a browser for Gemini that included a favicon?

I can't imagine the backlash if someone tried to normalize Markdown. Isn't the entire point of Gemini that it can never be extended or expanded upon?

Maybe it would be better to create an entirely different protocol/alt web around Markdown that didn't risk running afoul of Gemini's philosophical restrictions?


Yeah, instead someone makes a new and incompatible protocol whenever they want to change it.

> The SmolNet consists of content available through alternative protocols outside the web such as gemini:// gopher:// Gopher+ gophers:// finger:// spartan:// text:// SuperText nex:// scorpion:// mercury:// titan:// guppy:// scroll:// molerat:// terse:// fsp://. There is a summary of the main SmolNet protocols.

- https://wiki.archiveteam.org/index.php/SmolNet


Molerat at least seems to use Markdown but many do seem to be "Gemini but X." I wonder how much use any of those get?


I think a "markdown-web" that uses some of the Gemini approaches for privacy and auth/identity etc would be pretty nice.

Of course, as others have said, we could just use HTML without JavaScript or cookies and we'd be a lot of the way there with 95% less effort but hey in the future we'll probably just query an AI rather than load a web page ourselves.


Given how many people on HN say they like Gemini in principle but wish it weren't so restrictive, some people would use it. All of those people might just be that cross section of HN users, however.


There are images in geminispace, and audio, and (probably) video. It's just not inline. One of constraints of the protocol is that pages cannot load content without your express say-so.


I would like to note that it would be trivial to definitively prove or disprove such things if we had a searchable public archive of the training data. Interestingly, the same people (and corporate entities) who loudly claim that LLMs are creating original work seem to be utterly disinterested in having actual, definitive proof of their claims.


This would be awesome. Even titles and shasums could be enough.

