matthewdgreen's comments

One of the major problems with on-device identifiers is that they must by tied tightly to devices, due to the risks of cloning. This is particularly true for privacy-preserving identifiers. That's why device attestation is so important, because you can't ensure that identity (keys) are locked to a device unless you can verify that the hardware prevents users from extracting keys. The worst part of this is that motivated criminals will certainly figure out how to extract those keys and use them for fraud; it's open-source and open computing that will be destroyed by this.
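As a rough illustration of what "keys locked to a device" means in practice, here is a minimal Kotlin sketch using the standard Android Keystore attestation API. The key alias and the surrounding flow are assumptions for illustration; the Keystore/KeyGenParameterSpec calls themselves are the documented Android API.

```kotlin
// Sketch only: generate a hardware-backed signing key with an attestation
// challenge, then hand the resulting certificate chain to a server for
// verification. The alias "device_identity_key" and the overall flow are
// hypothetical; a real deployment decides what to do with the chain.
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.cert.Certificate

fun generateAttestedKey(serverChallenge: ByteArray): List<Certificate> {
    val spec = KeyGenParameterSpec.Builder("device_identity_key", KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setAttestationChallenge(serverChallenge) // ties the attestation to this request
        .build()
    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore").run {
        initialize(spec)
        generateKeyPair() // the private key never leaves the secure hardware
    }
    val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    // A verifier that trusts the vendor's attestation roots can walk this chain
    // and conclude the key is non-extractable and bound to genuine hardware.
    return ks.getCertificateChain("device_identity_key").toList()
}
```

The whole argument above rests on that non-extractability claim: if the attestation keys leak, the chain proves nothing.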

Yeah, but they aren't.

Google certifies devices that have gone unpatched for the last 10 years, are rooted, and are riddled with malware, because the keys have leaked.

Google knows and still sells the lie.

But you should know better. Google isn't selling actual security; it's just protecting its business.


Google's business is advertising. Right now they don't care whether your phone is "authentic" or secure, because it doesn't cost them money. As AI-enabled bot fraud rises, they will care. Fighting this requires identifying human beings, and that requires trusted devices to be associated with human beings. We're in the foothills still, but look forward and up at where adtech is going.

How is a trusted device associated with a human being? I'm pretty sure the walls of hundreds of bot phones are running trusted Android.

By attaching your government ID to a (single) phone and verifying the human owns it by checking biometrics. You can try this today if you live in one of several US states and have a recent iOS/Android phone. This doesn't stop one real person from attaching their ID to one real phone and then abusing it for botting, but (if implemented well) it limits you to one-real-ID-one-bot-phone.
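A toy server-side sketch of that one-real-ID-one-phone constraint (the storage, salting, and names here are illustrative assumptions, not any vendor's actual scheme):

```kotlin
// Toy sketch, not a real verification service: bind a salted hash of the
// government ID number to at most one attested device key, so a second phone
// presenting the same ID is rejected.
import java.security.MessageDigest

class IdDeviceRegistry(private val serverSalt: ByteArray) {
    private val idToDevice = mutableMapOf<String, String>() // hashed ID -> device key fingerprint

    private fun hashId(idNumber: String): String {
        val md = MessageDigest.getInstance("SHA-256")
        md.update(serverSalt)
        return md.digest(idNumber.toByteArray()).joinToString("") { "%02x".format(it) }
    }

    /** Returns true if this ID is now (or was already) bound to exactly this device. */
    fun bind(idNumber: String, deviceKeyFingerprint: String): Boolean {
        val key = hashId(idNumber)
        return when (val existing = idToDevice[key]) {
            null -> { idToDevice[key] = deviceKeyFingerprint; true }
            deviceKeyFingerprint -> true   // same phone re-registering is fine
            else -> false                  // ID already bound to a different phone
        }
    }
}
```

Note that, exactly as described above, nothing here stops one real person from binding their one real ID to one phone and then running a bot from it.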

Don't hardware identifiers also mean that Google can blacklist your device from vast portions of the internet whenever they feel like it?

Do we know whether this is possible? I'm clueless when it comes to phones, so this is a genuine question.

Only if you need to have the entire application behavior (or at least some trusted confirmation) attested, right? Otherwise, an external USB dongle, tapping a contactless smartcard on a phone etc. could do just fine.

Sure, but then you need to receive an attestation from that external dongle, and/or pre-provision it with an identity (like a national ID smartcard.) It might work in places that distribute this hardware, but it's a crummy UX. I expect that the goal of these systems is to make ID verification a requirement for most routine device usage, sadly, and external dongles will crap that up from a UX perspective.

There is also the problem that most external hardware is less secure than things like Apple's SEP. (But on the other hand, probably more secure than the long tail of cheap Android phones, which use virtualization rather than real hardware.)


> then you need to receive an attestation from that external dongle, and/or pre-provision it with an identity (like a national ID card.)

That's how it works in Germany: You tap your national ID card (as a citizen) or eID card (as a non-citizen) on any NFC-capable iPhone or Android device. I personally much prefer that solution over one that requires a specifically trusted device.

The big gap is trusted user confirmation, though: Users need to see what they sign by tapping their card, and then you're usually back to some form of attestation.

Practically, they also completely botched the rollout; literally everyone I know managed to somehow lock themselves out of their card at the first attempted use (assuming they've even bothered to set it up).


The adtechs want this so they can verify the "human" quality of each user. To do this, they don't want people tapping their government ID on their phones every single time they sign up for Reddit or receive an advertisement. Hence (some derivative of) the ID has to be stored on-device to make the browsing/usage experience seamless.
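One simplified way to read "a derivative of the ID stored on-device" (purely illustrative, not any deployed adtech protocol): after a one-time ID check, the device holds an issuer-provided secret and presents short-lived tokens derived from it instead of the ID itself. A real design would need blind signatures or similar anonymous-credential machinery to prevent linking; the HMAC construction below only sketches the shape of the idea.

```kotlin
// Illustrative only: the device derives a per-site, per-day token from a
// secret issued after a one-time ID verification, so routine sign-ups never
// see the government ID. (A real system would use blind signatures or similar
// to keep the issuer from linking tokens back to the ID.)
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

fun dailyHumannessToken(deviceSecret: ByteArray, site: String, epochDay: Long): String {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(deviceSecret, "HmacSHA256"))
    mac.update("$site:$epochDay".toByteArray())
    return mac.doFinal().joinToString("") { "%02x".format(it) }
}
```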

Fair enough, I can see why not.

To me, it seems like just the right amount of friction, and user expectations can work in favor of privacy here: People will hopefully refuse to tap their ID on their phone for a service where they want to remain completely anonymous, even if the protocol technically might support anonymous assertions.


Open weight models will keep pace because capable open-weight models are China's strategy for preventing a closed takeover of AI by the West.

US megatechs stole copyrighted data to train their hyper-expensive models.

Chinese megatechs stole copyrighted data AND trained their models on derivative / synthetic data that came from the US foundation models.

I’m happy Chinese foundation model trainers were able to use Huawei (homegrown) hardware to train their models (also because having Nvidia dominate that sector is terrible for competition), but if Chinese megatech companies are just deriving their open weights models from US companies, then this is just an IP theft exercise.


Not only this, but the benefit of SMR is based on the possibility that they can be mass-produced at low cost. Until that happens, the benefit doesn’t exist. Solar and batteries and wind have already passed that threshold, but cheap mass-produced SMRs don’t exist yet, even if someone can point to a couple of expensive, bespoke SMRs.

It doesn’t really matter if people on HN or Reddit are in favor of nuclear. At the end of the day, nuclear will get built if someone thinks the cost is worth it over the alternatives. The Internet fan club is mostly irrelevant.

How the public understands what they are (or are not) signing up for is critical, though.

Are you using the same AI engineering tools you were using 2-3 years ago? 1 year ago? I'm not. Without a network effect, capturing revenue is hard.


My use is not relevant; it's not ideal to extrapolate from your own personal habits. Cursor's user volume and growth are what matter.


"But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy."

It is surprisingly easy to do it once someone else has done the work. Increasingly that's the nature of AI-based software engineering: point it at an existing tool and ask it to carefully duplicate features until it has parity. As you pointed out, frontier LLM companies happen to be well positioned to sell the resulting products.


Plus more generally, contact with peers through publishing is good. It is easy to end up with work that does not really advance the state of the art if you’re not making regular trips to convince others that your work is interesting.


Eliezer Yudkowsky has gone so far as to say that it might be ok to kill most of humanity (excepting a "viable reproduction population") to stop AI. If that's not just talk, then this line of reasoning only gives you a few possible modes of action. I would not be worried about the people with Molotov cocktails, but I'd be very worried about bioterrorism.


>Eliezer Yudkowsky has gone so far as to say that it might be ok to kill most of humanity (excepting a "viable reproduction population") to stop AI

That doesn't sound like a non-misleading summary of anything he would say. Do you have a quote or a link?



Those 2 links certainly satisfy my request. Thank you.

My summary of Eliezer's deleted tweet is that Eliezer is pointing out that even if everyone dies except for the handful of people it would take to repopulate the Earth, even that (pretty terrible) outcome would be preferable to the outcome that would almost certainly obtain if the AI enterprise continues on its present course (namely, everyone's dying, with the result that there is no hope of the human population's bouncing back). It was an attempt to get his interlocutor (who was busy worrying about whether an action is "pre-emptive" and therefore bad and worrying about "a collateral damage estimate that they then compare to achievable military gains") to step back and consider the bigger picture.

Some people do not consider the survival of the human species to be intrinsically valuable. If 99.999% of us die and the rest of us have to go through many decades of suffering just for the species to survive, those people would consider that outcome to be just as bad as everyone dying (or even slightly worse since if 100% of us were to die one day without anyone's knowing what hit them, suffering is avoided). I can see how those people might find Eliezer's deleted tweet to be alarming or bizarre.

In contrast, Eliezer cares about the human species independent of individual people (although he cares about them, too).

Also, just because I notice that outcome A is preferable to outcome B does not mean that I consider it ethical to do anything to bring about outcome B. For example, just because I notice that everyone's life would be improved if my crazy uncle Bob died tomorrow does not mean that I consider it ethical to kill him. And just because Eliezer noticed and pointed out what I just summarized does not mean that Eliezer believes that "it might be ok to kill most of humanity to stop AI" (to repeat the passage I quoted in my first comment).


The question was

> How many people are allowed to die to prevent AGI?

He didn’t say “not everyone dying is preferable to everyone dying”. The question was about acceptable consequences from preemptively stopping AGI under his assumption that AGI will lead to the extinction of all humans.

Those are only the same thing under the assumptions that 1) AGI is inevitable without intervention and 2) AGI will lead to the extinction of humanity.

If he believes he is being misunderstood, his “apology” doesn’t actually deny either of the assumptions I identified, and he is widely known to believe them.

In fact, his stated reason for correcting his earlier tweet, that using nuclear weapons is taboo, is an extremely weak excuse. Given the opportunity to save humanity from AGI if that is what you believe, it would be comical to draw the line at first use of nukes.

No, I think Eliezer is trying to come to grips with the logical conclusion of his strident rhetoric.


You have a population of relatively wealthy, scientifically-educated people who believe that AI risk is real and existential. That if they/we don't act, humanity itself might become extinct -- and that this is an unacceptable outcome. Then you have Yudkowsky mooting the possibility that this is basically inevitable (in the absence of global coordination that seems highly unlikely), and suggesting that hyper-violent outcomes might be literally the only way our species survives.

What I am not saying: Yudkowsky intends to exterminate most of humanity.

What I am saying: this is a dangerous environment, and these kinds of statements will be seen as a call to action by a certain kind of person. TFA is literal proof of the truth of that statement. Moreover: within the community there exist trained experts who might be able to, at the cost of millions of lives, plan an attack that could (plausibly) delay AI by many years.

The danger of this argument is that someone who reveres Yudkowsky might take his arguments to the logical conclusion, and actually do something to act on them. (Although I can't prove it, I also think Yudkowsky knows this, and his decision to speak publicly should be viewed as an indicator of his preferences.) That's why these conversations are so dangerous, and why I'm not going to give Yudkowsky and his folks a lot of credit for "just having an intellectual argument." I think this is like having an intellectual discussion about a theater being on fire, while sitting in a crowded theater.


I said something to the same effect in a sibling comment to yours.

> someone who reveres Yudkowsky might take his arguments to the logical conclusion

What about Eliezer himself? Does he not believe his own rhetoric? Certainly if he believes the future of the human race is at stake it demands more action than writing a book about it and going on a few podcasts.

I think the whole thing is a bit like the dog who finally caught the car. It’s easy to use this strident rhetoric on an Internet forum, but LessWrong isn’t real life.


If I ran the FBI I would be very gently keeping tabs on the most fervent (and technically capable) anti-AI groups. Unfortunately I don't think anyone is currently running the FBI. If I was tightly connected to folks in these communities, I would be keeping tabs on my friends and making sure they're not getting talked into anything crazy.


That's like pointing out that Rhode Island isn't designed to be a self-sufficient grid.


We had targeted policies under Biden to increase US production of grid components. This entailed invoking the DPA and setting aside millions for manufacturing improvements. Trump paused all that and created blanket tariffs that don’t seem like they’re designed to onshore US manufacturing of these very specific components but do increase all the material costs. This is not an easy thing to fix with dumb tariffs, and it’s really easy to make everything worse.


I’m just noting that the article doesn’t have anything specific of value to say about tariffs. This is not directed at you but rather at the reporters: I can read general opinions on tariffs or political parties anywhere; what I need here are details relevant to transformers, not more opinions to ignore.

