They are; there's a video on YouTube you can find where they interview someone with that job, and they test 10,000 a day. Then they mention that they go home and vape some more.
Foisting the responsibility for the extremely risky transport industry onto the road developers would certainly prevent all undesirable uses of those carriageways. Once they are at last responsible for the risky uses of their technology, like bank robberies and car crashes, the incentive to build these dangerous freeways evaporates.
I think this is meant to show that moving the responsibility this way would be absurd because we don't do it for cars but... yeah, we probably should've done that for cars? Maybe then we'd have safe roads that don't encourage reckless driving.
But I think you're missing their "like bank robberies" point. Punishing the avenue of transport for illegal activity that's unrelated to the transport itself is problematic. I.e., people who are driving safely, but using the roads to carry out bad non-driving-related activities.
It's a stretched metaphor at this point, but I hope that makes sense (:
It is definitely getting stretchy at this point, but there is the point to be made that a lot of roads are built in a way which not only enables but encourages driving much faster than may be desired in the area where they're located. This, among other things, makes these roads more interesting as getaway routes for bank robbers.
If these roads had been designed differently, to naturally enforce the desired speeds, they would be safer in general and, as a side effect, less desirable getaway routes.
Again I agree we're really stretching here, but there is a real common problem where badly designed roads don't just enable but encourage illegal and potentially unsafe driving. Wide, straight, flat roads are fast roads, no matter what the posted speed limit is. If you want low traffic speeds you need roads to be designed to be hostile to high speeds.
I think you are imagining a high-speed chase, and I agree with you in that case.
But what I was trying to describe is a "mild mannered" getaway driver. Not fleeing from cops, not speeding. Just calmly driving to and from crimes. Should we punish the road makers for enabling such nefarious activity?
(it's a rhetorical question; I'm just trying to clarify the point)
Which, in the case of digital replicas that can impersonate real people, may be worth considering. Not blanket legislation as proposed here, but something that signals the downstream risks to the developer to prevent undesired uses.
Then only foreign developers will be able to work with these kinds of technologies... the tools will still be made, they'll just be made by those outside jurisdiction.
Unless they released a model named "Tom Cruise-inator 3000," I don't see any way to legislate that intent that would provide any assurances to a developer that their misused model couldn't result in them facing significant legal peril. So anything in this ballpark has a huge chilling effect in my view. I think it's far too early in the AI game to even be putting pen to paper on new laws (the first AI bubble hasn't even popped, after all) but I understand that view is not universal.
I would say a text-based model carries a different risk profile compared to video-based ones. At some point (now?) we'd probably need to have the difficult conversation of what level of media-impersonation we are comfortable with.
It's messy because media impersonation has been a problem since the advent of communication. In the extreme, we're sort of asking "should we make lying illegal?"
The model (pardon) in my mind is like this:
* The forger of the banknote is punished, not the maker of the quill
* The author of the libelous pamphlet is punished, not the maker of the press
* The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop
In this world view, how do we handle users of the magic bag of math? We've scarcely thought before that a tool should police its own use. Maybe, we can say, because it's too easy to do bad things with, it's crossed some nebulous line. But it's hard to argue for that on principle, as it doesn't sit consistently with the more tangible and well-trodden examples.
With respect to the above, all the harms are clearly articulated in the law as specific crimes (forgery, libel, defamation). The square I can't circle with proposals like the one under discussion is that they open the door for authors of tools to be responsible for whatever arbitrary and undiscovered harms await from some unknown future use of their work. That seems like a regressive way of crafting law.
> The creep pasting heads onto scandalous bodies is punished, not the author of Photoshop
In this case the guy making the images isn't doing anything wrong either.
Why would we punish him for pasting heads onto images, but not punish the artist who supplied the mannequin of Taylor Swift for the music video to Famous?†
Why would we punish someone for drawing us a picture of Jerry Falwell having sex with his mother when it's fine to describe him doing it?
(Note that this video, like the recent SNL "Home Alone" sketch, has been censored by YouTube and cannot be viewed anonymously. Do we know why YouTube has recently kicked censorship up to these levels?)
> then we'd have safe roads that don't encourage reckless driving.
You mean like speed limits, driver's licenses, seat belts, vehicle fitness checks, and dedicated police for the roads?
I still can't see a legitimate use for anyone cloning anyone else's voice. Yes, satire and fun, but also a bunch of malicious uses as well. The same goes for non-fingerprinted video gen. It's already having a corrosive effect on public trust. Great memes, don't get me wrong, but I'm not sure that's worth it.
Creative work has obvious applications. e.g. AISIS - The Lost Tapes[0] was a sort of Oasis AI tribute album (the songs are all human written and performed, and then the band used a model of Liam Gallagher's mid 90s voice. Liam approved of the album after hearing it, saying he sounded "mega"). Some people have really unique voices and energy, and even the same artist might lose it over time (e.g. 90s vs 00s Oasis), so you could imagine voice cloning becoming just a standard part of media production.
As a former VFX person, I know that a couple of shows are testing out how/where it can be used. (Currently it's still more expensive than trad VFX, unless you are using it to make base models.)
Productivity gains in the VFX industry over the last 20 years have been immense. (I.e., a mid-budget TV show has more, and more complex, VFX work than most movies from 10 years ago, and it looks better.)
But does that mean we should allow any bad actor to flood the floor with fake clips of whatever agenda they want to push? No. If I as a VFX enthusiast get fooled by GenAI videos (pictures are a done deal; it's super hard to stop reliably), then we are super fucked.
You said you can't see a legitimate use, but clearly there are legitimate uses (the "no legitimate use" idea is used to justify bad drug policy for example, so we should be skeptical of it). As to whether we should allow it, I don't see how we have a choice. The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones, and eventually today's training supercomputers will be tomorrow's commodity. The whole idea of AI "fingerprinting" is bad anyway; you don't fingerprint that something is inauthentic. You sign that it is authentic.
> The models are already out there. Even if they weren't, it becomes cheaper every year to train new ones,
Yes, let's just give up as bad actors undermine society, scam everyone, and generally profit off us.
> You sign that it is authentic.
Signing means you denote ownership. A signed message means you can prove where it comes from. A service should own the shit it generates.
Which is the point, because if I cannot reliably tell what is generated, how is a normal person able to? Being able to provide a mechanism for the normal person to verify is a reasonable ask.
You put the bad actors in prison, or if they're outside your jurisdiction, and they're harming your citizens, and you're America, you go murder them. This has to be the solution anyway because the technology is already widely available. You can't make everyone in the world delete the models.
Yes, signing is how you show something is authentic. Like when the Hunter Biden email thing happened, I didn't understand (well, I did) why the news was pretending we have no way to check whether they're real or whether the laptop was tampered with. It was a Gmail account; they're signed by Google. Check the signatures! If that's his email address (presumably easy enough to corroborate), done. Missed opportunity to educate the public about the fact that there's all sorts of infrastructure to prove you made/sent something on a computer.
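For the curious, checking that kind of signature is not exotic. Here's a minimal sketch in Python using the dkimpy library ("message.eml" is a made-up file name, and I'm assuming the saved raw message still has its original DKIM-Signature header intact):

```python
# Minimal sketch: verify the DKIM signature on a saved raw email using the
# dkimpy library (pip install dkimpy). "message.eml" is a made-up file name;
# the message must still contain its original DKIM-Signature header.
import dkim

with open("message.eml", "rb") as f:
    raw_email = f.read()

# dkim.verify() looks up the signing domain's public key in DNS,
# re-canonicalizes the signed headers/body, and checks the signature.
if dkim.verify(raw_email):
    print("Valid DKIM signature: headers/body unchanged since the provider signed it")
else:
    print("No valid DKIM signature: altered after signing, or never signed")
```

A passing check only proves the message went through the signing domain's servers unchanged; it says nothing about who was sitting at the keyboard, which is roughly the corroboration step mentioned above.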
> open their password manager which also might need you to authenticate, type in their master password, search for the name of the said website, copy the password, paste it in
This is one way to guarantee you'll eventually fall for a phishing attack. Are we really running URL-unaware password managers in the year 2026?
>Are we really running URL-unaware password managers in the year 2026?
URL-aware browser plugins for autofilling passwords can also make people _more_ susceptible to phishing.
Password manager plugins sometimes not working correctly changes the Bayesian probabilities in the mind, such that username/password fields that remain unfilled become normal and expected for legitimate websites. If that happens enough, it inadvertently trains sophisticated, computer-literate users to lower their guard when encountering true phishing websites in the future. I wrote more on how this happens to really smart technical people: https://news.ycombinator.com/item?id=45179643
Password browser plugins being imperfect can simultaneously increase AND decrease security because of interactions with human psychology.
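To put some toy numbers on that "Bayesian probabilities" point (all the rates below are invented purely for illustration): the more often autofill fails on legitimate sites, the less an autofill failure tells you.

```python
# Toy Bayes update with invented rates, only to illustrate the point above.
# Assumes autofill always fails on a phishing site (no saved credentials there).
def p_phish_given_no_autofill(p_phish: float, p_fail_on_legit: float) -> float:
    p_fail = p_phish * 1.0 + (1 - p_phish) * p_fail_on_legit
    return p_phish / p_fail

base_rate = 0.001  # pretend 1 in 1000 login pages you land on is phishing

# If autofill almost never breaks on legitimate sites, a failure is a loud alarm:
print(p_phish_given_no_autofill(base_rate, 0.002))  # ~0.33

# If it breaks on 1 in 10 legitimate sites, the same failure barely means anything:
print(p_phish_given_no_autofill(base_rate, 0.10))   # ~0.01
```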
Even if autofill breaks, the moment it does is exactly when a security-aware person actually reads the URL they're on, rather than start copy-pasting like it's the wild west.
> autofilling passwords can also make people _more_ susceptible to phishing
No, it doesn't. What it does is generally make people _less_ susceptible to phishing, but the moment you stop paying attention when autofill breaks is the moment you can STILL get phished. In 90% of cases, though, the autofill will HELP you avoid getting phished.
What an absolutely bananas thing to say, that autofilling passwords makes people more susceptible to phishing; it's completely wrong and borderline harmful to spread things like this.
It can also not "break" at all: it autofills your credentials, and on submission the data ends up going to the attacker (see my other comment on DOM-based clickjacking).
> The new technique detailed by Tóth essentially involves using a malicious script to manipulate UI elements in a web page that browser extensions inject into the DOM -- for example, auto-fill prompts, by making them invisible by setting their opacity to zero
The website is compromised, all bets are off at that point. Of course a password manager, regardless of how good it is, won't defeat the website itself being hacked before you enter your credentials.
That's not a "hijack of autofill", it's a "attacker can put whatever they want in the frontend", and nothing will protect users against that.
And even if that is a potential issue, using it as an argument for why someone shouldn't use a password manager feels like completely missing the larger picture here.
I never said someone should not use a password manager.
I'm pointing out that password manager autofill can be used in an attack without the person's knowledge.
The site itself does not have to be compromised btw, this could come through the device itself being compromised or a poisoned popup on a website without referrer checks. There are probably quite a few ways I haven't considered to be able to get this to work.
I don't think your other comment supports your assertion. I've experienced Bitwarden failing to auto-fill due to quirks on websites, but I've never seen it fail to identify the domain correctly.
You link to Bitwarden's issues mentioning autofill and while it's true that autofill might break, if you click on the extension icon it's going to present you with a list of credentials for the current domain and give you options to quickly copy the username and password to your clipboard.
If that list is empty then I'm immediately put on high alert for phishing, but so far it's always been due to the website changing its URL/domain. I retrace my steps, make sure I'm on the right domain, then I have to explicitly search for the old entry and update it with the new URL.
That said, I've seen people do: Empty account list -> The darn password manager is misbehaving again -> Search and copy the password. I wouldn't consider those people to be sophisticated users since they're misunderstanding and defying the safety mechanisms.
Wrong. If my password manager doesn't auto-fill, I am immediately far more wary. If I didn't have any URL matching in the password manager, then I would very quickly stop paying close enough attention to the URL, because I'd have to do it too frequently.
It’s also an issue that extensions like 1Password are _too_ URL-aware: until recently it used heuristics and ignored subdomains when matching credentials. This meant that we used to get a list of almost a hundred options when logging into our AWS infrastructure, no matter which actual domain we were on. Someone could have used this vulnerability as part of a phishing campaign.
> extensions like 1Password are _too_ URL-aware, until recently it tried to use heuristics and ignore subdomains for matching credentials
I've used 1Password for years (Linux+Firefox though, FWIW), and this never happened to me or our family. I did discover though that the autofill basically went by hierarchy in the URI to figure out what to show, so if you specify "example.com" and you're on "login.example.com", you'll see everything matching "*example.com" which actually is to be expected. If you only want to see it on one subdomain, you need to specify it in the record/item.
That it ignored the subdomains fully sounds like it was a bug on your particular platform, because 1Password never did that for me, but I remember being slightly confused by the behavior initially, until I fixed my items.
> 1Password currently only suggests items based on the root domain. I can see the value of having 1Password suggest only exact matches based on their subdomain, especially for the use case you have described.
> As it currently stands, 1Password only matches on the second level domain (i.e. sample.com in your example). While I can't promise anything, this is something we've heard frequently, so I'll share your thoughts with the team.
Now it is:
> You’ll see the item as a suggestion on any page that’s part of the website, including subdomains. The item may also be suggested on related websites known to belong to the same organization.
It's that second sentence which is the problem: by being "smart", they "suggested" items from one AWS domain that ought never to have been suggested on another, unrelated AWS domain.
I work in a company where I have two okta accounts (because hey, why not) on two .okta.com subdomains.
Bitwarden _randomly_ messes up the two subdomains, and most of the time (but not always, which actually seems strange) it fills the form with the wrong password. I don’t know why. I know that there is an option to make it stricter on domain matching, but you can’t configure it on a per-item basis, only for the whole vault.
Every browser-based Bitwarden client I have used has the option to choose the autofill behavior on single items as well as the global default. Find the login item, click edit, scroll to autofill options, where each URI is listed with a gear icon next to it. Click the gear and select the appropriate match type.
For the absolute majority of use cases, "host" should be the default, but I have found uses for both "base domain" and "regular expression" in some special cases.
Normal browser extension Bitwarden Ctrl-Shift-L autofill defaults to the most recently used entry when there are multiple matches, afaik.
You can indeed configure it on a per-item basis. The vault-wide setting you found is just the default for ones that don’t have an override set. Click on the domain/url matching setting in the individual credential and you can change it to exact host match.
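For anyone wondering what "host" vs "base domain" matching actually changes, here is an illustrative Python sketch (not the real logic of Bitwarden or 1Password; the base-domain split below is deliberately naive and ignores the public suffix list):

```python
# Illustrative sketch of the two match strategies discussed above; not any
# password manager's actual implementation.
from urllib.parse import urlsplit

def host(url: str) -> str:
    return urlsplit(url).hostname or ""

def base_domain(hostname: str) -> str:
    # Naive: keeps the last two labels, which is wrong for e.g. .co.uk,
    # but good enough to show the behavior.
    return ".".join(hostname.split(".")[-2:])

def matches(saved_url: str, current_url: str, mode: str = "host") -> bool:
    if mode == "host":          # exact hostname match
        return host(saved_url) == host(current_url)
    if mode == "base_domain":   # anything under the same registrable domain
        return base_domain(host(saved_url)) == base_domain(host(current_url))
    raise ValueError(mode)

saved   = "https://acme-prod.okta.com/login"
current = "https://acme-dev.okta.com/login"

print(matches(saved, current, "host"))         # False: the two tenants stay separate
print(matches(saved, current, "base_domain"))  # True: both collapse to okta.com, hence the mix-ups
```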
Which is a legitimate concern, since they are a gaping hole in security and isolation. Visiting a website should be treated like a phone call from the bank: if you get called/emailed, you don't follow the information there, but call back / visit the site yourself, e.g. from bookmarks, or copy the URL from the password manager.
I am now wondering if Safari's integration with the system-wide password manager is similar to having a 1Password browser extension installed in a chromium browser
Look up DOM-based clickjacking. It will "autofill" the field, but on submission it sends the data to an attacker.
"The new technique detailed by Tóth essentially involves using a malicious script to manipulate UI elements in a web page that browser extensions inject into the DOM -- for example, auto-fill prompts, by making them invisible by setting their opacity to zero.
The research specifically focused on 11 popular password manager browser add-ons, ranging from 1Password to iCloud Passwords, all of which have been found to be susceptible to DOM-based extension clickjacking. Collectively, these extensions have millions of users."
""All password managers filled credentials not only to the 'main' domain, but also to all subdomains," Tóth explained. "An attacker could easily find XSS or other vulnerabilities and steal the user's stored credentials with a single click (10 out of 11), including TOTP (9 out of 11). In some scenarios, passkey authentication could also be exploited (8 out of 11).""
Yes, we should run URL-unaware managers, but nearly no one understands security, especially in the browser. Let's look at the permissions requested by the #1 manager in Firefox (Authenticator):
Input data to the clipboard
Access your data for sites in the dropboxapi.com domain
Access your data for www.google.com
Access your data for www.googleapis.com
Access your data for accounts.google.com
Access your data for graph.microsoft.com
Access your data for login.microsoftonline.com
Yep! And #2 (2FAS Auth):
Display notifications to you
Access browser tabs
Access browser activity during navigation
Access your data for all websites
Even better, maybe at some point web browsers can get their sh* together and build a better permission system (and not just disable functions like Manifest V3 does). For now, the majority of people trust opaque organizations shoving unknown code at them, which runs with way too many permissions on their computers.
Talking about unknown code, there is a lot of work to be done on reproducible builds, as anything touching the web has nearly nothing in that regard.
That's a very smug take, especially when you encounter websites every day that don't autofill for whatever reason (as another poster already showed with some examples), or, in my case, the 1Password extension in Safari failing to connect to the main 1Password daemon, or a number of other issues that make this still commonplace in 2026.
And that's for me, a technical user using a password manager.
I also find the 1Password browser (Safari) extension to be pitifully poor. But there's a neat workaround: set up a hotkey for 'Show Quick Access'. I use Ctrl+Opt+\.
This pops up 1Password's overlay but it is still URL-aware. I find it works almost universally. It'll show you what it's going to fill: just hit Return and it'll be done.
It doesn't even care what browser you're in. Works across the lot. Of course it isn't fully integrated so Passkeys won't work.
I'm using Apple's Password Manager (native app on iOS & macOS), but didn't install its browser extension that can do autofill because for me it wasn't as convenient (it has a bad UX, unreliable autofill, etc.)
So, when I'm prompted to log in somewhere, I open the password manager and repeat the steps you just mentioned. It does add extra steps to the process, but I don't think it makes it less safe than having an autofill extension, which requires a ton of permissions and is more prone to compromises. And yes, my manual method also means I have to rely on me being aware of the URLs I'm on, but I usually bookmark my main services, so it's working fine for me this way. I also treat all emails as spam and/or an attack unless I verify them by the domain, and whether I had just recently requested to log in or requested a password change, etc.
At the end of the day, it boils down to us paying attention to every action we take, regardless of the measures we take, as new and different methods are being deployed to own us every day.
Behavioral (invisible) analytics alone is the secret trillion dollar industry that online advertisers want to distract you from by focusing on the morality of ad blocking.
A good blocker should block many of those scripts too, but there's no stopping server-side analytics at scale.
It also neglects that car companies purposely made cars extremely unsafe while chasing profits.
The only reason we have any regulations and safety standards for cars is because of one person leading the charge: Ralph Nader. You know what companies like Ford, GM, Chrysler tried to do after he released "Unsafe at any speed?" Smear his name in a public campaign that backfired.
Car companies had to be dragged kicking and screaming to include basic features like seatbelts, airbags, and crumple zones.
The responses to the rest of the questions in the survey indicate (to me) that sentiments on AI, which is forced onto people more often than not, are largely negative.
> Close to 2/3 Americans also believe in magic so I'm not sure what these studies are supposed to tell us.
I think you're missing the point, as are many other comments on this post saying effectively, "These people don't even understand how AI works, so they can't make good predictions!"
It's true that most people can't make accurate predictions about AI, but this study is interesting because it represents people's current opinion, not future fact.
Right now, people are already distrustful of AI. This means that if you want people to adopt it, you need to persuade them otherwise. So far, most people's interactions with AI are limited to cheesy fake internet videos, deceptive memes, and the risk of shrinking labor demand.
In their short tenure in the public sphere, large language models have contributed nothing positive, except for (a) senior coders who can offload part of their job to Claude, and (b) managers, who can cut their workforce.
Yet this is a primary goal of AI. The problem is that, the way the dominant economic system is structured, a reduction in said demand increasingly leads to a societal crash.
> What I'm wondering is, why couldn't the AI generate this solution? And implement it all?
My read of the blog post is that is exactly what happened, and the human time was mostly spent being confused why 40MB/s streams don't work well at a coffee shop.
This frames the issue in a fundamentally incorrect way.
Since the dawn of pseudonymous communication, politicians have been trying to get their nasty little claws into it. See the Clipper Chip in the '90s. They've tried many avenues to deanonymize and centralize. Going after the parents is just their latest - they've discovered they could use convincing language like this to trick a bunch of people who previously had no reason to care about The Internet into suddenly "realizing" oh gosh, it's scary out there, what can we do to help.
Unfortunately their latest tactic is working. They figured out how to recruit a (possibly) well-intentioned bloc into supporting efforts that undermine privacy in an irreversible way.
> Because of our industry’s refusal to take those concerns seriously, we lost our voice,
Fighting against demands to censor, unmask, and neuter the closest thing we've got to a global platform of freedom is a valiant effort. Not entertaining these bureaucrats isn't some moral failing of our industry, any more than ignoring a persistent busker on the street entitles him to your money just because some uninvolved observer has arbitrarily decided he's repeated his demand often enough that it's starting to make sense, since the victim hasn't yelled back with a good enough argument against it.
In other words: yep, still the parents' job, yep, internet was still there when I grew up, yep, I turned out fine, yep, politicians have been trying to take away our privacy for 30 years (and unfortunately, they're finding more creative and convincing ways to disguise it). Hint: it's never about the kids
Well yes it is. It is about both the cover problem (child safety), and the ulterior motive (surveillance, control).
And not taking the reasonably concerning cover problem seriously, by finding sensible solutions, both leaves it festering unsolved in its own right, and growing in usefulness as a cover problem.
Note that there is also "censorship" (!) - `gag_factor` - even in this free thought paradise. The lesson is that no matter your scale, suppressing certain content is necessary to prevent low quality posts and spam from turning your site into a swamp.
Correct, it is not personalized. So we need a different word than 'algorithmic'. People keep saying that word when they want to "ban" a certain kind of math. But they should at least be particular about what they don't like (sort your friends' posts chronologically is also a personalized algorithm, after all..)