AI usage is getting expensive since Anthropic et al are turning the screws, and that money has to come from somewhere. Reducing AI usage is blasphemy of course, so cutting headcount is the only path forward.
Is there no one figuring out ROI on AI spend vs human payroll? I can't make sense of this idea that companies are firing productive employees because they're spending too much money on AI that isn't doing anything for them... they still hope chatbots will be worth it in the future?
It should be illegal to host insecure services, especially when you're dealing with PII. Breaches keep happening and nobody gives a fuck, because the worst that'll happen is you might lose a handful of customers and buy some "credit monitoring".
Incidents like this should be followed by an audit and charges being laid. Send corp officers to jail for negligent security failures. If you can go to jail for accounting fraud, you should be able to go to jail for cybersecurity-promises-fraud.
They claim to be compliant with a number of security standards [1]. I would love to see a postmortem audit of how much of this they actually implemented.
I don't think that criminal negligence is the most helpful legal tool for incentivizing improved security. It's too hard to prove negligence.
Instead, there should be standard civil penalties for leaking various degrees of PII paid as restitution to the affected individual. Importantly, this must be applied REGARDLESS of "certification" or whether any security practices were "incorrect" or "insufficient". Even if there's a zero-day exploit and you did everything right, you pay. That's the cost of storing people's secrets.
This would make operating services whose whole "thing" is storing a bunch of information about individuals (like Canvas) much more expensive. Good! It's far too cheap to stockpile a ticking time bomb of private info and then walk away paying no damages just because you complied with some out-of-date list of rules or got the stamp of approval from a certification org that's incentivized to give out stamps of approval.
And this strict liability will come with an expectation of insurance. The insurance policies will necessitate audits, which will actually improve security.
It's not a popular opinion but I agree. I live in a country that has a very extensive principle of public records, and oftentimes these leaks disclose much less than you would get by simply calling the authorities and asking. Now, whether that's good or bad is a different story.
This is a solved problem in pretty much every other domain of life - if you are following best practices but something that wasn't reasonably foreseeable happens, then you're fine, but if the bad thing happens as a result of negligence then you are in trouble.
Criminal law isn't about making things alright for the victim. That's what insurance is for.
Even if you leave your door unlocked, if someone walks in and steals your stuff, it's a crime. The state has an interest in prosecuting crimes even if the victim didn't do everything they could to prevent it.
The company is not the victim here. Its users are. [I suppose my previous comment was a bit ambiguous - I meant something bad happens to someone else, not to yourself]
A better version of your analogy would be if your landlord failed to repair your front door in a reasonable period of time and as a result someone walked in and stole your stuff. Yes, the thief is the primary responsible party, but the landlord's negligence in maintaining the property probably also exposes them to some liability.
P.s. This is neither here nor there, but restitution is a part of criminal law.
"Best practice" in cybersecurity is largely vendor-driven with little to no independent empirical validation.
That standard is likely to lock people into buying some pretty bad software, but it does little to ensure that they're running reasonably secure systems.
I like to relate it to operating an automobile. You can follow every traffic law and still be liable in an accident, because you owned the vehicle that caused the damage. This is why you have insurance.
The equivalent analogy is charging lock/door/drywall/timber makers and suppliers for lapses if a thief entered the house by picking a lock or drilling/sawing through the wall.
No, it’s more like me storing my money at a bank, and then someone stealing from the bank, who told me they were secure. And turns out they had shitty locks.
We do not generally hold victims of crimes accountable for failing to defend themselves adequately.
If someone threatens you with a knife and gets you to hand over your wallet, your bank doesn’t get to say ‘you should have hired better security’ when the mugger uses your credit card.
The problem here is the mugger, and that’s who the state goes after. Even if the victim walked into a bad area. Even if the victim could have defended themselves.
Same with ransomware attackers. They are the problem. We might encourage potential victims to behave in ways that make it less likely for them to be targeted. But if they are targeted, we should still focus our societal disdain on the criminal not the victim.
While I’m sympathetic to this argument (it would be great if the internet were a safe place), in practice this thinking leads to governments trying to impose legislation that hurts legitimate uses but does little to protect from the long tail of harm. There’s little that can be done about North Korean state sanctioned cybercrime without a great firewall.
If the perpetrators of this hack were caught and in a developed country, they would certainly be prosecuted for their crimes and not get off light (especially if any data is actually leaked).
I think states should be able to do better than a ‘great firewall’ to defend their domestic net infrastructure from malicious foreign actors.
But I do think it should be much more states’ responsibility to make their domestic network safe for citizens and businesses and institutions to operate.
Your analogy portrays gravity as a thing that buildings cannot be built to withstand. There are plenty of structurally sound buildings and while there are plenty of secure apps the problem is there’s no incentive to build the latter.
My analogy would be: of course buildings have to be built to withstand gravity. That’s a natural part of the world that cannot be eliminated.
Buildings are built to stand up to natural forces. But not to, for example, the threat of a malicious actor crashing a plane into them. That isn’t typically considered a reasonable thing to architect civilian infrastructure for.
When you build IT infrastructure, likewise, you should build it to handle the natural forces it will be exposed to. But are you as accountable for securing it against the acts of malicious parties as a structural engineer is for securing a building against gravity, or only as accountable as the structural engineer is for securing that building against terrorists?
If Boeing claimed a plane was airworthy, but it crashed because basic engineering controls were skipped, we have collectively put our faith in the NTSB to preserve evidence, run an independent technical investigation, etc. There is no such authority for software - most security auditors (SOC2, HITRUST, etc) are just looking at self-reported data.
Just take a look at the recent Epic vs. Health Gorilla lawsuit to see how nonexistent the protection is around exchanging your medical records, one of the most sensitive types of PII.
People who haven’t been hacked just haven’t been looked at. If someone wants to hack you, they will hack you. It’s really unfortunate that people have this level of confidence in their ability.
These problems will continue as long as it is legal to operate in an unsafe way.
We've learned this in every other industry, but we can't seem to accept it in software. One of my hopes for AI is that it reduces the cost to behave responsibly to a level where this absurd resistance to acting responsibly erodes.
Agreed it's a weird comparison, but I'd argue SG&A needs to come out of gross margin too for a fair comparison. You need a warehouse/staff/utilities/etc to sell merchandise, you need nothing to sell a membership (whether it's worth anything is another question of course).
In their 2025 filing, gross margin on merchandise was $30B, but SG&A cost $25B (with membership fees at $5.3B).
Note that $2.6B of those membership fees will go back to members as membership rewards, which is interesting too.
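A quick back-of-envelope check of that comparison, using only the figures quoted above (treating all SG&A as a cost of the merchandise side is my own simplifying assumption):

```python
# Back-of-envelope comparison from the 2025 figures quoted above.
# Assumption: all SG&A is attributed to merchandising, since selling
# a membership needs comparatively little overhead.
merch_gross_margin = 30.0    # $B, gross margin on merchandise
sga = 25.0                   # $B, SG&A
membership_fees = 5.3        # $B, membership fee revenue
membership_rewards = 2.6     # $B, fees effectively returned as rewards

merch_after_sga = merch_gross_margin - sga              # ~$5.0B
membership_net = membership_fees - membership_rewards   # ~$2.7B

print(f"merchandise margin after SG&A: ${merch_after_sga:.1f}B")
print(f"membership fees net of rewards: ${membership_net:.1f}B")
```

On those numbers the merchandise business nets roughly $5B after SG&A, versus roughly $2.7B of membership fees after rewards.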
Wait for the EU AI Act to require text watermarking in August. It will work, and it will be effective -- not because it'll be impossible to circumvent, but because all the big SaaSes will have to adopt it, and the hurdle of stripping it back out will filter out the vast majority of the sloppers.
No, they did not. Careful of falling for the psychosis.
> This finding was AI-assisted, but began with an insight from Theori researcher Taeyang Lee, who was studying how the Linux crypto subsystem interacts with page-cache-backed data.
You appear to want to die on the hill of "This vulnerability would never have been found if we lived in a world without LLM AI" which is a very strange hill to die on.
There's no question that we live in the world where LLM AI was involved in finding the copy fail vulnerability at this specific time, and it's completely normal for people to see a vulnerability and then look closer and find related vulnerabilities or a deeper root cause. But there's no need to adopt an extreme "without AI LLM we don't find these vulnerabilities" position.
It's weird to say I want to "die on this hill" because that's not even something I believe. There was nothing especially difficult about this particular vulnerability. My only observation is that nobody found it before; then an LLM security firm went out looking for Linux LPEs, and thus it was discovered.
That is a very difficult fact pattern to which to attach the conclusion "LLMs have sabotaged security research" (my paraphrase).
Well.. every new vulnerability is one that nobody found before.
Otherwise, it wouldn't be classified as "new".
--
Edit:
I think LLMs are very useful here.
When a researcher spots something funny, instead of spending two days reading and testing, they can fire up an LLM and have it read all the code leading up to that point in ~30 minutes.
The finding started with human intuition and was assisted by an LLM. You can yell "AI sec firm" 1000 times. A human got it started. You shouldn't die on that hill.
Of the MANY things I've completed in the last year that I would never have done without an LLM, a human got 100% of them started. The ideas were mine in every case.
But it is still a fact that I have been taking on all sorts of tasks I would never have taken on if I didn't have power tools.
Look at the FedRAMP requirements around integrity protection, then look at how massive the list of compliant products is. I promise, pretty much everyone in regulated environments is. It's so prevalent Azure is even pushing a turnkey solution for k8s https://learn.microsoft.com/en-us/azure/aks/use-azure-linux-...
Nothing about fedramp requires that you enable any of the features you're talking about. Linking to a public preview of an Azure product that doesn't even run with enforcement on is not great supporting evidence.
If you have much experience with fedramp, and it sounds like you do, perhaps you might agree that it is a huge list of things that superficially indicate doing something, without actually doing anything. As the documentation for IPE freely admits, it has no protective benefits because it is unaware of anonymous executable regions.
It sure has limitations, but "no protective benefits" is pretty wrong. In a real world example, if your containerized application has an RCE, you're preventing the attacker from executing binaries they tampered with or down/up-loaded. Combined with minimal distroless containers, it's a very effective attack surface reduction strategy, and works much better than the legacy scan-occasionally integrity-checking methods (rkhunter et al).
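To make the mechanism concrete, here is a minimal, hypothetical sketch of the post-RCE step such a policy is meant to stop. The paths and payload are made up; the point is only that a binary written at runtime carries none of the trusted volume properties an enforcing integrity policy checks for, so its execution is what gets denied (typically surfacing as a permission error):

```python
# Hypothetical post-RCE behaviour inside a container: drop a binary onto a
# writable path and try to run it. With no enforcement it runs; under an
# enforcing execution-integrity policy the exec is denied, because the file
# isn't backed by a trusted (e.g. verity-protected) volume.
import os
import shutil
import subprocess

payload = "/tmp/dropped_tool"                               # written at runtime, untrusted
shutil.copy(shutil.which("true") or "/bin/true", payload)   # stand-in for a downloaded tool
os.chmod(payload, 0o755)

try:
    subprocess.run([payload], check=True)
    print("payload executed (no enforcement)")
except OSError as err:
    print(f"exec denied: {err}")                            # expected under enforcement
```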
Everyone cranks out endless pages of slop, that everyone else then has to ingest. Anthropic collects a fee from all of you and is the only winner here.
I'm looking forward to the impending crash when the AI providers actually start charging what it costs to run these models. It's going to be a bloodbath, and it's going to be cathartic as fuck.