
People get super confused about the differences between abuse prevention, information security, and cryptography.

For instance, downthread, someone cited Kerckhoffs's principle, which is the general rule that a cryptosystem should remain secure even if everything about it except the key is available to attackers. That's a principle of cryptography design. It's not a rule of information security, or even a rule of cryptographic information security: there are cryptographically secure systems that gain security through the "obscurity" of their design.

If you're designing a general-purpose cipher or cryptographic primitive, you are of course going to be bound by Kerckhoffs's principle (so much so that nobody who works in cryptography is ever going to use the term; it goes without saying, just like people don't talk about "Shannon entropy"). The principle produces stronger designs, all things being equal. But if you're designing a purpose-built bespoke cryptosystem (don't do this), and all other things are equal (i.e., the people doing the design and the verification work are of the same level of expertise as the people whose designs win eSTREAM or CAESAR or whatever), you might indeed bake in some obscurity to up the costs for attackers.

The reason that happens is that unlike cryptography as, like, a scientific discipline, practical information security is about costs: it's about asymmetrically raising costs for attackers to some safety margin above the value of an attack. We forget about this because in most common information security settings, infosec has gotten sophisticated enough that we can trivially raise the costs of attacks beyond any reasonable margin. But that's not always the case! If you can't arbitrarily raise attacker costs at low/no expense to yourself, or if your attackers are incredibly well-resourced, then it starts to make sense to bake some of the costs of information security into your security model. It costs an attacker money to work out your countermeasures (or, in cryptography, your cryptosystem design). Your goal is to shift costs, and that's one of the levers you get to pull.
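
To make that cost framing concrete, here's a toy sketch in Python (every number below is invented, and real models have many more terms): an attack is only worth running while its expected payoff exceeds what it costs the attacker to mount, so the defender's job is to push that cost past the payoff by some margin, and undisclosed countermeasures are just one more cost the attacker has to pay to discover.

    # Toy cost model; all figures are hypothetical.
    def attack_is_rational(payoff, base_cost, discovery_cost):
        """True if the attack still pays off after the attacker covers both the
        baseline cost of mounting it and the cost of reverse-engineering the
        defender's undisclosed countermeasures."""
        return payoff > base_cost + discovery_cost

    # A $10k payoff, $2k of baseline tooling, and $12k of reverse-engineering
    # work: the attack stops being worth running.
    print(attack_is_rational(10_000, 2_000, 12_000))  # False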

Everybody --- I think maybe literally everybody --- that has done serious anti-abuse work after spending time doing other information security things has been smacked in the face by the way anti-abuse is entirely about costs and attacker/defender asymmetry. It is simply very different from practical Unix security. Anti-abuse teams have constraints that systems and software security people don't have, so it's more complicated to raise attacker costs arbitrarily, the way you could with, say, a PKI or a memory-safe runtime. Anti-abuse systems all tend to rely heavily on information asymmetry, coupled with the defender's ability to (1) monitor anomalies and (2) preemptively change things up to re-raise attacker costs after they've cut their way through whatever obscure signals you're using to detect them.
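
A minimal sketch of that monitor-and-rotate loop (this is not any real platform's system; the signal names and thresholds are invented): score traffic with a handful of undisclosed signals, watch how known-bad traffic scores over time, and rotate the signal set once attackers have clearly adapted to it.

    import statistics

    # Hypothetical, deliberately obscure signals; real systems use far richer
    # feature sets and keep them secret for exactly the reasons above.
    SIGNALS = {
        "webdriver_flag": lambda req: 1.0 if req.get("webdriver") else 0.0,
        "robotic_timing": lambda req: 1.0 if req.get("interclick_stddev", 1.0) < 0.05 else 0.0,
        "ip_risk":        lambda req: float(req.get("ip_risk", 0.0)),
    }

    def score(request, signals=SIGNALS):
        return sum(fn(request) for fn in signals.values())

    def signals_burned(recent_bot_scores, baseline_bot_mean):
        # If known-bot traffic suddenly scores like human traffic, attackers
        # have worked out the current signals: rotate in new ones to re-raise
        # their costs.
        return statistics.mean(recent_bot_scores) < 0.5 * baseline_bot_mean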

Somewhere, there's a really good Modern Cryptography mailing list post from... Mike Hamburg? I think? I could be wrong there --- about the JavaScript VM Google built for YouTube to detect and kill bot accounts. I'll try to track it down. It's probably a good example --- at a low level, in nitty-gritty technical systems engineering terms, the kind we tend to take seriously on HN --- of the dynamic here.

I don't have any position on whether Meta should be more transparent or not about their anti-abuse work. I don't follow it that closely. But if Cory Doctorow is directly comparing anti-abuse to systems security and invoking canards about "security through obscurity", then the subtext of Alec Muffett's blog post is pretty obvious: he's saying Doctorow doesn't know what the hell he's talking about.




Dammit. Apologies to both of them.


Having worked in anti-abuse for nearly 20 years, I can say this is spot on. Even if it were possible, publishing "the algorithm" wouldn't solve anything: you can't publish it selectively, and the moment it's public it's obsolete.

All of this is an exercise in balancing information asymmetry and cost asymmetry. We don't want to add more friction than necessary for end users, but we somehow have to impose enough cost on abusers to keep abuse levels low.

Unfortunately for us, it generally costs attackers far less to bypass our systems than it costs defenders to sustain a block.

As defenders we work to exploit the things that favor us: signals and scale. Signals drive our systems, whether ML, heuristics, or signatures (more likely a combination). Scale lets us spot larger patterns in space or time, but only at a cost. Systems that are 99%+ effective are great, but at scale 99% is still not good enough: errors in either direction will slip by in the noise, especially targeted attacks.
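
Back-of-the-envelope arithmetic for that last point, with invented volumes, just to show how big the error tails get:

    daily_actions       = 5_000_000_000  # hypothetical platform volume
    abusive_fraction    = 0.001          # hypothetical share of actions that are abusive
    false_positive_rate = 0.01           # "99% effective" on legitimate traffic
    false_negative_rate = 0.01           # "99% effective" on abusive traffic

    legit   = daily_actions * (1 - abusive_fraction)
    abusive = daily_actions * abusive_fraction

    print(f"legit actions wrongly flagged per day: {legit * false_positive_rate:,.0f}")    # ~50 million
    print(f"abusive actions missed per day:        {abusive * false_negative_rate:,.0f}")  # ~50 thousand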

As a secondary step, some systems can provide recourse for errors: temporary or shadow bans, rate limiting, error reporting, and so on. Unfortunately, cost asymmetry comes into play again: it is far more costly to effectively remediate a mistake than it is to report one.
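
One concrete shape the rate-limiting option can take (a sketch only; the parameters are arbitrary) is a token bucket that throttles a flagged account instead of banning it outright, so a false positive degrades the experience rather than destroying it:

    import time

    class TokenBucket:
        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec   # long-run allowed actions per second
            self.capacity = burst      # short burst allowance
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    # A flagged account gets roughly one action per 5 seconds instead of a ban;
    # if the flag was a mistake, the user is slowed down but not locked out.
    limiter = TokenBucket(rate_per_sec=0.2, burst=3)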

All of this is suboptimal. If we had a better solution, it would be in place. These systems are expensive to build and maintain, and they won't go away unless something better comes along.

tl;dr version: assholes ruin it for everyone.


I think a big part of why this is a focus nowadays is that some "community standards" started crossing into political canards as abuse types, so normies who are not spammers are starting to bump into anti-abuse walls, which don't offer real appeal processes because those are too expensive. Now the political class is starting to demand expensive things as a result, and they have the guns.

In the past the rules were obvious, easy wins like "no child porn" and "no spam," so nobody really gave a shit about most anti-abuse work; people welcomed it because their normie behavior never ran into it.

To reduce the 'political' costs of their anti-abuse systems, these platforms need to drop community standards that are becoming political canards, and say that if political canards are going to be enforced one way or another, it has to be by law. That sets a much higher barrier for the political class to clear, because every political canard has multiple sides, and the camp across the aisle will fight such a law tooth and nail.

That might mean dropping painful things like enforcement against coronavirus misinformation, violent hate speech targeting LGBT groups in certain countries, and even voting manipulation, because you have to let the political class determine the rule set there, not the company itself. Otherwise it will be determined for you, in a really bad way, even in the USA.


I mean, all of this might be true or it might not be true, but either way: if Cory Doctorow is appealing to "security through obscurity" to make his argument, he's making a clownish argument.


Yeah, I'm not even thinking about Cory, just talking about this general issue and why it has become an issue in the past 7 years versus any other time. I really think it comes down to enforcing political things as rules, and I'm suggesting to any lurker who works in anti-abuse at big tech that you need to start putting a price on enforcing political rules, much like you do in many other parts of anti-abuse as you explained, or you're going to destroy the company eventually.

I know that would also be really hard at most big tech companies, because unfortunately there is a specific political-opinion culture there, and suggesting that you stop enforcing rules against LGBT hate speech is not going to go well with the general employee population. That puts them between a rock and a hard place, so it would probably have to be done confidentially.




