Security is about doubting what you're told, productively. Anyone who tells you otherwise should be doubted-- productively.
If you build security tools, verify that they actually work. If you write exploits, doubt every correctness claim until proven. If you make policy, assume ill intent until proven otherwise. These are the iron rules.
Knowing where to look, and what to care about, comes with experience. In the meantime, ask questions.
What does this even mean? This sounds more like being a roadblock to everyone else because you can say “but what about the security” and everyone has to dance to your tune.
Assume the user's search term or form data is going to contain SQL injection attempts. Design to handle that.
Assume the person calling up to reset a password is trying to access someone else's account, so make them prove their identity before resetting a password.
Assume employees will go poking around internal fileshares where they shouldn't have access, and design ACLs to keep them out.
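The first assumption above can be sketched in a few lines. This is a minimal illustration using Python's built-in sqlite3 module with a hypothetical products table; the point is that a parameterized query treats hostile input as data, never as SQL.

```python
import sqlite3

# In-memory database with a hypothetical products table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO products (name) VALUES ('widget'), ('gadget')")

# Assume the search term is hostile: never interpolate it into the SQL string.
user_input = "widget'; DROP TABLE products; --"

# Parameterized query: the driver binds the input as a value, not as SQL.
rows = conn.execute(
    "SELECT id, name FROM products WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing

# The table survives intact.
print(conn.execute("SELECT COUNT(*) FROM products").fetchone()[0])  # 2
```

The same design rule applies regardless of database or language: the query text and the user's data travel separately, so there is nothing to "escape" and nothing to get wrong.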
> and everyone has to dance to your tune.
Good. Developers who don't dance to that tune are responsible for headline after headline after headline of valuable user data being accessed, stolen, or leaked, and should risk having their professional software development licenses revoked and companies being fined by watchdogs.
> [devs who don't heed to security primacy] should risk having their professional software development licenses revoked and companies being fined by watchdogs.
But there is no such thing as a "software development license".
Is that just naive wishful thinking? I mean, do you think that the way to secure systems is layer after layer of pain-in-the-ass red tape and a heavily siloed organizational structure?
Instead of just forcing people to "dance to your tune" perhaps a more cooperative approach is better?
> But there is no such thing as a "software development license". Is that just naive wishful thinking?
It's normal wishful thinking.
> I mean, do you think that the way to secure systems is layer after layer of pain-in-the-ass red tape and a heavily siloed organizational structure?
Basically, yes. Just look at them: https://informationisbeautiful.net/visualizations/worlds-big... - when Marriott Hotels leaks data, that's a shame; when the big and prominent tech companies - LinkedIn, Google+, Twitter, Dell, Uber, Amazon, Sony - or the important ones - Equifax, Healthcare.gov, NHS, IBM Health Net - can't stop it, that's damning.
Do you want to trust your data to "a more cooperative approach" where people can debate whether security is too much of a negative nancy drag to bother with? Do you think we're at a stage now where this is winding down and everything important is basically secure? I don't. I think we've only seen the beginning of all the data leaks and sold information and dormant vulnerabilities, and that most systems are only considered "secure" because nobody has tried to attack them.
Hey, I agree we're only at the beginning of very bad things for information security problems.
However, putting blame so heavily on the devs isn't constructive. They're subject to the same forces as the infosec people-- clueless corporate hierarchies and derpy project managers that blindly push through deliverables.
And yes, I DO want to trust my data to orgs that take a cooperative problem-solving approach where people genuinely want to do the right thing, are encouraged to do so, and aren't afraid to challenge ideas.
Sorry, but the info-sec "mall-cop" approach doesn't do this.
My phrase at a previous company was that "not every bug is a vulnerability, but every vulnerability is a bug". In that context our goal as a security team was to drive the total bug count to zero, both by helping to design systems which could be kept bug free and by aggressively hunting the kinds of bugs we really cared about. In my opinion, that was a great way to do business-- with the caveat that you can't be the only team committed to having zero bugs.
This sort of thing is exhausting if you have to keep it up unsupported. Being the negative nancy who keeps pointing out the holes in other people's grand plans earns you no kudos.
Which is why security should be primarily a software development team offering high-quality, safe-by-default libraries and services. Instead of "you idiots failed to consider authorization" it becomes "here are the onboarding instructions for our authorization system, and here's your language's client."
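The "safe-by-default library" idea can be sketched as follows. Everything here is hypothetical (class and method names are invented for illustration, not any real library's API); the point is that the client the security team hands out refuses to act without an explicit authorization decision, so the calling team can't "forget to consider authorization."

```python
# A minimal sketch of a safe-by-default internal client: the authorization
# policy is supplied up front, and anything not explicitly allowed is denied.
# All names here are illustrative, not a real library's API.

class AuthorizationError(Exception):
    pass

class SafeClient:
    def __init__(self, allowed_actions):
        # Policy is declared at construction time, not checked ad hoc per call.
        self._allowed = set(allowed_actions)

    def request(self, principal, action, payload):
        # Deny by default: a (principal, action) pair must be allowlisted.
        if (principal, action) not in self._allowed:
            raise AuthorizationError(f"{principal} may not {action}")
        return {"action": action, "payload": payload}

client = SafeClient(allowed_actions={("alice", "read")})
print(client.request("alice", "read", "report.txt"))  # permitted
try:
    client.request("bob", "read", "report.txt")       # denied by default
except AuthorizationError as e:
    print(e)
```

The design choice is that the unsafe path simply doesn't exist in the library's surface: teams onboard by declaring policy, not by remembering to add checks.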
If you're unsupported it's both exhausting and pointless. If you're lightly supported, IMO it's generally better to fight the long fight for stronger support than to spend political capital on specific bugs or design flaws.
Depends on who you're leaking the holes to. Negative Nancy probably makes a living selling said flaws to foreign governments these days, because no one will listen to them.
A team I used to work with closely had a process where, if a development team didn't think an issue was important enough to fix and release, the security team was encouraged to find other buyers for the vuln. It was actually a pretty great way to both keep people on the same page and keep your security folks tethered to reality.
I'll get to what I meant in a moment, but first... I'm not sure what you think a security team should do, honestly.
Security teams exist because your management has a cross-cutting concern and needs people to represent that concern on the ground. If they don't do that, they aren't doing their job. So if they have a security concern they should say "but what about the security". If everyone then has to dance to that tune, it should suggest to you that your management thinks they have a point.
In terms of what I meant, security policy is rife with areas where you have to decide "is this borderline behavior acceptable". Your default answer to that should be "no", basically because otherwise your policy will grow too complicated to reason about quickly.
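The "default to no" principle reads like a firewall rule list: explicit allows, with an implicit deny for everything unmatched. A minimal sketch, with hypothetical rule names and fields:

```python
# Firewall-style policy: explicit allow rules, implicit deny at the end.
# Rule names and context fields are hypothetical, for illustration only.

RULES = [
    {"action": "reset_password", "requires": "verified_identity"},
    {"action": "read_fileshare", "requires": "acl_membership"},
]

def is_allowed(action, context):
    for rule in RULES:
        if rule["action"] == action:
            # A matching rule only passes if its requirement is explicitly met.
            return context.get(rule["requires"], False)
    # Anything the policy doesn't describe is denied. This keeps the whole
    # policy small enough to reason about quickly.
    return False

print(is_allowed("reset_password", {"verified_identity": True}))   # True
print(is_allowed("reset_password", {}))                            # False
print(is_allowed("export_all_data", {"verified_identity": True}))  # False
```

Because borderline behavior falls through to the final deny, the policy only ever grows by adding deliberate allows, never by accumulating special-case exceptions.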
Yes, you’re a pretty little snowflake. Unique among the world. No other profession or field has to suffer the hardships you have to suffer. Get over yourself. Everyone has this problem. Many people don’t even have a name for what they do. You have conferences, educational curricula including at the university level, professional organizations, etc.