Hacker News

I agree that if the NSA is your threat model, then you shouldn't trust any company.

I also think we can learn a lot about security from Google even if they comply with federal court orders requesting user data. "Willingness to comply with federal court orders" and "competence at securing data against cyberattacks" are two different things.



"If the NSA is your threat model" sounds like something somebody from the 1980s would say. We've known for a long time now that they spy on everybody, that they share the data, and that they use Five Eyes to get the data they can't collect directly. The NSA is everybody's threat model, and it has been for a long time. Intervening in electoral politics, getting private companies to do their bidding... where have you been?


The turn this thread has taken has been interesting. A few comments ago, stcroixx wrote:

>Have you ever seen security done right anywhere? In my experience, it's always the bare minimum.

I think there's a lot of ground between doing the bare minimum for security and hardening your organization against the NSA. Every step towards greater security is a step I support, even if your organization isn't able to reach the "hardened against the NSA" level.

I'm happy for you if you want to harden yourself against the NSA, but I dislike black-and-white thinking. I care about harms to users which come from non-NSA threats too. Case in point: the original post about hackers selling 23andme data -- presumably to clients who are not the NSA, in some cases.

If every discussion of how to improve security gets derailed into a discussion of how evil the NSA is and how practically no one is secure against them, then organizations will continue to do security badly, and we'll see more breaches like this 23andme breach. Fatalism is a self-fulfilling prophecy. I see it every day here on HN.


When "your" military officers are selling out state secrets for $5k in bribes [0], you realize there's probably very little you can do to prevent bad actors in positions of trust from blowing up any security model anywhere. Your only choice is between minimizing your risk and hoping for the best, or rolling your own everything, taking part in nothing modern, and living and dying alone. And even then, there's still probably going to be a file on you somewhere.

[0] https://abcnews.go.com/US/2-us-navy-sailors-arrested-alleged...



What's interesting to me, remembering this, is that back then, even that late into Google's life, Google had enough people to actually be pissed off about this and try to do something about it. Google of today? I have the sense that management would just shrug its shoulders and let any nation-state-backed group that pleases keep on violating users.


There's a mutual cynicism here. If Google's users think: "Google will violate my privacy no matter what, there's no point in complaining", then Google's executives will think: "Users will believe we are weak on privacy no matter what, there's no point in protecting user privacy".

To break the cycle, it helps to share concrete evidence of Google misbehaving rather than presenting it as a fact everyone already knows. You get what you incentivize. If the feeling that Google sucks on privacy isn't linked to specific Google misbehavior whenever it's brought up, Google execs will correctly conclude that users will feel the same no matter what decisions they actually make.

As a concrete point for discussion, the ZDNet article states:

>After the news about NSA snooping first broke over the summer, Google decided it was time to start encrypting its datacenter-to-datacenter communications.

Is there an analogous security story from more recently where Google didn't try to address the problem in a similar way?



