> HTTP got basic auth, which is crap because plaintext password transmission happens...
Plaintext submission happens with HTML forms too. The problem with Basic is that the password goes with every request, which exposes a long-term credential to a higher level of risk. We want to exchange the long-term credential for a short-term one, ideally scope-limited. That is far less catastrophic to revoke, and gives you some granularity (you can at the very least have some operations prompt for the password again). It also means you can limit risk on the server: only one page has access to the long-term credentials, and that page can be more easily audited, or even hosted on dedicated servers.
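To make the exchange concrete, here's a minimal sketch of trading a password for a short-lived, scoped token. The names, in-memory stores, and scope model are invented for illustration; a real system would hash the stored passwords and persist tokens somewhere durable:

```python
import hmac
import secrets
import time

# Hypothetical in-memory stores, for illustration only.
PASSWORDS = {"alice": "correct horse battery staple"}  # long-term credentials
TOKENS = {}  # token -> (username, scope, expiry)

def login(username, password, scope="read", ttl=900):
    """Exchange the long-term credential for a short-term, scoped token."""
    stored = PASSWORDS.get(username)
    if stored is None or not hmac.compare_digest(stored, password):
        raise PermissionError("bad credentials")
    token = secrets.token_urlsafe(32)
    TOKENS[token] = (username, scope, time.time() + ttl)
    return token

def authorize(token, needed_scope):
    """Every subsequent request presents only the token, never the password."""
    entry = TOKENS.get(token)
    if entry is None:
        raise PermissionError("unknown token")
    username, scope, expiry = entry
    if time.time() > expiry:
        del TOKENS[token]
        raise PermissionError("token expired")
    if scope != needed_scope:
        raise PermissionError("insufficient scope")  # could re-prompt for password here
    return username

t = login("alice", "correct horse battery staple", scope="read")
print(authorize(t, "read"))  # -> alice
```

Revoking `t` (or letting it expire) costs nothing compared to rotating the password itself, which is the whole point of the exchange.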
WebAuthn has been the real savior here. Real cryptography has always been desirable for this, and removing per-site passwords is honestly just a bonus.
Imho WebAuthn is just the next problematic non-solution: Everything you do in WebAuthn you have to build up manually within the already-problematic forms+cookies+serverlogic+javascript stack. You cannot just instruct your webserver to do WebAuthn for /secret and everything works, no, you need tons and tons of code for it to work. Code that will have errors and problems. Code that is lots of complications on top of forms+cookies+serverlogic+javascript.
WebAuthn might solve a problem for the likes of Google and Facebook. But definitely not for the average web developer or server admin. And not for the user of some HTTP-based API. And the problem WebAuthn solves isn't really "we need better Auth", it is rather "we need better customer lock-in". Because the complexity and incompatibility of WebAuthn will just reproduce the debacle that was OpenID, only with the added "bonus" of being coupled to some hardware.
Pardon my ignorance, but why don't companies run their own nameservers?
I get why you don't want to run email - it's highly reputation driven. But as far as I can tell, running nameservers is no harder than running webservers or DB servers. HA is potentially even easier, because the system was designed that way from day zero.
I'm not suggesting I'd run one for my personal website, but twitter and github are already managing distributed networks for this. What are the services Dyn and others provide that are so invaluable?
The complex DNS products exist for a reason. For one they can do really good geo-routing. This makes your services go faster for a global audience.
Then, some of the big companies use multiple CDNs. You might want to use one CDN provider in Asia and another in Europe. Furthermore, you may want to select a CDN not only on the geo-routing dimension, but on arbitrary criteria. Imagine that you had a fixed budget for, say, CloudFront, and wanted to route as much traffic to them as you could, but never exceed your budget. Modern DNS services allow all of these complex scenarios.
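A rough sketch of that budget-capped routing criterion, with CDN names, prices, and hostnames all invented for illustration (real managed-DNS products express this declaratively as routing policy, not application code):

```python
# Hypothetical budget-capped CDN selection at the DNS-decision layer.
BUDGET = {"cloudfront": 1000.0}            # monthly spend cap in dollars
SPENT = {"cloudfront": 0.0}                # running total this month
COST_PER_GB = {"cloudfront": 0.085, "fallback-cdn": 0.11}

def pick_cdn(expected_gb):
    """Prefer the primary CDN while under budget, else fall back."""
    projected = SPENT["cloudfront"] + expected_gb * COST_PER_GB["cloudfront"]
    if projected <= BUDGET["cloudfront"]:
        SPENT["cloudfront"] = projected
        return "cloudfront.example.net"    # answer the DNS query with this CNAME
    return "fallback.example-cdn.net"

print(pick_cdn(100))  # routes to the primary CDN while well under budget
```

The same decision hook could also take the resolver's location into account, combining the budget rule with geo-routing.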
Furthermore, running your own DNS infrastructure is far from trivial these days. In May 2015 I gave this talk on defending DNS from DDoS:
Draw your own conclusions, but I'd say that running your own DNS makes you _more_ exposed to DDoS and extortion than using someone else's DNS infrastructure.
I've run my own dns infrastructure for a medium sized company. It could get attacked. And you better be sure your providers are OK with the bandwidth usage or they will shut you down. You could be down for hours. If you are targeted it could be very bad.
Amazon and providers like Dyn use anycast, so routing will normally be faster than what most companies would want to spend on their own DNS systems. And they can absorb most large attacks. Not to mention that Route 53's uptime is usually near 100%, and it's pretty cheap. I don't think you could build something cheaper yourself that offered similar features to Route 53.
Getting good, consistent, well-routed, fast, and secure DNS is harder than you'd think. Dyn typically touts speed as the main selling point for their DNS product; they achieve it through a large geographic distribution of name servers and anycast. Many hosts (like, say, DigitalOcean) run their own DNS but use something like CloudFlare Virtual DNS on top. Personally I was surprised so many large sites trusted Dyn; Route 53 is a more robust product for production and scale. In the past, I've seen hosting providers switch to Dyn, give them load, cripple them, and have to scramble to revert away. I'm not at all surprised this happened, even given the uptick in botnet traffic globally.
Route 53 ain't all that. We approached them about handling our customers' domains, and they said no way. They didn't have the capacity. Granted, this was 2 years ago, but Dyn has a much better reputation (still) than Route 53.
It's not that hard, but DynDNS can offer a much higher-performance, more reliable, and more advanced service. They use anycast with a lot more servers than is practical for each company to manage. They also offer advanced geo-routing, failover, etc.
In the face of a DDoS I'm not sure a custom nameserver network would do much better than a company that does that for a living. The only advantage is that attacks would have to target individual services (which has happened at other times).
Latency, Ops, Cost, specialised features like latency based routing (nearest datacentre to the user making a request).
It comes down to the same reasons someone uses the cloud or a CDN: why spend more running it yourself (staff, equipment, etc.) instead of getting someone whose job it is to run that specific piece of software to the absolute best of their ability?
It's just not a core competency of almost all companies.
The simple answer is that running DNS servers at scale is as hard as running anything else at scale. The cost of having someone else do it for you is often much lower than doing it yourself.
I have the opposite question: why does anyone even need to run their own non-caching nameserver?
My current understanding of DNS infra is: we have root nameservers which take record change requests, apply them to themselves, and send them on to the other listening root nameservers and cache nameservers. The DNS root nameservers should be extremely DDoS-resilient, more than any other kind of server. Considering millions of dollars get spent per year on domain keeping, it's fair to expect that too.
I'm only replying to say a personal thank you for pursuing this. Australia has such weak individual rights; it is so important that people like yourself put your hand up to pursue them on occasions like this when it matters. Keep going!
I didn't know about this latest turn of events! This quote in particular is extremely disturbing... I noticed it in the FOI rejection, but now they're telling this to the Senate:
> In relation to the source code for the Senate counting system, I am advised that publication of the software could leave the voting system open to hacking or manipulation.
This was in response to the Senate asking; this argument has nothing to do with the FOI request at all.
"I am advised that publication of the software could leave the voting system open to hacking or manipulation".
Well, if the problems are there, opening up the source to more eyes strikes me as the obvious thing to do; or should those with the knowledge of how to manipulate it as it stands be kept to the bare minimum? :)
But in any case, at least the meat of the implementation of the algorithm should be OK to release I would've thought - surely that isn't someone's intellectual property?
This is software we paid for and strikes me as pretty important to the democratic process, I'd like to have a bit of a look at it.
A smart cookie could vote in such a manner that, when the information is entered into the system, it crashes it? Maybe that's what they mean by manipulation...
Or, is it available online without any authentication other than knowing where it is? So if you know where it is, you could enter votes and then manipulate the election with those fake votes...
> A smart cookie could vote in such a manner that, when the information is entered into the system, it crashes it?
"Informal" votes -- ballots where the voter does not correctly fill out the ballot paper -- are rejected from the tally by the counters under supervision from scrutineers.
If you use hexadecimal, it will be rejected. If you use a very large number, it will be rejected. If you use weird unicode characters, it will be rejected. If it's anything other than a) a single [1] "above the line", or b) a fully filled-out ballot "below the line" consisting of the numbers 1 through n, where n is the number of candidates, it will be rejected.
If it's crashing on properly filled-out votes, there's a bigger problem.
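The formality rule as stated above is simple enough to sketch in a few lines. This is purely illustrative (the function name and ballot representation are invented, and real Senate ballot rules have more edge cases than this), but it shows why hex, huge numbers, and unicode junk never reach the counting code:

```python
# Hypothetical sketch of the validity rule described above: a ballot is
# "formal" only if it is a single [1] above the line, or a complete
# 1..n numbering below the line.

def is_formal(above_marks, below_marks, n_candidates):
    """Return True if the ballot would be accepted into the tally."""
    # Case a): exactly one above-the-line box, marked "1", nothing below.
    if above_marks:
        return above_marks == [1] and not below_marks
    # Case b): below-the-line marks must be exactly the integers 1..n.
    if not below_marks or not all(isinstance(v, int) for v in below_marks):
        return False  # hexadecimal, unicode digits, etc. are rejected here
    return sorted(below_marks) == list(range(1, n_candidates + 1))

print(is_formal([1], [], 5))           # True: a single [1] above the line
print(is_formal([], [3, 1, 2], 3))     # True: complete numbering below the line
print(is_formal([], [1, 2, 99], 3))    # False: a very large number is rejected
```

Anything that fails this gate is an informal vote and never gets entered, which is why a "crash the counter with a clever ballot" attack has nothing to grab onto.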
I hadn't seen that one :) Someone above mentions it's VB6 with embedded SQL Server, upgraded from COBOL [1]. I can sort of see how they don't want anyone looking at it now.
It seems to be an ongoing misconception among the public that obfuscation is part of good security. Know of any simple, clear articles I could point people to when they make these sorts of ("because hackers might see") claims?
"I am advised that publication of the software could leave the voting system open to hacking or manipulation"
Shouldn't this "advice" demand substantiation or evidence? Surely it's not enough to just receive "advice", right? If so, then any Joe could lie to this officer and they could write the same thing.
Also, what does that bit about "commercial-in-confidence" mean?
> Also, what does that bit about "commercial-in-confidence" mean?
The AEC does conduct some elections on a fee-for-service basis - things like union elections. They use a version of the same system to tally votes in those elections too. They say that the two systems are totally inseparable, to the point where you can't just cut out the code used in industrial elections. They also say revealing the code (though keep in mind it would still be copyrighted, so it couldn't be used by any other organisation) would cause them significant commercial disadvantage, because their software's particular efficiencies are what make them more competitive.
As you might suspect, I disagree with pretty much every part of what they claim there.