The many mobile browsers that hide the address bar are training people to ignore website URLs.
Sites that use lots of nonsensical, malware-ish URL redirects (Google and Microsoft are both guilty) train people to accept random URLs.
I guess the chief culprits are email tracking links. Everyone, including banks, uses them. The tracking domains often have nothing in common with the destination URL, which teaches people to disable or ignore email provider warnings and click any link in an official-sounding email.
Banks and credit card companies have always been the absolute worst offenders here: they require people to use hidden iframes from all sorts of acmegenericsecure.net domains, profess all the while to be the high priests of good practice with their absurd PCI racket, and on top of that ask people to install random third-party software just to use their websites, because browsers apparently aren't good enough.
Google's use of gvt1.com had me convinced for the longest time that I was backdoored by some unknown branch of the government that was either not bright enough to cover its tracks or ballsy enough to just say "yea, it's us. the government. and we're in your computer".
Just last year I told Ikea that they were using a phishing-like URL in my country, something like makeyourhomegreat.com (I can't remember the exact URL). They did stop using it, but I'm not sure whether that was because of me or for some other reason.
The sensible thing would be to have dedicated software for anything that involves handling money, banking in particular. Then you could tell people to never interact with their bank through a web browser.
> The many mobile browsers which hide the address bar are training people to ignore website urls.
This is my biggest complaint about forcing users to use apps to browse a website: it hides everything. I have no idea whether any given app is actually using SSL. Oversights have happened before, at Credit Karma, Fandango, and others.
RIP Microsoft. IMO this is a big reason why their phones/app store died. Try finding the real VLC player in the store: last I checked they didn't even have an app store version (but you'll find tons of results for it).
There's a real VLC app in both the Windows and WP app stores. In fact, there were two official VLC apps for Windows Phone: one was written in C/C++ and the other was a UWP version.
> email tracking links ... domains have nothing in common with the destination URL
The tradeoff has been either CNAME-ing your own subdomain to your Email Service Provider’s tracking domain, which gets you a recognizable(-ish) URL but has historically prevented https links, or using the ESP’s tracking domain directly, which allows https but makes for sketchy-looking URLs.
I’d think Let’s Encrypt would make it possible to offer https on white-labeled (CNAME’d) tracking domains. Seems like an opportunity for some enterprising ESP.
(Yes, two other options are not tracking email links, or running your own tracking. I’m going to assume these are not realistic for most marketing departments.)
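For illustration, this is what the white-labeled arrangement looks like from the outside. A rough Python sketch using the third-party dnspython package; both domain names are invented for the example and won't actually resolve:

```python
import dns.resolver  # third-party: dnspython

# Hypothetical setup: the marketer's own subdomain
# "click.example-retailer.com" is CNAME'd to the ESP's tracking host.
# The recipient sees a recognizable example-retailer.com link, but the
# request ultimately terminates at the ESP's infrastructure.
answers = dns.resolver.resolve("click.example-retailer.com", "CNAME")
for rdata in answers:
    print("CNAME target:", rdata.target)  # e.g. track.example-esp.net.
```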
> I’d think Let’s Encrypt would make it possible to offer https on white-labeled (CNAME’d) tracking domains.
Technically, Let's Encrypt is not unique here. AFAIK, most CAs allow subdomain certs (and only validate ownership of said subdomain, not the parent domain).
Relying on users to manually confirm that the domain is correct has never been a good strategy. Is the user supposed to tell microsoft.com from micros0ft.com from microsoft.co from microsoft-corp.com?
I recently saw a MacKeeper landing page URL: "www.apple.com-spamsite.info/landing"
It was truncated in the URL bar just enough to look like "www.apple.com".
The landing page, of course, was a clone of the apple.com website, with a "Scan Computer" button that did the ol' trick of showing you some animations before suggesting you use MacKeeper to clean up 17 viruses.
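A quick Python illustration of why that URL works as a trick: everything to the left of the registrable domain is attacker-chosen noise. (The naive last-two-labels split below is a simplification; real code should use a public-suffix list, since this breaks on suffixes like .co.uk.)

```python
from urllib.parse import urlsplit

# The URL from above: "www.apple.com" is merely the start of the
# hostname, not the registrable domain.
url = "https://www.apple.com-spamsite.info/landing"
host = urlsplit(url).hostname
print(host)                            # www.apple.com-spamsite.info

# Naively take the last two labels as the registrable domain.
print(".".join(host.split(".")[-2:]))  # com-spamsite.info -- not apple.com
```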
Another thing is the "log in with Google/Facebook" buttons that redirect you to a page where you enter your password. It always makes me nervous that a website could create a fake Google/Facebook login page and collect my password, so I make a point of looking at the login page extra carefully. But I bet the average computer user doesn't do this.
Sounds like you're talking about the OAuth authentication flow, which is designed to use a separate window/iframe for entering credentials. This allows the application to authenticate the user without ever having access to the cleartext credentials.
That’s provided that the login page you enter your credentials into actually is Google/Facebook. The app can easily open a login page that looks identical but actually submits your credentials somewhere nefarious.
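For reference, the legitimate flow looks roughly like this sketch (the endpoint is Google's real authorization URL; client_id, redirect_uri, and state are placeholders), which is why checking the address bar host is the only real defense:

```python
from urllib.parse import urlencode, urlsplit

# A sketch of the authorization redirect in the OAuth code flow.
params = {
    "client_id": "YOUR_CLIENT_ID",              # placeholder
    "redirect_uri": "https://app.example/cb",   # placeholder
    "response_type": "code",
    "scope": "openid email",
    "state": "random-anti-csrf-token",          # placeholder
}
auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

# The only thing a user can really verify is that the page asking for
# the password is served from accounts.google.com -- a pixel-perfect
# fake hosted anywhere else passes every other visual check.
print(urlsplit(auth_url).hostname)  # accounts.google.com
```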
Users are not discerning enough to look for the padlock; they'll get taken either way. They are not the problem here.
The bigger problem with this is that the paths being requested can't be monitored by intermediary devices unless you're MITMing all outbound traffic.
It becomes impossible to tell whether a domain is simply cybersquatting or up to something more sinister. '/' may return a parking page, '/login' may return a phishing page, and '/?id=c4010087800cf4e5753c80c9afbe0fe5' may be a malware callback, but as far as you can tell from your network logs, all traffic to httpx://www.xn--bbox-vw5a.com is simply requesting '/'.
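To make that concrete, here's a Python stdlib sketch. The only plaintext a passive observer gets is the SNI hostname in the TLS ClientHello; the request line, path and query included, only ever travels inside the encrypted channel (example.com stands in for any HTTPS site):

```python
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    # "server_hostname" becomes the SNI field -- this is the part that
    # crosses the wire in cleartext during the handshake.
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        # The request line, including path and query string, is sent
        # only inside the encrypted tunnel, invisible to network logs.
        tls.sendall(b"GET /login?id=c4010087800cf4e5753c80c9afbe0fe5 HTTP/1.1\r\n"
                    b"Host: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))
```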
The percentage of people using network inspection for “good” like malware/phishing filtering is much lower than the percentage using it for bad stuff like ad/cancer tracking.
Still, I wish it was easier for me to locally MITM a single application running on my computer/phone. I find myself wanting to do this roughly every month.
The cycle continues and will continue to cycle. The only proper browsing hygiene takes place between the chair and keyboard, or touch screen. Sadly, it won't change. Humans are humans.
Well, yes (that it’s only 50% is surprising), but realistically the presence or absence of a padlock is a terrible security indicator. Long term, I would hope it goes away and you only ever get an “insecure” indicator.
There are still valid reasons for not using SSL for everything: internal-facing sites, device admin pages, development servers, etc. If I have to deal with obnoxious warning pages while doing local Node.js development and testing, I’m switching browsers.
Internal-facing site: so hopefully no logins and no confidential info, right? Similar for dev servers.
For local development, localhost (and 127.0.0.1, and ::1) is explicitly within the definition of "secure" used by browsers and the HTML specs.
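Roughly, the rule browsers apply looks like this toy Python check (my own simplification of the spec's "potentially trustworthy origin" algorithm, not the normative text):

```python
import ipaddress

def is_secure_context_host(host: str) -> bool:
    # localhost names and loopback IPs count as secure without TLS.
    if host == "localhost" or host.endswith(".localhost"):
        return True
    try:
        # strip brackets from literal IPv6 hosts like "[::1]"
        return ipaddress.ip_address(host.strip("[]")).is_loopback
    except ValueError:
        return False

print(is_secure_context_host("127.0.0.1"))        # True
print(is_secure_context_host("::1"))              # True
print(is_secure_context_host("dev.example.com"))  # False
```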
Device admin pages are about the only place you could legitimately claim SSL isn't viable (because it isn't). But that's a problem that needs to be solved: if you can't make a secure connection to your device, then anyone can intercept the login credentials. The various pairing steps required for a lot of new devices exist explicitly to act as a side channel for establishing trust (a shared key, or certs, or whatever), because until you have a source of trust that doesn't come from the network, you can't trust anything you receive from the device (and the device can't trust you).
Noob question: if a.com gets a certificate, then b.a.com can use the same cert, right? As in the example of the FB impostor on 000webhost.
So, in that same vein, can a TLD get a certificate? For example, if com gets a certificate, does anything.com then have a valid certificate? Also, can I issue a cert specifically for d.c.b.a.com?
A certificate can list an effectively unlimited number of names (CAs impose an arbitrary limit like 100; nobody is sure of the maximum that could work), and the subscriber has to achieve proof of control over all of those names to get the cert.
Each name is either an exact fully qualified domain name, which matches only that single name, or a "wildcard" like *.example.com, which matches any DNS name with exactly one label (a part with no dots in it, essentially) in place of the asterisk and an exact match for the rest.
Thus a wildcard in com, even if it could exist (issuing such a thing is forbidden), would not match service.example.com, only the exact name example.com itself.
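A tiny Python sketch of that one-label rule (my own toy matcher for illustration, not the actual validation logic browsers ship):

```python
# Toy wildcard matcher: the asterisk stands in for exactly one DNS label,
# so "*.example.com" matches "service.example.com" but not
# "a.b.example.com" (two labels) or "example.com" (zero labels).
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if len(p) != len(h):
        return False
    return all(pl == "*" or pl == hl for pl, hl in zip(p, h))

assert wildcard_matches("*.example.com", "service.example.com")
assert not wildcard_matches("*.example.com", "a.b.example.com")
assert not wildcard_matches("*.example.com", "example.com")
# The (forbidden) wildcard in com would match example.com itself,
# but not service.example.com:
assert wildcard_matches("*.com", "example.com")
assert not wildcard_matches("*.com", "service.example.com")
```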
Yes, you can have a single certificate for both a.com and b.a.com. You can also have one for a.com and *.a.com.
No, you can't get *.com. Typically, at least for known root CAs, you have to prove control of your domain. If you own a.com, they'll ask you either to put a file at a.com/random or to register random.a.com. If you try to do that with .com, you'll likely fail (but please feel free to try and prove me wrong!).
Yes, you can get a certificate for d.c.b.a.com; I don't see any reason why not, as long as you control a.com. Unless your specific root CA has constraints on the depth of the domains.
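You can see the many-names behavior for yourself by pulling the subjectAltName list out of a live certificate. This Python stdlib sketch uses example.com, whose cert covers a handful of related names:

```python
import socket
import ssl

# Connect and read the validated certificate's subjectAltName entries.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        cert = tls.getpeercert()

# Each entry is a ("DNS", name) pair: one cert, many names.
for kind, name in cert["subjectAltName"]:
    print(kind, name)
```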
EV certificates are a compromise between the CAs and the browser vendors (today effectively all the OS vendors, except Mozilla, which stands in for the free Unixes).
The CAs wanted a product with a distinct UI that could drive sales of a more expensive certificate.
The browsers wanted CAs to do a better job of validation.
So the agreement was: we'll add a fancy UI for these certificates if you promise to ensure all your certificates are properly validated.
But validating the shiny organisation data in the EV cert, while useful, is not a major priority for the browsers; a machine can't do anything with it. The browsers mostly care about validating the Fully Qualified Domain Name, which is done just the same in DV and OV certificates.
Trying to solve security problems with EV means relying on fallible humans not to make mistakes. It won't work. If it makes you feel better to try, be my guest, but the browser vendors have been there and tried that.
Mostly, but to be fair they have also gotten quite a bit cheaper: now merely hundreds of dollars a year, down from thousands... which is a lot more than OV or DV, but not a huge barrier to entry.
I remember that people were warned to avoid doing sensitive stuff on websites without the padlock. I don't remember any attempt to suggest that the padlock implied some sort of validity.
Most users do not understand the “necessary but not sufficient” condition. They need an “if (condition) { SAFE; } else { NOT SAFE; }” test, not an endless checklist, and the security community has continuously failed to deliver one.
I'm pretty sure that's actually impossible. If someone registers a domain and cert that's essentially a homoglyph attack against a common website, you're basically stuck with heuristics to detect it. You'd need a global database of targetable domains that supports similarity checking with arbitrary Unicode, plus some kind of fuzzy hash of the website to tell whether the site your user is looking at is actually an imitation or just happens to legitimately have a similar name. It will be messy at best.
And yet such a feature would more or less solve the problem for a large majority of users. What you describe as terrible seems like a pretty good feature to me: if a URL is visually indistinguishable from amazon.com yet differs at the byte level, it's probably not legit.
If I were implementing it, I would render the domain text and then check how significantly the pixels differ from the nearest "known" domain. We used to do this with render tests, where there was a bit of noise.
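Or, skipping pixels, a cruder character-level version: a toy Python sketch of the "skeleton" idea. The confusables table here is a tiny illustrative subset; real systems use something like Unicode TR39's full confusables tables.

```python
import unicodedata

# Map a few known look-alikes to their ASCII counterparts (digits and
# Cyrillic a/e shown here -- illustrative only, far from complete).
CONFUSABLES = {"0": "o", "1": "l", "\u0430": "a", "\u0435": "e"}

def skeleton(domain: str) -> str:
    s = unicodedata.normalize("NFKC", domain.lower())
    return "".join(CONFUSABLES.get(ch, ch) for ch in s)

TARGETS = {"microsoft.com", "apple.com", "amazon.com"}

def looks_like_target(domain: str) -> bool:
    # Flag domains whose skeleton collides with a high-value target
    # while differing from it at the byte level.
    return skeleton(domain) in TARGETS and domain not in TARGETS

print(looks_like_target("micros0ft.com"))  # True: visually close, different bytes
print(looks_like_target("microsoft.com"))  # False: the genuine domain
```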
Whatever happened to the Web of Trust thing? We could have a curated one so that an extension can indicate:
- whether the domain is substantially similar to a trusted one
- recent data breaches
- whether the site has been known to sell data
Those could be indicated by different, intuitive colors:
- red - high likelihood of phishing/malware
- yellow - recent data breach; user intervention required, but the service itself isn't fraudulent
- green - reasonably safe
- green padlock - trusted
It would be awesome to get all major browser vendors on board to ship it by default, and to make sure that data is never sent upstream (the extension would download a database instead).
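For what it's worth, the lookup side could be nearly trivial. Here's a hypothetical Python sketch (all domains and the schema are made up) of consulting a locally downloaded ratings database, so nothing about your browsing leaves the machine:

```python
# Hypothetical local ratings file, periodically re-downloaded in bulk;
# the extension only ever reads it, so no browsing data goes upstream.
RATINGS_DB = {
    "phishy-example.test": "red",       # high likelihood of phishing/malware
    "breached-example.test": "yellow",  # recent data breach
    "fine-example.test": "green",       # reasonably safe
}

def rating_for(domain: str) -> str:
    # Unknown domains get no badge rather than a false sense of safety.
    return RATINGS_DB.get(domain, "unrated")

print(rating_for("breached-example.test"))  # "yellow"
```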
I loved MyWot! I was one of the earlier users, from around 2007 until 2009 or so. It helped teach intuition about sketchy, dangerous, and bloated web pages. The community was small and plenty of sites were unrated, though a surprising number still had ratings (and I was fairly active myself).
To answer your question: privacy addons started selling our data. I remember Adblock Plus added "Acceptable Ads" around 2012. MyWot redesigned in 2013. Times were changing. Sure enough, in 2016 they were found to be selling sensitive user data. It's not like this was a surprise, since it's the reason I had left years earlier.
These days, I'd rather reduce my browser dependency. I hope the community finds a way to filter the 1% of useful data on the internet into something like a .txt file, or anything else that doesn't make me solve puzzles to grep.
There is something disingenuous and false about those who have been pushing SSL 'vehemently' on the pretext of concern for end-user privacy and surveillance.
It would be slightly more credible if the tech community's response, in both word and deed, to Snowden's and Assange's revelations and to the invasive surveillance by Google, Facebook, and others were not so embarrassingly inactive.
One can argue about degrees and about doing both, but in this case it seems all the 'concern' gets expended on SSL, leaving no energy for the far more pervasive SV surveillance culture that the tech community props up without protest or even leaks.
Do you think things would be better if the effort used to make SSL prevalent had instead been used to make people stop using Google and Facebook?
Let's Encrypt has received what, a million in funding? I doubt it's much more. Do you think putting that money into convincing people would lead to a larger change?
Great, so every 3 months, when I have to manually renew all the LetsEncrypt certs I manage for clients, I know it's giving them zero protection. It reminds me of the British government's decision to put road humps into all the roads in the towns and cities of the land just to deter speeding drivers: all it produced was more work for garages mending damaged exhaust pipes.
Why haven't you automated it? It's not exactly hard to automate the renewal; that's the great thing about Let's Encrypt, and the whole point of the 3-month period is to encourage you to automate this stuff.
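For context on what the automation amounts to: during the http-01 challenge, Let's Encrypt fetches a token file over plain HTTP from the domain being validated, and clients like certbot serve it for you. A hand-rolled Python sketch of just the responder side (the token and key-authorization values below are made up):

```python
# Minimal http-01 responder sketch: serve the token file under
# /.well-known/acme-challenge/ while the CA validates the domain.
# In practice certbot or acme.sh handles all of this; the point is
# only that there is no manual step that needs repeating every 3 months.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical token -> key-authorization pair from the ACME server.
CHALLENGES = {"sometoken123": b"sometoken123.accountkey-thumbprint"}

class AcmeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        prefix = "/.well-known/acme-challenge/"
        token = self.path[len(prefix):] if self.path.startswith(prefix) else None
        if token in CHALLENGES:
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            self.wfile.write(CHALLENGES[token])
        else:
            self.send_error(404)

HTTPServer(("", 80), AcmeHandler).serve_forever()  # needs root for port 80
```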
Any reason you couldn't use the http-01 challenge? I think there are thousands of people who are using Let's Encrypt and have automated it successfully. So, regarding what you just said:
> all the LetsEncrypt certs I manage for clients
... if this involves some technical reason why automation won't work, I think that's the problem.
But I'd be more inclined to believe you if you just told me the truth: your clients periodically need your assistance for other things, but they weren't going to call, because as every good salesperson knows, "if you don't call, they don't come"... and since they trust you already, this is a reliable door-opener that gets you back into their offices, where you get to bill for something even if this time they didn't need anything else. It gets you valuable face time and a pretty reliable, even if only nominal, payday.
If that's not it, then tell me that's not it, but... I think that's what you're doing. (And there's nothing wrong with that.)