Half of All Phishing Sites Now Have the Padlock (krebsonsecurity.com)
166 points by snowy on Nov 28, 2018 | 73 comments


On the bright side, at least your data won't get stolen by a fourth party while it's being stolen by a third party.


Your data is being sold by the 1st party.


Your data is sold by the second party. The first party gives it away for free most of the time...


This is the problem with the two party system.


...you mean conversations/dialogues/communicating/reaching out to a server to request information?


no, you're giving your data to the first party.


Well, someone's having a party.


Data aggregation party. They'll resell it later.


And the phishing site won’t steal your cryptocurrency (I mean, unless that was its original purpose :) ).


The many mobile browsers which hide the address bar are training people to ignore website URLs.

Sites that use lots of nonsensical malware-ish URL redirects (Google and Microsoft are guilty) train people to accept random URLs.

I guess the chief culprits are email tracking links. Everyone, including banks, uses them. Often the tracking domains have nothing in common with the destination URL. This teaches people to disable or ignore email provider warnings and click any link in official-sounding emails.


Banks and credit card companies have always been the absolute worst offenders for this, requiring people to use hidden iframes from all sorts of acmegenericsecure.net domains, and all the while professing to be the high priests of good practice with their absurd PCI racket, not to mention asking people to install random third party software just to use their websites because browsers apparently aren't good enough.


Chase likes to send emails from the not-at-all-suspicious "acctmanagement.com" domain[1].

[1]: https://twitter.com/8x5clPW2/status/1046244493203263488


Google's use of gvt1.com had me convinced for the longest time that I was backdoor'd by some unknown branch of the government that was either not bright enough to cover their tracks or ballsy enough to just say "yea, it's us. the government. and we're in your computer"


Their reply to your tweet pisses me off


Not as bad as T-Mobile defending the practice of storing passwords in plaintext!


Defending at least implies engaging with the complaint.

Chase just said "go file a ticket". In other words, "fuck off".


Just last year I told Ikea that they use a phishing-like URL in my country. Something like makeyourhomegreat.com (I can't remember the exact URL). They actually stopped using that URL, but I'm not sure whether it was because of me or for some other reason.


>not to mention asking people to install random third party software just to use their websites because browsers apparently aren't good enough.

Is this still a thing? Maybe it was true back in the days when ActiveX was still common, but not now.


Check out Rapport; big banks prompt you to install it every time you visit their login page.


Which banks? Chase, BofA, and Amex do not, and I'd like to stay far away from those that do.


The sensible thing would be to have dedicated software for anything involving handling money, in particular banking. Then you could tell people to never interact with their bank using a web browser.


> The many mobile browsers which hide the address bar are training people to ignore website urls.

This is my biggest complaint about forcing users to use apps to browse a website -- it hides everything. I have no idea if any given app is actually using SSL. Oversights have happened before at Credit Karma, Fandango, and others.



IIRC you can still submit an app with exemptions to this rule; you will just have to provide a justification for it that Apple deems reasonable.


On the positive side, with apps, there's a lower chance to land on a phishing app, because apps need to be reviewed before they appear on the store.


RIP Microsoft. IMO this is a big reason why their phones/app store died. Try finding the real VLC player in the store - last I checked they don't even have an app store version (but you'll find tons of results for it).


There's a real VLC app in both the Windows and WP app stores. In fact, there were two official VLC apps for Windows Phone: one was written in C/C++, and the other was a UWP version.


^ Though, it should be noted that the fact that there's two official versions speaks to a somewhat parallel issue.


> email tracking links ... domains have nothing in common with the destination URL

The tradeoff has been either CNAME-ing your own subdomain to your Email Service Provider’s tracking domain, which gets you a recognizable(-ish) URL but has historically prevented https links, or using the ESP’s tracking domain directly, which allows https but makes for sketchy-looking URLs.

I’d think Let’s Encrypt would make it possible to offer https on white-labeled (CNAME’d) tracking domains. Seems like an opportunity for some enterprising ESP.

(Yes, two other options are not tracking email links, or running your own tracking. I’m going to assume these are not realistic for most marketing departments.)


> I’d think Let’s Encrypt would make it possible to offer https on white-labeled (CNAME’d) tracking domains.

Technically, Let's Encrypt is not unique here. AFAIK, most CAs allow subdomain certs (and only validate ownership of said subdomain, not the top-level domain).

Let's Encrypt just makes it scalable financially.


Relying on users manually confirming that the domain is correct has never been a good strategy. The user is supposed to tell microsoft.com from micros0ft.com from microsoft.co from microsoft-corp.com?
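A toy illustration of why relying on the user fails, and what automated help might look like: a "skeleton" check that maps lookalike characters to a canonical form before comparing. This is a drastically simplified stand-in for real Unicode confusable detection; the mapping table and brand list are made up for the example.

```python
# Map visually confusable characters to a canonical form. This tiny
# table is illustrative; real confusable detection (Unicode TR39)
# uses a much larger mapping.
CONFUSABLES = {"0": "o", "1": "l", "\u0430": "a", "\u03bf": "o", "rn": "m"}

def skeleton(domain: str) -> str:
    """Reduce a domain to a canonical 'skeleton' for comparison."""
    s = domain.lower()
    for lookalike, canonical in CONFUSABLES.items():
        s = s.replace(lookalike, canonical)
    return s

KNOWN_BRANDS = {"microsoft.com"}
BRAND_SKELETONS = {skeleton(b) for b in KNOWN_BRANDS}

def looks_like_known_brand(domain: str) -> bool:
    """True if the domain imitates a known brand without being it."""
    return domain not in KNOWN_BRANDS and skeleton(domain) in BRAND_SKELETONS

print(looks_like_known_brand("micros0ft.com"))  # True: '0' passes for 'o'
print(looks_like_known_brand("microsoft.com"))  # False: the real thing
print(looks_like_known_brand("microsoft.co"))   # False: truncation slips through
```

The last case shows the limits: microsoft.co isn't a homoglyph, so a skeleton check misses it and you're back to fuzzier heuristics like edit distance.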


I recently saw a MacKeeper landing page URL: "www.apple.com-spamsite.info/landing"

It was truncated in the URL bar enough to look like "www.apple.com".

The landing page, of course, was a clone of the apple.com website, with a "Scan Computer" button that did the ol' trick of showing you some animations before suggesting you use MacKeeper to clean up 17 viruses.


That is modern art!


Another thing is the login with Google/Facebook buttons that do a redirect where you enter your password. It always makes me nervous that a website could create a fake Google/Facebook login page and collect my password, and I make a point of looking at the login page extra carefully. However, I bet that the average computer user doesn't do this.


Sounds like you are talking about the OAuth authentication flow which is designed to use a separate window/iframe for entering credentials. This allows the application to authenticate the user without ever having access to the cleartext credentials.
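A rough sketch of that redirect step (the endpoint, client ID, and redirect URI below are placeholders, not any real provider's): the app sends the browser to the identity provider with a random state token, and the password is only ever typed on the provider's own page.

```python
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

# Placeholder registration values -- not a real provider or client.
AUTH_ENDPOINT = "https://accounts.example-idp.com/o/oauth2/auth"
CLIENT_ID = "my-app-client-id"
REDIRECT_URI = "https://myapp.example.com/callback"

def build_authorization_url():
    """Build the browser redirect for the OAuth authorization-code flow.

    The random 'state' value is checked again on the callback, binding
    the response to this session (CSRF protection). The app later
    exchanges the returned code for tokens and never sees the password.
    """
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid email",
        "state": state,
    }
    return AUTH_ENDPOINT + "?" + urlencode(params), state

url, state = build_authorization_url()
query = parse_qs(urlparse(url).query)
print(query["state"][0] == state)  # True
```

Note the flow only protects the password if the page showing the login form really is the provider's, which is exactly the worry in the parent comment.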


That’s providing that the login page you enter your credentials in actually is Google/Facebook. The app can easily open a login page which looks identical, but actually submits your credentials somewhere nefarious.


Whatever.ms/login? Mont-fricking-serrat? Well that's all but screaming FAKEFAKEFAKE.


Users are not discerning enough to look for the padlock; they'll get taken either way. They are not the problem here.

The bigger problem with this is that the paths being requested can't be monitored by intermediary devices unless you're MITMing all outbound traffic.

It becomes impossible to tell whether a domain is simply cybersquatting or if they're up to something more sinister. '/' may return a parking page, '/login' may return a phishing page, and '/?id=c4010087800cf4e5753c80c9afbe0fe5' may be a malware callback, but as far as you can tell from your network logs all traffic to httpx://www.xn--bbox-vw5a.com is simply requesting '/'.
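That visibility split can be sketched with a trivial URL parse (the hostname is taken from the comment above): the hostname leaks via DNS and the TLS SNI extension, while the path and query string ride inside the encrypted channel.

```python
from urllib.parse import urlparse

def passive_observer_view(url: str) -> dict:
    """Split an HTTPS URL into what a passive network observer can log
    (the hostname, via DNS and TLS SNI) versus what stays inside the
    encrypted channel (the path and query string)."""
    parts = urlparse(url)
    encrypted = parts.path + ("?" + parts.query if parts.query else "")
    return {"visible": parts.hostname, "encrypted": encrypted}

URLS = [
    "https://www.xn--bbox-vw5a.com/",
    "https://www.xn--bbox-vw5a.com/login",
    "https://www.xn--bbox-vw5a.com/?id=c4010087800cf4e5753c80c9afbe0fe5",
]
for u in URLS:
    print(passive_observer_view(u)["visible"])  # always www.xn--bbox-vw5a.com
```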


I think it’s still a worthwhile trade off.

The percentage of people using network inspection for “good” like malware/phishing filtering is much lower than the percentage using it for bad stuff like ad/cancer tracking.


Still, I wish it was easier for me to locally MITM a single application running on my computer/phone. I find myself wanting to do this roughly every month.


There are tools like Fiddler or Charles Proxy that make it easy.


Only half? I'd expected them to nearly all use ssl by now. C'mon, phishers, it's free! ;-)


The cycle continues and will continue to cycle. The only proper browsing hygiene takes place between the chair and keyboard, or touch screen. Sadly, it won't change. Humans are humans.


Well yes (that it’s only 50% is surprising), but realistically the presence/absence of a padlock is a terrible security indicator. Long term I would hope it goes away and you get an “insecure” UI only.


There are still valid reasons for not using ssl for everything. Internal facing sites, device admin pages, development servers etc. If I have to deal with obnoxious warning pages doing local Node.js development & testing I’m switching browsers.


Internal-facing site: so hopefully no logins, no confidential info, right? Similar for dev servers.

For local development, localhost (and 127.0.0.1, and ::1) is explicitly in the definition of "secure" used by browsers and the HTML specs.

Device admin pages are about the only place you could legitimately claim that SSL isn't viable (because it isn't). But that's a problem that needs to be solved: if you can't make a secure connection to your device, then anyone can intercept the login creds. The various pairing steps required for a lot of new devices are explicitly there to act as a side channel to establish trust (either a shared key, or certs, or whatever), since until you have a source of trust that isn't from the network, you can't trust anything you receive from the device (and the device can't trust you).


Noob question, if a.com gets a certificate, then b.a.com can use the same cert, right? As in the example of the fb impostor in 000webhost.

So, in that same vein, can a TLD get a certificate? For example, com gets a certificate, so now anything.com has a valid certificate. Also, can I issue a cert specifically for d.c.b.a.com?


In the Web PKI, which is what you care about:

A certificate can have an effectively unlimited number of names listed (CAs impose an arbitrary limit like 100; nobody is sure of the maximum that could work), and the subscriber has to achieve proof of control for all these names to get the cert.

Each name can either be an exact fully qualified domain name, and will match only that single name, or it can be a "wildcard" like *.example.com which matches any DNS name with exactly one label (a part with no dots in, essentially) where the asterisk is and the rest an exact match.

Thus, a wildcard in com, even if it could exist (it is forbidden to issue such a thing), would not match service.example.com, only single-label names like example.com itself.
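The matching rule described above can be sketched as follows (simplified: real validation per RFC 6125 has further restrictions, e.g. the wildcard must be the leftmost label):

```python
def cert_name_matches(pattern: str, hostname: str) -> bool:
    """Simplified Web PKI name matching: a wildcard covers exactly one
    DNS label, so *.example.com matches a.example.com but neither
    b.a.example.com nor example.com itself."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(cert_name_matches("*.example.com", "a.example.com"))    # True
print(cert_name_matches("*.example.com", "b.a.example.com"))  # False
print(cert_name_matches("*.example.com", "example.com"))      # False
print(cert_name_matches("*.com", "service.example.com"))      # False
```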


In short, no.

a.com does not match b.a.com

Only if the certificate is *.a.com does it match b.a.com

b.a.com can have its own certificate.


Yes, you can have a single certificate for both a.com and b.a.com. You can also have it for a.com and <star>.a.com.

No, you can't get <star>.com. Typically, at least for known root CAs, you have to prove ownership of your top level domain. If you own a.com, they'll ask you to either put a file on a.com/random, or register random.a.com. If you try to do so with .com, you'll likely fail (but please feel free to try and prove me wrong!).

Yes, you can get a certificate for d.c.b.a.com; I don't see any reason why not if you own a.com. Unless your specific root CA has constraints on the depth of the domains.

Edit: replaced '*' with <star>
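The "put a file on a.com/random" proof described above is essentially the ACME HTTP-01 challenge. A sketch, with the network fetch stubbed out so it runs standalone (the token and key-authorization strings are made up):

```python
def http01_control_check(fetch, domain: str, token: str, expected: str) -> bool:
    """Sketch of an HTTP-01-style domain-control check: the CA asks the
    applicant to serve `expected` at a well-known path on the domain,
    then fetches it and compares. `fetch` is injected so this example
    needs no network."""
    url = f"http://{domain}/.well-known/acme-challenge/{token}"
    try:
        return fetch(url).strip() == expected
    except KeyError:  # the stub below raises KeyError for unknown URLs
        return False

# Stub "web": only a.com actually serves the challenge response.
PAGES = {"http://a.com/.well-known/acme-challenge/tok123": "tok123.keyauth"}
fetch = PAGES.__getitem__

print(http01_control_check(fetch, "a.com", "tok123", "tok123.keyauth"))        # True
print(http01_control_check(fetch, "evil.example", "tok123", "tok123.keyauth")) # False
```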


I thought this was the point of EV certs.


EV certificates are a compromise between the CAs and the browser vendors (today effectively all the OS vendors, plus Mozilla standing in for the free Unixes).

The CAs wanted a product with a distinct UI that could drive sales of a more expensive certificate.

The browsers wanted CAs to do a better job of validation.

So the agreement was: we'll add a fancy UI for these certificates if you promise to ensure all your certificates are properly validated.

But validating the shiny organisation data in the EV cert, while useful, is not a major priority for the browsers. A machine can't do anything with it. The browsers mostly care about validating the Fully Qualified Domain Name, which is done even in DV and OV certificates just the same.

Trying to solve security problems with EV means relying on fallible humans not to make mistakes. It won't work. If it makes you feel better to try, be my guest but the browser vendors have been there, tried that.


No, the only point of EV certs is for CAs to make more money.


Mostly, but to be fair they have also gotten quite a bit cheaper, now merely hundreds of dollars a year, down from thousands... which is a lot more than OV or DV, but not a huge bar to entry.


EV certs can be spoofed too: https://stripe.ian.sh/


I remember that people were warned to avoid doing sensitive stuff on websites without the padlock. I don't remember any attempt to suggest that the padlock implied some sort of validity.


Most users do not understand the “necessary but not sufficient” condition. They need a “if (condition) { SAFE; } else { NOT SAFE; }” test, not an endless checklist, and the security community has continuously failed to deliver on this.


I'm pretty sure that's actually impossible. If someone registers a domain and cert that's essentially a homoglyph attack against a common website, you're basically stuck with heuristics to detect it. You need a global database of targetable domains that supports similarity checking with arbitrary Unicode. You need some kind of fuzzy hash of the website to see whether the website your user is looking at is actually an imitation or just happens to legitimately have a similar name. It will be messy at best.


And yet such a feature would more or less solve the problem for a large majority of users. What you describe as terrible seems like a pretty good feature to me. If a URL is visually indistinguishable from amazon.com yet differs at the byte level, it's probably not legit.

If I were implementing it I would render the domain text and then check how significantly pixels differed from its nearest "known" domain. We used to do this with render tests where there was a bit of noise.

Don't let perfect be the enemy of good.


Whatever happened to the Web of Trust thing? We could have a curated one so that an extension can indicate:

- whether the domain is substantially similar to a trusted one
- recent data breaches
- whether the site has been known to sell data

Those could be indicated by different, intuitive colors:

- red: high likelihood of phishing/malware
- yellow: recent data breach; user intervention required, but the service itself isn't fraudulent
- green: reasonably safe
- green padlock: trusted

It would be awesome to get all major browser vendors on board to ship it by default, and make sure that data is never sent upstream (download a database).


I loved MyWot! I was one of the earlier users around ~2007 until 2009 or so. It helped teach intuition on sketchy, dangerous, and bloated web pages. The community was small and plenty of sites were unrated, though a surprising number still had ratings (and I was fairly active myself).

To answer your question: privacy addons started selling our data. I remember Adblock Plus added "Acceptable Ads" around 2012. MyWot redesigned in 2013. Times were changing. Sure enough, in 2016 they were found selling sensitive user data. It's not like this was a surprise, since it's the reason I left years ago.

These days, I'd rather reduce my browser dependency. I hope the community finds a way to filter the 1% of useful data on the internet into like a .txt file, or something that doesn't make me solve puzzles to grep.


There is something disingenuous and false about those who have been pushing SSL 'vehemently' on the pretext of concern for end-user privacy and surveillance.

It would be slightly more credible if the tech community's response, in both comment and action, to Snowden's and Assange's revelations and to the invasive surveillance by Google, Facebook, and others were not so embarrassingly inactive.

One can argue about degrees and about doing both, but in this case it seems all the 'concern' gets expended on SSL, leaving no energy for the far more pervasive SV surveillance culture the tech community props up without protest or even leaks.


Do you think things would be better if the effort used to make SSL prevalent had been used to make people stop using Google and Facebook? Let's Encrypt has received what, a million in funding? I doubt it's much more. Do you think putting that money into convincing people would lead to a larger change?


I'd be curious to know how many phishing sites support 2fa, i.e. can also phish time-based codes. If anyone from PhishLabs is reading... :)

Edit: grammar


Padlock? I thought it was a handbag.


TL;DR: "Padlock" means the usual icon promising the site has a valid TLS cert.

But well worth skimming through for the excellent Firefox about:config tweak "network.IDN_show_punycode".
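What that tweak surfaces can be reproduced with Python's codecs. "xn--pple-43d.com" is a frequently cited homograph: rendered as Unicode it looks like apple.com, but the first letter is Cyrillic.

```python
# Decode a punycode (IDN) hostname to see the characters it actually
# contains. The "idna" codec converts each xn-- label back to Unicode.
suspicious = "xn--pple-43d.com"
decoded = suspicious.encode("ascii").decode("idna")

print(decoded)                 # renders like apple.com
print(decoded == "apple.com")  # False: it only looks the same
print(hex(ord(decoded[0])))    # 0x430 -- CYRILLIC SMALL LETTER A, not 'a'
```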


Great so every 3 months when I have to manually renew all the LetsEncrypt certs I manage for clients I know it's giving them zero protection. Kinda reminds me of the British Government's decision to insert road humps into all the roads in the towns and cities of the land just to deter speeding drivers. All it produced was more work for garages mending damaged exhaust pipes.


Why haven't you automated it? It's not exactly hard to automate the renewal; that's the great thing about Let's Encrypt, and the whole point of the 3-month period is to encourage you to automate this stuff.


Not possible, because the domains are pointed at the webserver from a different host. It has to be done manually with:

`certbot certonly -d $1 -d www.$1 --manual --preferred-challenges dns-01`

The TXT records have to be edited manually, then checked with DNS Toolbox. Once they're visible, certbot can be allowed to proceed.


Any reason you couldn't use the http-01 challenge? I think there are thousands of people who are using Let's Encrypt and have automated it successfully. So, whatever you just said,

> all the LetsEncrypt certs I manage for clients

... if this contains some technical reason why it won't work, I think that's the problem.

But I'd be more inclined to believe you if you just told me that your clients periodically need your assistance for other things, but weren't going to call, because as every good salesperson knows, "if you don't call, they don't come"... and since they trust you already, this is a reliable door-opener that gets you back into their offices, where you get to bill for something even if this time they didn't need anything else. It gets you valuable face time and a pretty reliable, even if only nominal, payday.

If that's not it, then tell me that's not it, but... I think that's what you're doing. (And there's nothing wrong with that.)


> Great so every 3 months when I have to manually renew all the LetsEncrypt certs I manage for clients I know it's giving them zero protection.

I'm not sure I understand how a server certificate was supposed to provide protection against an entirely unrelated server hosting a phishing website.


Why are you manually renewing letsencrypt certs?

The point of the 3 month limit is to encourage you to set up automatic renewal.


That's only possible when the domain is hosted on the same network as the site. Doesn't apply in this case. See above.



