Incident report: Employee and customer account compromise (twilio.com)
207 points by wepple on Aug 8, 2022 | hide | past | favorite | 155 comments


A month ago I had a call with Twilio sales / onboarding to consider switching to them from our current IP phone provider. Ironically, I was unable to complete the process because my current work number, which is IP based, would not pass their "put in your number so we can make sure you're a human" verification test because IP numbers are not supported.

I do [edit: NOT] use my personal cell number for anything for security reasons, so even after their insistence that it was safe to use I refused and therefore I was unable to get past the first step of the signup process and went with another provider. After reading this I am feeling validated that I didn't cave.


Just ask twitter about it... They will tell you you're a bot unless you pony up a "real" phone number, and then they get breached and tell people: sorry you will get doxed now because we lost your data, but you shouldn't have used your real phone number[1].

[1] "If you operate a pseudonymous Twitter account, we understand the risks an incident like this can introduce and deeply regret that this happened. To keep your identity as veiled as possible, we recommend not adding a publicly known phone number or email address to your Twitter account."

https://privacy.twitter.com/en/blog/2022/an-issue-affecting-...


"They will tell you you're a bot unless you pony up a "real" phone number ..."

This is my 2FA mule:

https://kozubik.com/items/2famule/

There are others like it, but this one is mine.


While I consider this a very elegant solution, I find it hard to justify paying $8/month for a cellphone plan because others don't do their homework.


What I suggest, if you have the opportunity to go anywhere south of the USA, is buying a $2 Claro SIM and putting $2 on it at least once every 3 months or so to keep it active (you can even top it up over the internet as long as you can understand the Spanish website). That works out to roughly 66 cents a month ($7.92 a year) for an SMS verification mule, and it will receive texts anywhere with GSM band 1 coverage without having to activate roaming or anything expensive like that.


Is there a way to automate this? I would be afraid of forgetting to top it up and then having the whole thing fall apart. Personally, it is worth $5/mo to me (Tello's cheapest plan: no data, 500 voice minutes, unlimited text) to not have to deal with manual payments.


I dunno about automating this but I have a calendar for that


> I find it hard to justify paying $8/month for a cellphone plan because others don't do their homework.

Whenever I hear things like this, I like to remind people that they should be most concerned about outcomes, not about what satisfies their indignation. I agree that this situation is ridiculous, but being mad about it isn't going to change the state of the world. If you're genuinely worried about giving out your primary phone number for 2FA or account verification purposes, then this is a solution for you. Being pissed at companies for leaking data is not a solution.


You are absolutely right, and it's definitely a rational choice to give in and buy a dedicated cell phone plan for this purpose.

In my defense, I didn't argue that being pissed at some company would solve anything :)


Yes, but because so many others don't do their homework, you have to take it upon yourself to protect yourself. $8/month does seem like a stupid fee to pay, but for those willing to do it, it isn't that much. Those kinds of companies may even offer a pay-12-months-in-advance option at a lower rate.


Could do it cheaper with some more obscure MVNOs https://prepaidcompare.net/


My first thought was "paying for a second SIM just for that sounds like a pain", but then I read to the end and saw that you're using a service for $8/mo. Had no idea such a thing existed. I clicked through their plans, and it looks like you can reduce that down to $5/mo if you ditch the data plan and select 100 voice minutes / unlimited text. I figure for such a device, you can just connect it to wifi, and don't need a data plan.

I guess it's nice to have the backup, though, in case the local internet connection goes down. The 500MB/mo plan bumps it to just $6/mo.


This is clever. Don't mind if I borrow your idea. I wonder if I can automate the setup...


I'm a fan of US Mobile personally. They let you make a custom plan that has ONLY sms, so you don't pay for what you don't need. Can also add minutes at will for $1. Bonus of choosing the underlying carrier.


You can provision an API/IP-enabled "mobile" number through a provider like Twilio or its competitors, do everything in software (or skip custom software entirely and just enable SMS forwarding in the provider's UI), and pay fractions of a penny per SMS, plus a monthly fee for the number of around $1.50.

See my comments elsewhere in thread, but a "VOIP" number is a ridiculously tiny corner case in the world of telco, hence lack of support.


I wish this were the case - believe me, I have tried many different API provided phone number endpoints and they are discriminated against by banks, google, etc.

In fact, twilio even started offering special numbers that are flagged and "vouched for" that should be treated as non-VOIP ... but they aren't.

If you really need it to work, it needs to be a number from a physical SIM card.


Unfortunately, many services have started blocking any number not currently associated with AT&T, T-Mobile, Verizon or a major regional carrier. Even legitimate 'mobile' numbers from tiny carriers and MVNOs are getting blocked.


Nope. I had a "burner" VOIP number I used to give to retailers/etc. who wanted a phone number for no reason. I would migrate the number yearly. Unfortunately those are now rejected.


Are you not worried that a permanent, 24/7 connection to the charger will destroy the battery, make it blow up, swell, or something along those lines?


Modern phones have overcharge protection; as long as that's intact (no reason to think it wouldn't be), it'll be completely fine. The worst that might happen is your battery capacity degrading slightly faster than it would otherwise.


It's lithium-ion, not nickel-cadmium


$8/month is ridiculously high for what you're doing with that. You're effectively paying over a dollar per text message?


I kept contacting support every time my account got blocked for not providing a number, stating that I refused to provide a phone number. I think after the second or third time they decided I was human, and I haven't had a block since.


Recently I got an email receipt for event tickets that I didn't purchase. Looking at my credit card showed no transaction, so it was probably just a case of someone entering the wrong email address while checking out. The receipt happened to include a phone number, and I was about to text this person to tell them about the mistake when I realized that this would be a great way to insidiously associate an email address with a phone number. The iMessage hacks demonstrate that this would be a great vector for someone in possession of some 0days.

Realistically, was I being targeted? No. But it's a sad state of affairs that I have to even think about such things. I ended up looking up the company online and finding their contact email address to let them know about the mistake, so they could contact the person directly (which they did, and thanked me).


You mean you do not use your personal cell for those things.

Good for you. Your refusal and my refusal protect one another, refusing collectively is stronger than refusing alone.


I refuse to give my mobile number, or use my mobile number for anything. SMS auth, main contact, anything.

My SMS spam is almost non-existent, compared to others.

But one thing, beyond my desire to not give out my number... it is pointless regardless.

When I worked in an office, I had mobile access. At home, a bit rural, my 4G/5G access is nil. I just have no access.

My mobile forwards after 6 rings to my voip, so that works well. But for SMS auth? Hello! I cannot do that!

I have been an ebay customer for 21, yes 21 years. I can no longer log in, as they now insist I enter a mobile number to continue.

Gee thanks ebay.

(No, SMS won't work via voip, they check numbers, even ported numbers)

21 years. Thousands, even >$10k, spent reliably per year.

Gone as a customer.

Calling PayPal support results in people literally unable to understand... anything. They repeat a mantra off their screen without deviation. Many customer support people I spoke to were barely paying attention.

I really don't get it. SMS is barely secure to begin with.

They are willing to throw away accounts, just for pennies on tracking.

21 years.


I've had an ebay account nearly that long, but I hardly use ebay after it became whatever it became. To the point, ebay recently sent me an email saying that they were going to close my account for lack of use. se la vie


Just for future reference, the proper spelling is “C’est la vie” (French)


I actually googled that, and c'est was not returned; I got a bunch of se. whatchagonnado?


...stop using search engines that tailor themselves to idiot queries more and more every year?


Just FYI you don't need a mobile phone to have a mobile number:

* Numbers are generally designated when they're created by regulatory agencies as "landline", "mobile", "VOIP", etc.

* You can provision yourself a mobile number in a provider like Twilio or a competitor if you don't like them, and it will have all the capabilities you want

* Thanks to technological advancements, you can do things like overlay mobile capabilities on non-mobile numbers, but only if you're running all of the traffic through a company that has that tech

* "VOIP" numbers are a vanishingly small corner case for these companies and are a pain in the ass to support. They are a tiny portion of the numbering space and lots of smaller Telco companies just won't complete calls or texts to/from VOIP numbers. Companies like Twilio rely on those smaller companies for last-mile completion or origination of calls.

TL;DR Provision yourself a "mobile" number using an API-enabled SaaS telco platform, which you can use exactly how you want, no actual mobile phone needed, with all the capabilities you want. The "VOIP" number will only continue to cause you more and more issues over time.
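To make the TL;DR concrete, the relevant REST resources look roughly like this. This is a sketch only: the account SID, country, and webhook URL are placeholders, and "Mobile" inventory varies by country (the US mostly sells "Local" numbers instead).

```python
from urllib.parse import urlencode

API = "https://api.twilio.com/2010-04-01/Accounts/{sid}"

def search_url(sid: str, country: str = "GB") -> str:
    # The AvailablePhoneNumbers "Mobile" subresource lists numbers that the
    # regulator designated as mobile, i.e. the designation services check.
    return (f"{API.format(sid=sid)}/AvailablePhoneNumbers/"
            f"{country}/Mobile.json?SmsEnabled=true")

def purchase_body(number: str, sms_webhook: str) -> bytes:
    # POST this to {API}/IncomingPhoneNumbers.json to buy the number.
    # SmsUrl is the webhook Twilio hits on inbound SMS, i.e. the hook that
    # makes "SMS forwarding" possible.
    return urlencode({"PhoneNumber": number, "SmsUrl": sms_webhook}).encode()

print(search_url("ACxxxxxxxx"))
```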


voip.ms have SMS support: https://wiki.voip.ms/article/SMS

It does say that short number support is not guaranteed, though.


eBay and etc all ban these. It's not a "supported" issue.


And you know this for a fact? Because you tried it out yourself? Right?

Somehow, even though you say they ban them, I was able to submit it and get a verification message. Funny how that works, when someone just parrots the same thing without actually trying it out.

Proof: https://imgur.com/a/75yKkFl


Yes. I had to close my eBay account last year. They would not take my Google Fi or non-voip.ms VOIP number. Using my VOIP number also immediately got my Twitter account banned before making a single post.

It's annoying as hell because I would very much like to sell a thing or two and don't have many platform choices.

> The process to close the account may take up to 30 days from this notice. eBay will send a message to the email address registered on file, confirming that the account has been closed and, unless on hold, restricted, or suspended, that data associated with the account has been deleted.

I am curious if it is because they care slightly less if it's a CA VOIP number, if it came from a decent pool of numbers, or something else. Or if they will lock you out later and force you to contact the "risk assessment" team and use this as a datapoint.


> I am curious if it is because they care slightly less if it's a CA VOIP number, if it came from a decent pool of numbers, or something else. Or if they will lock you out later and force you to contact the "risk assessment" team and use this as a datapoint.

Interestingly it's an old landline number that got ported over to voip.ms, so you'd expect that if it were trusted due to reputation, it'd also only be accepted as a landline. Yet the validation experience has no problem with it.


I think this might be related to it being a port. My numbers from Twilio and other services are all on either Bandwidth, Onvoy, or Peerless and are refused (all of these are basically pure VOIP and have no landline or mobile reputation).


See my sibling comment about having a mobile number without a mobile phone


Good catch, edited in the *not*. I agree, if everyone refused then it would change. I asked them what would happen if I tried to use a Twilio number to verify, but they did not seem amused by the irony.


Similar experience when Twilio started requiring SMS 2FA. I offered to use U2F or TOTP, but no, it must be SMS.

I don’t have a work number and this is an absurd reason to get one, so we just canceled the account.


They've even started requiring this for their subsidiaries, like Sendgrid. It's very offputting.


That's strange. I have TOTP 2FA on my Twilio account. AFAIK there is no requirement for it to be SMS.


Twilio Support:

> We're aware that some users would much rather prefer not to use SMS for 2FA usage […] this is a known issue

> Please be noted that as of right now, you can only access [TOTP] after submitting a valid phone number. If possible, we'd recommend you just provide a personal phone number as a workaround

I’d already borrowed a coworker’s phone once for the initial account verification, but requiring it multiple times crossed a line.


> After reading this I am feeling validated that I didn't cave.

The /r/twitch subreddit is full of people who think you should absolutely give Twitch your phone number for verification (fun fact: Twitch doesn't even allow you to turn on 2FA until you give them your phone number). Even after Twitch got their data leaked a few days later, they'll reaffirm that you are an idiot for caring.

And those are users, not even companies.


While I fully approve of not using your personal phone (the company should issue you one), what is the security benefit of not using a personal phone? It's not really different from a company phone.


You can treat a company phone as adversarial (e.g. leaving it at home or on airplane mode) without impacting your normal life. An MDM solution can usually get real-time location information, not to mention potentially access other personal information. And your real number should not be associated with work things as a matter of course, in my opinion.


The sophistication of attacks is rising exponentially. "Spear phishing" has pretty much become the norm; all our new hires get an email from our "CEO" pretending they have to sign paperwork for their equity... It's part of our onboarding now to warn them that if they join our LinkedIn they will get multiple phishing attempts from the "founders".


We have a web page which lists every single tool/service we use, and we actively encourage people to check whether unfamiliar links are listed on that page.

I've considered creating a browser plugin which marks the address bar red/green when you're on URLs the company (dis)trusts. Seems like this would be something that should exist already.
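The core of such a plugin is just suffix-safe hostname matching against the company's published list; a sketch of that check (the domains are placeholders):

```python
from urllib.parse import urlsplit

# Placeholder allowlist: in the plugin this would be fetched from the
# company's tools page rather than hardcoded.
ALLOWLIST = {"example.com", "okta.example.com", "status.example.com"}

def is_trusted(url: str, allowlist=ALLOWLIST) -> bool:
    host = urlsplit(url).hostname or ""
    # Exact match or true subdomain. Plain substring/suffix checks would
    # wrongly trust lookalikes such as "evil-example.com".
    return any(host == d or host.endswith("." + d) for d in allowlist)

print(is_trusted("https://okta.example.com/login"))   # True  -> green bar
print(is_trusted("https://example-com.sso-fix.net"))  # False -> red bar
```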


All that is going on is that hackers are learning about how telephony works and weaponizing it. No different than social networks and email. It is time to move off of email and SMS as 2fa.


> Its part of our onboarding now to warn them that if they join our LinkedIn they will get multiple phishing attempts from the "founders".

Hm, I wonder what the point is of warning them beforehand?


I think OP means that new hires are receiving actual spearphishing emails from attackers outside the company, not that the company is testing them with fake spearphishing emails. (I misread it as the latter at first too)


I'm not sure if they ever did this during onboarding, but my former employer would regularly run fake spearphishing campaigns to raise awareness about spearphishing.

The number of people who regularly fell for it was worrisome. Falling for it meant auto-enrollment in a mandatory security awareness training. Failing to take the training would result in deactivation of the individual's network credentials.

I don't know if these campaigns are actually effective at changing people's behavior, but they certainly revealed how effective spearphishing is.


Ha - thanks for clarifying - now that I read it over again... you're probably right, and it even makes sense to do so...


The main problem is they are real attacks that 100% happen as soon as they update their LinkedIn. They aren't tests like a KnowBe4 campaign.


These screenshots don’t look very sophisticated. Does okta even have a scheduling solution? Who was clicking these links, engineers? Is it normal for Twilio to send corporate communications like this over text (at least without a good reason & an email warning it’s coming and verifiable details)?

I’m also curious how much customer data an attacker should have been able to get into from outside. Is there no 2fa-auth vpn needed to get in? Or is there just lots of customer data hanging out in email/whatever?


SMS- or TOTP-based 2FA doesn't save you here. The phishing site is designed to look like the real login form. The victim enters their username and password, and the phishing site forwards the username and password to the real site's login form, which triggers the "real" 2FA process. Victim enters a TOTP code (or SMS code) into the fake site, and boom, the attackers now have a valid username, password, and 2FA code, as long as they use it quickly enough. Assuming they do, then they have a valid session cookie that they can use wherever they like, which can often be refreshed before expiration without re-authing.
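That relay works because a TOTP code is derived purely from the shared secret and the clock; nothing in it is bound to the site asking for it. A minimal RFC 6238 sketch (using the RFC's published test secret) makes this concrete:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: code = truncated HMAC(secret, floor(time / step)).
    # Note what's NOT in here: the origin of the site requesting the code.
    # A code typed into a phishing page is just as valid on the real one.
    counter = for_time // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 4226/6238 test secret; at t=59s the counter is 1.
print(totp(b"12345678901234567890", 59))  # -> 287082
```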

The only 2FA that protects against this is a hardware token, like a Yubikey, if used in U2F/FIDO2 mode. Twilio does use Yubikeys for some roles and access types, but not all, and presumably there was enough sensitive data that was only gated by TOTP 2FA.

I think in this day and age, if I were running a company infosec department, I would mandate U2F/FIDO2 for access to every system, and try to structure things so employees don't need to access company systems on their mobile device. (Yes, I know it's possible to do hardware 2FA on mobile, but I feel like it'd create a large IT support burden since it's not always so straightforward.) And if there are company services that aren't suitable for hardware 2FA, they need to be rare exceptions that are granted, and need to be firewalled off from the rest of the company.


VPNs are so 2010s. They just add friction and a false sense of security (the way they're traditionally implemented - VPN is "trusted" and everything behind is open access), and an amazing point of entry for attackers.

A "zero trust" approach where nothing is trusted and you pass through an SSO with MFA like Okta is better in that Okta do a better job in security than "random network team running an unpatched firewall from vendor X" do on average. And each service has an auth layer and doesn't implicitly trust some IPs are good.

In any case, a phishing that captures login+password+mfa means game over in both scenarios.


100000x Incorrect. VPNs add a layer of security at the network level. You do not rely SOLELY on a VPN. You add it to drop network traffic that does not belong, decreasing the noise level in other logs, highlighting threats elsewhere.

This is one of the worst pieces of advice to be repeated ad nauseam.


The thing is that VPNs are usually deployed in the way I described: no ACLs, everyone on the VPN is "trusted" regardless of position and role, with the services behind it often relying on the VPN to secure them.


“Trusted” does not mean authorized. I have never seen any (corporate) environment that uses a VPN and does not require some form of authentication & authorization on top. “Trusted” status often grants those users only read-only access to internal communication or documentation.


Do you have any sources for this claim? Are there any stats about VPN deployments anywhere? I have never seen a VPN deployed in this way. Who are the companies that do this? Mom and pop shops with a two employees? Or 1990s corporations which were blissfully unaware of online threats?


Utilizing VPN as _a_ layer of network security with additional systems access security is something we learned in the old pre-2020s era, and I’ve never seen a cavalier system like you’ve described deployed. But maybe it’s just because I’m not up to date on “state-of-the-art” 2020s systems.


In practice VPNs are a very visible layer that people will tend to rely on when they decide what security measures to implement. They are very clunky, so people end up assuming that they must provide quite a lot of security, to be worth enduring.


Not sure where you're getting this idea. I've never been on a VPN where services inside the VPN all just assumed I was allowed to be there. Internal services still at least required a username and password, and some would require 2FA as well.

I'm sure there are some VPNs that are implemented as poorly as you describe, but I'm not sure that's the common case like you seem to think.


> VPNs add a layer of security at the network level

Which you get from TLS. Strong authentication and authorization per-web-application using short-lived tokens through OAuth/OIDC is about a billion times more robust than any VPN network security.

90% of what VPNs are useful for is connecting you to a non-routable network.
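The "short-lived tokens" half of that is easy to sketch: a toy HS256-style bearer token whose exp claim forces periodic re-authentication. This is a toy scheme for illustration; real systems use standard JWT/OIDC libraries.

```python
import base64
import hashlib
import hmac
import json
import time

def sign(claims: dict, key: bytes) -> str:
    # Toy bearer token: base64(claims) + HMAC tag. A stolen token ages out
    # at `exp`, unlike "on the VPN == trusted", which lasts as long as the
    # tunnel does.
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{tag}"

def verify(token, key, now=None):
    body, tag = token.rsplit(".", 1)
    good = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, good):
        return None  # tampered
    claims = json.loads(base64.urlsafe_b64decode(body))
    if (now if now is not None else time.time()) >= claims["exp"]:
        return None  # expired: re-auth (with MFA) required
    return claims
```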


This is also very, very incorrect! Think about it: how come people in Iran/China use VPNs to access the outside world? If TLS were sufficient for privacy, your claim would be correct, but this is very far from reality.

TLS is another _layer_ of security, but it still reveals who you are talking to via SNI and DNS queries, and worse: how often you are talking to them.


You can use TLS to secure a VPN connection from state actors. You can also use TLS 1.3 to avoid SNI, and DNS over HTTPS (or just use DNS over a tunneled connection). But I'm talking about how you don't need a VPN for a business purpose.


That's kinda a different thing, though. People using VPNs to circumvent national internet restrictions is one thing. Securing a corporate network is another.


Of course they're sophisticated if you don't regularly review SSL trust chains and secure DNS. And "funny looking characters" and all that. Which is most people on this planet. And Twilio had a lot of business and customer facing non-engineer users.

It's another story that will drum up support for another integrated authentication system. I say this as someone who was recently targeted in an attack.

It's peek-a-boo, I get that, but damn is it frustrating.

But they'll get what they want. They'll get their secure system.


A recent article[0] on HN about on-call culture called out Twilio specifically for having overworked and under compensated employees as well as the subsequent attrition those cause.

I wonder how those factors impact the likelihood that an employee might be susceptible to phishing.

0: https://news.ycombinator.com/item?id=32378752


So what exactly was breached related to customers....

Even in the Techcrunch article, they have not specified anything. Are these customers other businesses or regular users using Twilio apps like Authy?

If the 'hackers' got too deep into the systems they might create a big mess. Authy 2FA tokens are backed up on Twilio servers (opt-in), unlike Aegis/andOTP. If you lose access to your offline 2FA tokens (phone stolen/lost), Authy can re-send these tokens from their servers. Users need to wait 24-48 hours for the whole recovery process to be over.

https://support.authy.com/hc/en-us/articles/115012672088-Res...

https://authy.com/phones/reset/?proceed=true


The actual TOTP secrets are (or should be) encrypted with the backup password that the user chose, so access to the backups wouldn't automatically result in compromised TOTP secrets unless the backup password was weak.
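A sketch of why the backup password's strength matters so much (an illustrative PBKDF2 scheme, not Authy's actual format): whoever steals the encrypted backups can brute-force the password-derived key offline.

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 1_000) -> bytes:
    # Password-derived encryption key. Real deployments use far higher
    # iteration counts (or scrypt/argon2); this is NOT Authy's format.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
key = derive_key("1234", salt)  # a weak 4-digit "backup password"

# Offline attack: whoever holds the stolen backup (and its salt) can try
# every PIN. 10,000 PBKDF2 runs take about a second on a laptop.
for i in range(10_000):
    if derive_key(f"{i:04d}", salt) == key:
        print("recovered backup password:", f"{i:04d}")
        break
```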


As far as I know, the backup password is not needed for recovery using this method. The backup password is used when you have 'Multi-device' enabled in settings and install Authy on a second device.

But for this method, the phone number and email address linked to your Authy account are needed. The process is started by an old-school SMS-based OTP sent to the linked number. You then have to cancel it via an email sent to you, if you think someone is doing it maliciously without your consent.


> If you are not contacted by Twilio, then it means we have no evidence that your account was impacted by this attack.

aka... There were a bunch of ways employees could access the customer data unaudited, so we don't have evidence of that. Rather than say "we don't believe you were impacted in this attack", we're using weasel words to set your mind at rest, when we really have no idea.


This is why I have always wondered about the secondary domains companies have for major incidents or auth or internal employees. If you train people to think that several can exist, few will question another appearing.

Shouldn’t having multiple domains for different use cases be considered a security risk going forward? Is it already considered a bad practice?


Given Twilio is an SMS company it seems like they would/should have an “official phone number” that any company related SMSes would come from. That would make this kind of attack harder.

(Maybe they already have this, the article doesn’t say)

Very unfortunate incident, and rather scary if this is targeting multiple companies.


Twilio does not use SMS 2FA for access to internal systems. It's all either Yubikey or TOTP apps. I believe SMS 2FA was disabled several years ago.

So, really, Twilio employees should be trained that the company will never send SMSes to employees for these sorts of purposes. But it only takes a few people to get fooled when their guard is down.

That's the difficult thing about defending against security-related attacks: the defender needs to be perfect 100% of the time, but the attacker only needs to get lucky once.

(Disclosure: I'm a former employee who has been getting these phishing texts, the most recent of which was yesterday... not that I have access to any systems anymore. I guess they are going off leaked employee lists that are at least 5 months out of date.)


The sender number/name for SMS can be easily changed and should not be relied on for _anything_.

Twilio even offers that as a service (as do most bulk-SMS providers): https://support.twilio.com/hc/en-us/articles/223181348-Alpha...


That greatly depends on telco policies at the destination.

In the US, alpha senders aren't allowed. Shortcode (5 or 6 digit numeric senders) spoofing is highly frowned upon to the point where you can't send from the same shortcode with multiple aggregators.

Other countries vary between anything goes, aggregator takes anything in the sender field, some high profile names are checked, or all sender names need to be run through the telecoms regulator and validated by the aggregator and cell carrier.

Of course, there's (usually?) no customer id transparency between the carrier and the aggregator, so if there's a misconfiguration at the aggregator, one customer may be able to send with a registered sender of another customer.


Only trust corp comms for corp comms (Gmail/Outlook, Slack, Teams). No SMS!

(Twilio also owns Authy for 2FA)


Organizations should definitely think very carefully about creating domains they expect others to trust. @SwiftOnSecurity used [0] a turn of phrase recently that I had not yet encountered and I think it is a great way to put it: "strong DNS governance".

0: https://twitter.com/SwiftOnSecurity/status/15535388679192616...


Good idea. Seems like you could use DNS or .well-known to provide canonical domains that could be checked by devices/email clients.


So I guess the real story is that Twilio does not require second-factor authentication for employees that have access to user data?

It seems like hardware-based 2FA with something like FIDO/U2F would have easily stopped this attack and generally make stealing credentials really difficult (it's still technically possible if you're able to spoof the target domain, but that's much harder than registering a fake domain).

Kind of weird that they only mention "security training" as a remedy, which is obviously never going to be 100% effective, instead of a more effective technical solution.
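For contrast, the reason FIDO/U2F stops this cold is origin binding: the browser writes the page origin into clientDataJSON, which is covered by the authenticator's signature, so the relying party can reject assertions minted on a look-alike domain without any user vigilance. The check is roughly this (the origins are made up for illustration):

```python
import base64
import json

def encode(client_data: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(client_data).encode()).decode()

def origin_ok(client_data_b64: str, expected_origin: str) -> bool:
    # The relying party decodes clientDataJSON (covered by the
    # authenticator's signature) and rejects any origin it doesn't expect.
    pad = "=" * (-len(client_data_b64) % 4)
    data = json.loads(base64.urlsafe_b64decode(client_data_b64 + pad))
    return data.get("origin") == expected_origin

# Hypothetical origins: the real login page vs. a lookalike phishing domain.
real = encode({"type": "webauthn.get", "origin": "https://sso.example.com"})
fake = encode({"type": "webauthn.get", "origin": "https://sso-example.com"})
print(origin_ok(real, "https://sso.example.com"))  # True
print(origin_ok(fake, "https://sso.example.com"))  # False
```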


It seems likely to me that they do, they use Authy, and the employees that fell for the attack just supplied the codes they needed. Each time you get in, you can muddle around and figure out what is needed for the next time you send out an attack until you get where you need to.

The real story is that Twilio, likely due to one of their largest products being Authy, still refuses to move away from SMS 2FA. They need to finally join the rest of the industry and move on to webauthn, internally and externally for their customers.


I can't be the only one laughing that Twilio employees were phished with SMS messages.


It's funny because they open by calling it a "sophisticated" attack... just not true.


I guess BigCorp will call anything that compromises them "sophisticated". They wouldn't admit being compromised by a technologically simple decades-old attack, would they?


An attack is sophisticated when you don't want to get sued.


Kudos to them for showing the kind of messages that caught their own employees out - though to me they seem jarring and unsophisticated. So what other texts must Twilio send their own staff that make these ones feel legit? And what systems do they have online allowing for stolen employee credentials to be tested and used?


> Kudos to them for showing the kind of messages that caught their own employees out - though to me they seem jarring and unsophisticated.

Agreed. A former colleague pointed out that company comms of this sort would never have exclamation marks in them. They don't look as amateurish as some of the poorly-spelled/poorly-punctuated/poor-grammar phishing attempts I've seen, but they don't look particularly legitimate to me either.

But remember that, in a company of over 8,000 people, many of them very non-technical (tech companies are staffed by people of all levels of technical proficiency), all it takes is one or two or three people to fall for it. And maybe they were tired, or had a beer or two in them, or something like that.


Send thousands of these out. You just need one person to be on auto-pilot mode to fall for it.


Ah I'm sure, but their incident report mentions that an attacker has a list of employee names & numbers like that's top secret, but the question of how an attacker could then test stolen credentials seems far more interesting.


Not sure what you mean. These attacks work by getting a victim to click on a URL that leads to a website that looks exactly like the company's own authentication site (Twilio uses Okta, so this is easy to mock up). They enter their username and password, and the fake site forwards the entered credentials to the real site. If the real site then transitions to a 2FA prompt, the fake site will also do that. The victim then enters their 2FA code, the fake site forwards the 2FA code to the real site, and then is rewarded with a valid session cookie. The fake site can then even redirect to the real site so the victim doesn't realize they've been duped.

The entire attack process includes testing the credentials as a necessary part of getting the 2FA code.


A reasonable (& previously common) defence against this was an IT-provided VPN setup, with a certificate. My old company didn't put employee endpoints on the public internet, so they couldn't be exploited without a working VPN connection. Asking victims to upload their VPN certificate isn't impossible, but raises the difficulty of the attack.


Twilio's VPN does use a certificate. I have no direct insider knowledge related to this specific incident, but I suspect the VPN wasn't breached. My guess is someone phished their way in through Okta SSO, and then was able to access something like Salesforce, or some other third-party hosted app that Twilio uses.


> Additionally, the threat actors seemed to have sophisticated abilities to match employee names from sources with their phone numbers.

Exactly, that is absolutely not sophisticated - it's often as simple as "scraping LinkedIn".


I wouldn't be surprised if the fraudster used Twilio's own platform to send the phishing SMS to its own employees.

If that is in fact the case, I wonder if Twilio bears even more liability for such customer damages. Note: additionally, Twilio uses its own Authy for MFA.


This isn't the case. I'm a former employee, and received these phishing texts after I left the company (I just got two on Sunday, and I've been gone over 5 months). I used Twilio's number lookup API[0], and they're not Twilio numbers. They were both T-Mobile numbers, and one of them even had a caller name that looked like the number was associated with a retail consumer SIM card. Possibly a cloned or stolen SIM?

Twilio has really stepped up KYC efforts over the past couple years. I don't think an anonymous attacker could send this volume of messages through the platform without getting noticed. And if they were using Twilio to send these messages, it would have been trivial to block them.

[0] https://www.twilio.com/docs/lookup/api
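For reference, a minimal sketch of that kind of check against Twilio's Lookup v1 API (the account SID, auth token, and example number here are placeholders, not anything from the incident):

```python
import base64
import json
import urllib.parse
import urllib.request

LOOKUP_BASE = "https://lookups.twilio.com/v1/PhoneNumbers/"

def build_lookup_url(number: str) -> str:
    # number should be E.164-formatted, e.g. "+14155550100"
    return LOOKUP_BASE + urllib.parse.quote(number) + "?Type=carrier"

def lookup_carrier(number: str, account_sid: str, auth_token: str):
    """Return (line_type, carrier_name), e.g. ("mobile", "T-Mobile USA, Inc.")."""
    req = urllib.request.Request(build_lookup_url(number))
    creds = base64.b64encode(f"{account_sid}:{auth_token}".encode()).decode()
    req.add_header("Authorization", "Basic " + creds)
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # With Type=carrier, the response includes carrier.type
    # ("mobile" / "landline" / "voip") and carrier.name
    carrier = data["carrier"]
    return carrier["type"], carrier["name"]
```

A `"voip"` line type would have pointed back at a Twilio-style provider; `"mobile"` plus a consumer caller name is what's consistent with the cloned/stolen SIM theory above.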


This is the exact same phishing attack that I see at many retail stores I consult for, and I have yet to see any of them fall for it. Smaller surface, sure, but even people loosely familiar with computers are skeptical of those emails/SMSes.


I'm a former Twilio employee, and even I've been getting slammed by these phishing attempts over the past couple months. Remember that this is a company of more than 8,000 people: it only takes one person to let down their guard. Maybe they're tired, maybe it's been a long day, maybe they're just the weakest link.


Is your company's information that interesting? We had to prepare for an audit, and concluded that our info is simply not worth the effort. People get training in noticing phishing emails, but the only one we've ever had was a cheap shot at getting someone to transfer money. We may have been wrong though, and overlooked something.


I find it concerning that after getting pwned through a relatively lazy phish, their reaction is to double down on training instead of implementing unphishable second factor auth like they should have from the beginning.

For everyone: If you have more than a handful of employees, one of them is guaranteed to be phished. If that can result in a meaningful compromise, work on your security until it doesn't. Then work on securing against attackers tricking that employee into downloading and running malware, because guess what, that's also going to happen.


If your company is hit by something like this, how sure are you that you'll find out?

I suspect most companies would never know that someone was sharing an employee's login and downloading data...

Well if you want slightly more confidence, use my tool to catch thieves...[1] Turns out most thieves are rather tempted by some cryptocurrency that appears discarded and up for grabs.

[1]: https://serverthiefbait.com/


It seems like a cryptocurrency wallet would only attract a very specific kind of thief and would only apply to cases where you'd normally expect crypto wallets to be present.

Given the "sophistication" of some of these attacks I would be surprised if the people behind it would even recognize a crypto wallet file name, let alone know how to import it and actually steal the coins.

I remember seeing another service that provides usernames, emails or links to seed into your DB and alerts you when the link is requested or spam starts arriving at that address. I think that would be more effective.


I send a survey to each user who has seen their wallet drained. Many are just testing the service (ie. they drain the wallet themselves), but of the rest, most are unaware via other methods that their data has been accessed.

The usual caveats of data from surveys apply...


How many "hacks" are really computer hacks at all?


Most aren't; we basically figured out a fair bit of security in the 90s (barring increases in computational power, which have really just required increasing key lengths). It's almost always been implementation problems.


Bad ideas can be replaced with better concepts. Implementation flaws can be patched.

But the biggest problem is still the human factor aka "you can't patch stupidity".


I guess that's why the title or article never mentions the word "hack".


> Additionally, the threat actors seemed to have sophisticated abilities to match employee names from sources with their phone numbers.

Wow, what amazing technology could this be! Maybe it's one of those shitty websites or apps that collect your entire phone book and leave it on some Mongo NoSQL public database? Maybe it's Twitter, which pinkie-promised to not store phone numbers, then stored and lost them?


"How I used Twilio to hack Twilio"


>> Additionally, the threat actors seemed to have sophisticated abilities to match employee names from sources with their phone numbers.

Is this another case of caller-ID spoofing? That needs to be illegal/impossible.


No I believe what they meant was the attackers knew that if they wanted to spear-phish Joe Smith from IT they had a way to correctly get Joe Smith’s phone number to send a text to him, so things could be personalized and seem more legit.


tl;dr: Social engineering that sent SMS with links to Twilio employees, and a few fell for it. Time for more security training for employees? Weird that this happened to a company like Twilio, which sells an SMS API.


I don't think "more security training" will get us anywhere - we've been trying it for a long while. This is a technical problem and admits technical solutions. The most obvious, simple, and straightforward solution here is logins with security keys (WebAuthn/FIDO), which cannot be phished. Perhaps Twilio fell for this precisely because they don't want to admit to themselves that the SMS verification product they sell (https://www.twilio.com/verify) is an obsolete joke compared to security keys.

If you don't want to use that, for some reason, or if you want additional protection, other solutions include using dedicated machines (not personal cell phones and certainly not whatever devices the attacker was using) ideally with hardware-locked keys (TPMs etc.) to access the corporate network or at least to access sensitive systems like customer data, or giving people separate privileged accounts that they don't use for day-to-day access, or establishing a two-person rule for logging into sensitive systems (so two people with the same access need to get successfully phished at the same time), or setting up some real-time auditing of changes (e.g., a Slack channel gets notified when people manually log into prod).
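That last idea is cheap to wire up. A hypothetical sketch using a Slack incoming webhook (the webhook URL and event shape are made up for illustration, not anything Twilio actually runs):

```python
import json
import urllib.request

# Placeholder webhook URL -- an assumption for illustration, not a real endpoint.
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

def build_alert(username: str, source_ip: str) -> dict:
    """Format a manual-prod-login event as a Slack incoming-webhook payload."""
    return {"text": f":rotating_light: manual prod login: {username} from {source_ip}"}

def notify_prod_login(username: str, source_ip: str) -> None:
    # POST the JSON payload to the webhook; call this from your auth hook.
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps(build_alert(username, source_ip)).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

It doesn't stop a phish by itself, but it turns a silent credential replay into something a teammate can notice within minutes.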


Twilio must use MFA (Authy)? If so, the "sophisticated" part of this attack may have included a breach of the MFA or a way to get around it?


I don’t get it. Were credentials entered on third-party sites and then used to log on from elsewhere?

Would it be stupid to force mTLS for employees?


This isn't just Twilio-proper right? It would affect Sendgrid and whatever else they own?


Hey Twilio, maybe now you'll implement webauthn for both your internal systems and for your customers?

It's pretty frustrating that there's a well-known technical solution that prevents this type of attack from being feasible, and companies like Twilio simply have not prioritized it yet. Replacing SMS and push-based 2FA with WebAuthn is the best bang-for-your-buck security upgrade available to companies right now.

EDIT a bit later:

> We have reemphasized our security training to ensure employees are on high alert for social engineering attacks, and have issued security advisories on the specific tactics being utilized by malicious actors since they first started to appear several weeks ago. We have also instituted additional mandatory awareness training on social engineering attacks in recent weeks. Separately, we are examining additional technical precautions as the investigation progresses.

Security training is ineffective against phishing attacks like this. Please stop wasting your money and your employees time; implement the technical solution that nullifies the attack.


"It's pretty frustrating that there's a well-known technical solution that prevents this type of attack from being feasible, and companies like Twilio simply have not prioritized it yet."

I understand your frustration but you are missing the point.

2FA with a valid (non-VOIP) SIM card is not for your security. They tell you it is, and they dress up the process as if it is, but they are lying to you.

Twilio et al. have a brutal, unrelenting scam/spam problem and they have no solution for it. They have nothing. If they had a solution they would have deployed it long ago.

Instead, they do what they can: throw sand in the gears and slow down the bad actors with ridiculous, obviously absurd (put in any phone number we have never seen before to prove that it's you) mechanisms and hope the real customer base doesn't balk.


I have worked in this space - aside from WebAuthn, the spam/scam problem is tough for a SaaS Telco company because anti-spam/scam is directly at odds with a low-friction self-serve experience.

You also have to realize that Twilio is not a Telco; they ride on top of big Telco company infrastructure and ultimately have to answer to big Telco, and big Telco is very risk averse and generally has to answer to regulators. So Twilio doesn't want spammers either, because they get dinged, blocked, or marked as spam by the Telcos.

Ultimately there's probably more that the underlying Telco infrastructure could do to support anti-spam measures, but these companies are dinosaurs that move very slowly. It's easy to deride them from the position of enlightened SaaS company employees, but they're the ones in the capital-intensive business of maintaining worldwide networks of copper wires in the ground, physical switches, systems designed in the 1980s or the 1950s and complying with incredibly complex and outdated regulatory structures. It's the "legacy system" of a developer's nightmare to the 100th power.

Having a good flow for self-serve customers to sign up and get going is the cheapest way to get new business, and developers will not even look twice at you if it's hard to get going in a self-service manner on your platform.

However, it's exactly that low friction that gets spammers going easily.

So what do you do?

You try to come up with heuristics and even ML models to identify bad actors early. It's an ongoing constant battle, a game of whack-a-mole where the moles have basically zero cost overhead, no legal entity associated with them, and can just keep popping up over and over and over forever.

So you're staffing a fraud detection team of engineers, you have lawyers and finance people on this, you have customer service for customers who get caught as a false positive - it's all an extremely expensive pain in the ass.


They can require a phone number for signup while still supporting WebAuthn.


"Twilio, et. al, have a brutal, unrelenting scam/spam problem and they have no solution for it. They have nothing. If they had a solution they would have deployed it long ago."

That's an interesting point I hadn't heard that before. Don't banks essentially do some form of this with KYC checks?

Would expect being smart about bringing in some components of that process would reduce spam while not introducing too much user friction.


Bank KYC is different because all the banks have to do it, so you can't really avoid it (although maybe some are worse than others). Also, banking usually has more switching costs; people don't usually sign up for a bank, run a couple transactions and leave for another. Twilio is one of a large number of players in this market, spending a lot of effort on KYC likely turns off some customers and may be overinvesting for tire kickers.


Isn’t SMS based 2FA responsible for a lot of Twilio’s revenue?


SMS campaigns, both targeted and blanket, must simply dwarf any 2FA usage. In just one role I saw SMS blasts with tens of thousands of recipients. SMS is also fairly heavily used for non-secure notifications (your bill is due / payment received / your password changed, etc.) IME.


I wonder how often SMS 2FA serves as the entry point for more business. Like, devs choosing Twilio to try it out for low volume work before expanding into large campaigns.


It's kind of both. SMS 2FA is more likely to serve as an onramp for "good customers" who are developers actually building real applications using your services, rather than people who are mass-blasting borderline spammy marketing texts.

Traffic that gets identified as marketing traffic as opposed to traffic that will involve a human responding (like a chat bot to schedule with your doctor's office) actually costs Twilio much more to send because the Telco carriers charge more because they have to put more into mitigating the risk of getting blocked and sued for spamming.

So the "good customers" are lower volume but higher margin, and generally more likely to grow at a sustainable pace and not disappear off the map, as they have a lot of code around using your APIs, not just "Send a kajillion messages". Those "marketing" customers will jump off your platform in the blink of an eye if someone offers them cheaper prices and less oversight of their spammy practices.


I guess dogfooding isn't a great idea if the kibble is rancid.


> Security training is ineffective against phishing attacks like this.

I wouldn’t say that. There’s a significant effect of training on the number of people that click. The big problem is that the attackers have to only get it right the one time.


I think that's exactly the grandparent's point: any mitigation that relies on making 100% of people 100% perfect, 100% of the time, is not going to be effective.


Unless security training takes the click rate to zero it is ineffective in protecting your organization from phishing attacks.


Nothing can take the risk to zero. The goal is to take steps to minimize that risk. The average “click rate” for phishing emails is something like 18% (don’t quote me on that exactly) and if you can institute training that brings it lower then you are working to minimize risk. You should do other things as well to further reduce risk, but training is one tool in the toolbox to help.


That is incorrect. Mandatory WebAuthn eliminates this risk. Stop wasting your time with inferior alternatives.


How much communication would shift to informal methods if the barrier to communicating over approved channels got too big?


It's important to note that Twilio can't sell WebAuthn as a service, at least not until they try to compete with Auth0 or Okta B2C directly. If they provide it on their own accounts instead of dogfooding their 2FA products, it probably doesn't inspire their customers with confidence in Twilio's product.


Dear Twilio:

I wrote you an email a year ago about how the Authy app is dumb and SMS "2FA" is NOT a second factor! To reiterate: SMS is not authenticated, it can be spoofed (using your platform), there are no delivery guarantees, no read receipts, it's not encrypted... and most importantly: it defers security to phone carriers, who have the security posture of an air seal made of swiss cheese.

Once again: Dump the Authy app. Promote actual 2FA technology like U2F, now WebAuthn, hardware keys, OR _actual_ TOTP which cannot be reset using an SMS. My 67-year-old mother, grandmother of 12, knows how to use hardware keys and TOTP. I'm embarrassed for you guys that you still can't figure it out.

Sincerely,

-The I told you so department


I agree with most of what you said, but just note that TOTP does not protect against this sort of attack. Only U2F/FIDO2 does.


What's wrong with the Authy app?


Your security tokens can be reset using SMS.


I have only ever used the Authy app as a generic place to centralize my TOTP secrets from various services.

When you say tokens can be reset using SMS, which tokens are you referring to? Is this reset something specific to Twilio?

I can't imagine a scenario where SMS would enable the resetting of other 3rd party TOTP secrets, but I may not be understanding your comment.


Your TOTP accounts can be moved between devices using Authy, correct? That means Authy has access to the plaintext secrets at some point. So mistake #1: your "secrets" aren't secret, they are merely held in escrow. If they "were evil", or perhaps a state actor got involved, or you pissed off a rogue employee at Authy, they could absolutely leak your credentials.

#2: The authy app lets you recover your account using SMS. So anyone that wants to pull off a simjack attack on you can login to your Authy as you and obtain the keys to the kingdom.

The entire point of TOTP is the "Secret" is held locally in an oracle. If you break that constraint, you've broken the security of the protocol.
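For context on why the secret matters so much: the whole of TOTP is just a short HMAC over a time-step counter, so anyone holding the secret can mint your codes forever. A minimal RFC 6238 sketch (not Authy's implementation, just the standard algorithm):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps since the epoch."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Nothing in there depends on the device, which is exactly the point being made above: whoever controls the restore path controls the codes.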


#2: I believe they encrypt the backed-up tokens locally with a user-provided password [1]. The same password must be used to restore the backup. A malicious agent that "clones" your SIM card will be able to obtain only an encrypted copy of your token data. This seems secure enough to me, but maybe I'm missing something.

[1] https://authy.com/blog/how-the-authy-two-factor-backups-work...


Authy has access to the secrets if someone enables cloud backups, but they're encrypted with a user-provided key that must be re-entered upon syncing to a new device.

I'm aware of the simjack risk, but that would require:

1. That I'm using cloud backups

2. That the attacker has also obtained my backup key

None of this seems fair to summarize as:

> Your security tokens can be reset using SMS.

I'm not claiming Authy is perfect, but it seems to use a reasonable approach for people who don't fall under the "high value target" category.


Not a huge deal, but I have accounts linked to the Authy app which I no longer have access to because they're from old jobs (like a GitHub or GSuite account), and every party involved says they cannot remove the accounts from the app, so I just have to look at these nonexistent accounts for eternity. (I just switched to one of the other 19 2FA apps that have a usable interface and functionality.)


Absolutely wild that apparently the Twilio team doesn't require any sort of MFA to access internal data, and even wilder that this incident report doesn't even mention this.


Who says an attacker's fake auth website cannot ask for the MFA code behind the scenes and supply it? The problem is that no one, especially Twilio employees, should ever click or trust any link they receive, from a trusted or untrusted source. They should use the links they already have bookmarked.


For properly implemented MFA (FIDO/U2F tokens) an attacker-spoofed website can't ask for the code behind the scenes - i.e. they can ask, but they'll get a code that won't work on the proper site.


Not sure about MFA with a USB key, but for the sake of argument, if they are using app-based MFA such as their own Authy, I would think a headless browser in the backend of the fake site accessing the proper site on behalf of the real user would do the trick. It requests the code for the user on the real site, the user replies on the fake site, and the fake site supplies the real code to the real site. The only thing needed is that the user receives, and supplies to the fake site, the code that was requested on their behalf.


> properly implemented MFA (FIDO/U2F tokens)

Is what you're responding to, and such an attack cannot work with them. The parent comment already clearly understands the flaws of Authy, you don't need to talk through it.

I'll try to explain the key difference between totp and webauthn style flows, as it relates to security here.

Conceptually, you can think of it as the hardware token (the yubikey or whatever) gets the site domain name the user is on from a trusted source (the browser), and then sends back a secret that is specific to that hardware device and domain. If they're on the real site, the token sends the right secret, but the attacker can't intercept it since it's sent directly between the local browser and usb device. If they're on a fake site, the secret will only work for that fake domain, not the real one, so the attacker can't forward it and have it work.

Many large tech companies use hardware tokens of this sort now, and for a company of twilio's size it's quite reasonable to expect that they provide such a token to employees and mandate using it when accessing customer data.
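Concretely, the browser serializes the origin into the signed `clientDataJSON` blob, and the relying party rejects any response minted on the wrong domain. A simplified sketch of that one server-side check (the real WebAuthn ceremony also verifies the signature and authenticator data; the origin here is a placeholder):

```python
import json

EXPECTED_ORIGIN = "https://login.example.com"  # placeholder relying-party origin

def check_client_data(client_data_json: bytes, expected_challenge: str) -> bool:
    """Verify the type, origin, and challenge fields of a WebAuthn clientDataJSON blob.

    The browser, not the page, fills in `origin`, so a phishing site at
    badsite.com cannot produce a response that passes this check for
    login.example.com -- forwarding it does the attacker no good.
    """
    data = json.loads(client_data_json)
    return (
        data.get("type") == "webauthn.get"
        and data.get("origin") == EXPECTED_ORIGIN
        and data.get("challenge") == expected_challenge
    )
```

Compare with TOTP, where the 6-digit code carries no information about which site asked for it and so relays cleanly through a proxy.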


No, MITM does not circumvent that, unless you can MITM the TLS connection and convince the browser (not the user) that you're actually connecting to the proper domain, e.g. hacked private keys or malicious CA issuing fake certs, which is quite rare.

For U2F, there is no possibility for a user mistakenly approving one site's challenge on another site, if the challenge request is coming from (and the response would be sent to) https://badsite.com, then any challenge that's not for https://badsite.com would be automatically rejected by the browser even before asking the user anything. (This is the type that is usually implemented through a USB key.)


Of course Twilio requires MFA for internal access, but TOTP codes can be phished just as easily as passwords.


Depending on the type of MFA (sms, push, TOTP), phishing can get around it.


Let me guess. Phone numbers also compromised? I won't be surprised about that if it was the case.

Either way, not only is there poor security at Twilio, but I would expect them to be fined multiple millions of dollars for this breach.


> I would expect that they will be fined in the multi-millions of dollars for this breach.

By whom? Most breaches do not involve fines of any sort.



