What kind of performance penalty do all of these patches combined have on Intel chips? I'm curious how much slower an Intel chip is now versus before Meltdown.
It looks like there is generally an 18%-40% performance hit from the Meltdown mitigations[1]:
> On Linux distributions like Ubuntu 18.10 and Clear Linux the mitigation costs were about ~18% while both RHEL 8 Beta and openSUSE 15.0 had a nearly 40% hit.
If we look at the MDS mitigation for older Macs[2], it could be 40%:
> Intel MDS Vulnerabilities Affecting 7th Gen And Below May Slow Macs By Up To 40%, Apple Warns
If we look at MDS mitigation generally (if Intel is to be believed), we are looking at ~10% for most use cases[3]:
> Intel's benchmarks show a 6-14 percent drop in storage performance on a couple of Xeon processors, both with Hyper Threading enabled. Assuming that Intel is not showing a worst case scenario in any of these benchmarks, the hit to storage could be even bigger.
> It's in workstations and data centers that mitigations are likely to have the biggest performance impact, depending on the workload. In a separate graph, for example, Intel shows a 19 percent drop in "server side Java" performance after disabling Hyper Threading on a Xeon Platinum 8180 processor (compared to having it turned on).
In other words... in total, I have no idea what we'll be seeing. However, if we look beyond just "raw performance" in benchmarks when buying CPUs, AMD is likely the better purchase for most use cases at this point.
That article reports:
> If looking at the geometric mean for the tests run today, the Intel systems all saw about 16% lower performance out-of-the-box now with these default mitigations and obviously even lower if disabling Hyper Threading for maximum security. The two AMD systems tested saw a 3% performance hit with the default mitigations. While there are minor differences between the systems to consider, the mitigation impact is enough to draw the Core i7 8700K much closer to the Ryzen 7 2700X and the Core i9 7980XE to the Threadripper 2990WX.
Rough prices I just googled: the Core i7 8700K is $390, the Ryzen 7 2700X is $290, the Core i9 7980XE is $1800, and the Threadripper 2990WX is $1600.
Not looking good for Intel at the moment. It should also be noted that there are believable reports that the mitigations are not fully effective unless hyperthreading is disabled.
I don't think it's just rumours. Microsoft, Apple and Red Hat have all said that full mitigation requires disabling hyperthreading. While Intel doesn't recommend doing so, its justification is that most people don't actually need that level of inter-process protection anyway. Intel also notes that "it's important to understand that doing so does not alone provide protection against MDS", which is true: you need to disable hyperthreading and apply additional performance-sapping mitigations, which require both microcode updates and operating system support. The hyperthreading-based variants do seem to be a lot more powerful than the ones fixed by the other mitigations, though.
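For reference, on reasonably recent Linux kernels (4.19+, plus distro backports) there is a runtime switch for exactly this; a minimal sketch, assuming the sysfs SMT control interface is present on your kernel:

```python
# Sketch: turn SMT off at runtime on Linux (needs root).
# Writing "off" offlines the sibling hyperthreads; "forceoff" also
# prevents them from being re-enabled until the next reboot.
with open("/sys/devices/system/cpu/smt/control", "w") as f:
    f.write("off")
```

The boot-time equivalent on kernels that support it is the `mitigations=auto,nosmt` parameter, which keeps the software mitigations on and disables SMT in one go.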
Which makes you wonder why no major tech enthusiast website has redone its benchmarks with both the security patches applied and HT disabled, against AMD. I also don't think disabling HT is an overreaction; as far as I can see, Intel's HT implementation is fundamentally flawed.
Right now there isn't any site putting those numbers next to AMD's numbers.
Maybe it's too much work and they'll wait until the new Ryzen arrives.
There's no single number since it depends on your workload. Something that is syscall- and interrupt-heavy or does lots of task-switching will see far more overhead than pure number-crunching with properly allocated threads.
Synthetic benchmarks can show a penalty anywhere from 1.001x to beyond 2x.
Your workload also influences which mitigations you can safely turn off. If you're running a single-tenant system and don't need isolated security domains then you can turn them off.
Future software improvements (io_uring!) may also recover some of those losses.
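Before deciding what to turn off, it's worth checking what your kernel has actually applied; recent kernels (roughly 4.15+, plus backports) report per-vulnerability status under sysfs. A minimal sketch:

```python
# Sketch: print the kernel's own per-vulnerability mitigation report.
# Each file contains a line like "Mitigation: PTI" or "Vulnerable".
import os

vuln_dir = "/sys/devices/system/cpu/vulnerabilities"
for name in sorted(os.listdir(vuln_dir)):
    with open(os.path.join(vuln_dir, name)) as f:
        print(f"{name}: {f.read().strip()}")
```

Running it before and after a microcode/kernel update is a quick way to see which mitigations your benchmarks are actually paying for.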
> If you're running a single-tenant system and don't need isolated security domains then you can turn them off.
This is the dirty secret that no one will let out because its ramifications on "the cloud" are very serious, and Microsoft, Amazon, et al have staked their futures on rent-seeking via cloud.
These hardware bugs make the risks of multi-tenancy incontrovertible, and I would expect enterprise security departments to be clamoring for the return of hardware control.
My popcorn is out to watch as the frantic inquisition against cloud heretics kicks into high[er] gear.
What? Single-tenancy and the cloud are pretty orthogonal. I can request single-tenant machines on EC2; they just cost more. But I still have the benefit of being able to create and delete an instance in just minutes.
Multi-tenant is the cloud's linchpin and it's disingenuous to pretend otherwise. The offerings of and outlook for cloud providers would be very different without multi-tenancy.
While single-tenant options may be available for some configurations, you'll pay through the nose, and there are many limitations (e.g., EBS-backed volumes attached to your single-tenant instances still run on multi-tenant hosts). Moreover, few if any of the managed cloud services, which are what really drive cloud adoption, have any concept of single tenancy.
Even after all this, the risk is only partially mitigated, because you're trusting Amazon's management toolkit and staff to respect these boundaries and to not have any bugs that may inadvertently expose access or data to third parties. Considering single-tenancy is such a small segment of their business, I doubt this is a major consideration, and even if it is, it's a lot of eggs to put into a basket that's completely out of your control.
There just isn't much of an argument for a group that's serious about security to commit hardcore to "the cloud", yet virtually every company I've encountered in the last few years is pushing this hard, and actively ostracizing anyone who tries to inject some moderation or sanity into it.
I appreciate the convenience of cloud offerings as much as the next guy, but it's out of control, and the complete disregard for the implications of hardware bugs that fundamentally undermine the supposed security model is a great representation of that.
I don't know that there are many computers out there which don't need isolated security domains. Users run javascript all the time, cloud servers obviously need security domains, regular bare metal servers generally run internet-facing applications such as web servers as unprivileged users in case they contain RCE bugs.
High performance computing, machine learning training servers, CI/CD build servers, most database servers, home media servers, computers for industrial control, etc.
I would be very interested to see those numbers aggregated; I have no idea whether these numbers are percentages of the original 100% CPU performance or percentages of the performance remaining after the last bug's mitigation. Either way, that's some serious performance loss. Even your normal lightweight users are going to start noticing the slowdown.
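For what it's worth, the two readings give different totals because successive hits compound multiplicatively, not additively. A toy calculation with made-up illustrative numbers (an 18% Meltdown-era hit followed by a 10% MDS hit):

```python
# Toy example: successive mitigation hits multiply rather than add.
# The percentages here are illustrative, not measurements.
hits = [0.18, 0.10]            # hypothetical per-mitigation slowdowns
remaining = 1.0
for h in hits:
    remaining *= (1.0 - h)     # each hit applies to what's left
print(f"total slowdown: {1.0 - remaining:.1%}")   # 26.2%, not 28%
```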
It appears as if nobody at Intel is doing anything more than cursory security analysis of processors. So far the cost has been performance, but I imagine there may be some bugs with no microcode mitigation at all. Not only that, but it appears to be only a matter of time until Intel ME is broken too.
My recommendation here is to wait a couple of months until third-gen Ryzen comes out in laptops. While an IPC boost is nice (rumors say between 10% and 20%), the real reason I say this is that the new 7nm process should net a significant reduction in energy consumption. AMD's demo of Zen 2 in January showed their 8c/16t CPU performing almost identically to Intel's 9900K at roughly one-third less power consumption.
The Ryzen Mobile 3000-Series was already launched at CES and is available in certain models. It's also worth mentioning that these are 12nm chips based on Zen+, not 7nm Zen 2 (the new architecture launching in a couple of weeks).
Yep. The 7nm parts are coming in Q3/Q4 as the Ryzen 3000 desktop line; the 7nm mobile parts aren't even on the roadmap yet, and if they come, they would be the mobile Ryzen 4000 series.
It's not anything super resource-intensive beyond the inherent needs of the VM + host. I just got sick of dealing with some of macOS's idiosyncrasies in its Very Special Episode of BSD that is their CLI, so I spin up a Xubuntu or Debian image when I have a project. (That + snapshots makes it a lot less painful to try interesting things.)
My 2 cents as someone looking to get a new (mid-range gaming) laptop this year: look at the new ones coming out now and over the next few months. You'll find the number of AMD Ryzen chips in mobile has skyrocketed over the last two years.
We now have AMD Ryzen CPUs combined with nVidia dGPU solutions, unthinkable just a year ago. I'm still waiting for the Ryzen 3750H to hit the (German) market, but if I had to buy tomorrow, I'd presently opt for the Asus FX505DT-EB73. That one has an NV dGPU; there's also a variant of that laptop with an arguably more Linux-friendly AMD RX 560 available, too.
I'd personally prefer an AMD dGPU solution too, but the RX 560/580 is already old and I don't think AMD has a successor coming anytime soon either, unfortunately...
Yeah, if you even need a dedicated GPU at all; it's usually gamers who are looking for those, and they're usually on Windows anyway (really looking forward to installing Windows 7 on the laptop I mentioned if I do get it this year; I mean that both ironically and not...).
The built-in Radeon Vega 8 or 10 solutions on all mobile Ryzens are usually better and more efficient than their current Intel counterparts, so if you don't need a dGPU, that's one more reason to root for Team Red.
EDIT: In case anyone reading isn't aware, nVidia is notoriously painful when it comes to drivers for Linux. Anyone looking for powerful GPUs in Linux machines, whether desktop or laptop, has an easier life with AMD solutions. That's the reason for the infamous clip of Linus giving nVidia the finger: https://www.youtube.com/watch?v=IVpOyKCNZYw
Is there a decent thin-and-light AMD laptop? I'm looking to buy the successor to my Dell XPS 13 and would strongly prefer AMD if I can find something ultrabook-like with a QHD-or-above screen.
Yeah, but you're referring to "super-computing" situations, where the OS of choice is Linux and nVidia would be shooting themselves in the foot if they only developed drivers for Windows.
When it comes to normal consumer hardware, it seems that nVidia drivers for Linux in 2019 are still hit-and-miss.
AMD "just works" even without installing the binary blob from the vendor.
As a matter of fact, there is no need to install said binary, as it's most likely already upstreamed in your distribution of choice.
But nVidia still reigns supreme if we're talking actual performance... at least after you've installed said binary blob ;)
And I can say from personal experience that the nVidia 10x0 drivers were terrible in the first ~6 months after their release; the fan kept jumping between 10% and 80%, for example. I haven't had any issues in at least a year, though, but I'd expect the same kind of issues on any new chipset by nVidia, as they haven't open-sourced their drivers.
> looking to get a new (mid-range gaming) laptop this year
If you can, build a gaming PC instead; you'll have a more powerful and more upgradeable machine for a fraction of the money (as low as $600[0]). And it'll likely work. In my experience, gaming laptops don't work. Laptops in general don't work, but especially gaming ones and especially for gaming. Ironically, this gets truer the more expensive a laptop is, whereas with PCs it's the opposite.
I already have a gaming PC, and her R9 280 is being replaced by an RX580 this week... :)
And yes, it really needs to be a gaming laptop, I travel for work, and I have had good experiences gaming on the go. It's still a relief to get back home to the desktop, no doubt, but I'm often gone for two days to two weeks, and I'd still like to be able to pull a few frags in that time... :D
I'm currently using an Intel 4702MQ with a GTX 760M in an Acer package; she used to be fine for CS:GO, but that's no longer the case either...
I can only find Intel laptops on their site. AMD only seems to be available in their desktops. The laptop, mini, and server lineups only have Intel CPUs.
I don't understand why hyperthreading needs to be turned off for maximum security. Wouldn't restricting it to only allow simultaneous threads that are part of the same process be enough?
While that definitely restricts the attack surface, there are still situations where you are running untrusted code within a single process (e.g., JITed code, whether JavaScript or something else like eBPF). So it would require not only the kernel scheduler to be careful about scheduling threads to cores, but these applications as well (and that's not something most people have ever bothered with: setting thread affinity for performance, yes; for security, not really).
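To illustrate how fiddly even the process-level version gets, here's a rough sketch of pinning the current process to one hardware thread per physical core on Linux, assuming a single socket (so core_id alone identifies a core) and the usual sysfs topology files. This only approximates "no HT sharing" for one process; it is not a real security boundary:

```python
# Sketch: restrict the calling process to one hyperthread per physical
# core. Single-socket assumption: core_id alone distinguishes cores.
import os

chosen, seen_cores = set(), set()
for cpu in sorted(os.sched_getaffinity(0)):   # CPUs we may run on now
    with open(f"/sys/devices/system/cpu/cpu{cpu}/topology/core_id") as f:
        core = int(f.read())
    if core not in seen_cores:                # keep one sibling per core
        seen_cores.add(core)
        chosen.add(cpu)

os.sched_setaffinity(0, chosen)               # 0 = the calling process
```

Getting every JIT-hosting application to do something like this correctly, and keeping it correct as the scheduler moves things around, is exactly why the vendors just say "disable HT".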
Since we're also talking about CPUs in general here...
I'm building a PC and initially wanted to use Intel because of hackintosh. But I don't think it's justified to buy an already expensive CPU (Intel is more expensive than AMD) that then gets throttled down further by patches.
So now I'm considering AMD. What would you recommend for a developer? I will use it for Android development, Docker, and occasional gaming.
(I'm aware that I most likely won't be able to hackintosh due to AMD.)
You can find a brand new Ryzen 7 2700X (8 cores) for a very reasonable price after the latest price cuts (we're a few months away from the 7nm third-gen Ryzen).
If that's still too much, a Ryzen 5 2600(X) (6 cores) is still a very good processor for almost any use case.
About hackintosh and AMD: that's not only possible, but also not too complicated to achieve on Mojave (see https://vanilla.amd-osx.com/ for more details). Older versions required using custom AMD kernels, but someone figured out how to patch the vanilla kernel for Ryzen and Bulldozer straight from Clover; it's not as simple as on Intel, but it's much easier than it was a few years back.
I'm also building a computer and just purchased an i7 9700K for $400. I also wonder what the equivalent-performance AMD processor would be given the performance loss of these patches. Unfortunately it's a bit late to back out without returning the motherboard as well, though.
Is there an option to turn off all the mitigations? A public host is different from my local build server, where security is not an issue and only performance counts.
Part of the mitigations is a microcode update, which may be embedded in your computer's firmware (BIOS/UEFI). You could of course _not_ update your server firmware, but that could leave you vulnerable to all kinds of other security issues.
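That said, the OS-level parts can be switched off even with new microcode loaded; on Linux this is done via kernel boot parameters. Roughly (exact flags vary by kernel version, so treat this as a sketch and check your kernel's documentation):

```
# Kernels with the aggregate switch (5.2+, widely backported):
mitigations=off

# Older kernels, per-vulnerability flags (approximate equivalent):
nopti nospectre_v2 spec_store_bypass_disable=off mds=off l1tf=off
```

The microcode update itself is mostly a prerequisite that exposes new controls to the OS; leaving it installed and disabling the software mitigations gets you most of the performance back without forgoing unrelated firmware fixes.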
AMD has fared better in the age of speculative execution attacks because they didn't allow their speculation to "cheat" and break hardware protection barriers, even though several of their major competitors did.
While it's by no means a guarantee, it does say something about the engineering culture that they understood the theoretical risks of violating a protection barrier and chose to respect those boundaries, instead of dismissing the risk and going for the bigger stats. That deserves notice and approbation.
Yeah, it really does seem like a fundamental difference in how AMD and Intel designed their processors. Researchers have found Intel processors allowing speculation to bypass hardware protection all over the place, and they just haven't found an AMD equivalent. Not only that, AMD have looked into this themselves and released an entire whitepaper explaining why this isn't possible: https://www.amd.com/system/files/documents/security-whitepap... (With, interestingly, one exception. Apparently AMD CPUs allow speculative execution to ignore x86 segmentation. It's just that no-one cares because that's not used anymore. I'm not sure there have even been any mainstream OSes that used that for process isolation.)
This is particularly bad for Intel because most of the SMT-based exploits can pretty much only be mitigated by disabling SMT entirely and taking such a substantial performance hit that they've been trying desperately to convince people not to.
While it is no doubt true that there are likely some vulnerabilities in AMD's chips that don't exist in Intel's, it is also true that because Intel has a larger market share, that's where most of the work is going to be done to find vulnerabilities, by the bad guys as well as security researchers. Much like how Microsoft Windows used to have a lot more security problems than any other OS, because they had >90% market share, so no one bothered to develop malware for anything else.
In this particular area, though, it works both ways. Intel has so many researchers and resources that they are able to run tons of speculative-execution experiments (getting the CPU to do things you wouldn't imagine it normally does), and many of those tricks find their way into their CPUs. A company with fewer resources might have fewer vulnerabilities of this nature simply because it doesn't have the resources to chase that stuff in the first place. Just a theory.
I'm not exactly sure the monoculture argument is valid for Windows vs. Linux, though; Linux dominated server usage while Windows was attacked more often, despite the fact that you could potentially reap a much greater reward by infecting a server. Unix-based OSes were just inherently more secure for the most part.
While it's certainly true that Linux does much better on servers than on laptops/desktops, I don't think it ever reached the >90% market share that Windows had on laptops/desktops (although admittedly that depends on whose estimates you look at). But servers certainly do differ from laptops/desktops in important ways where monoculture is concerned.
It's also possibly significant that Linux on servers is split between Red Hat, Ubuntu and others, which have some non-trivial differences when it comes to security updates, etc.
None of which means that Unix-based OSes don't have inherent security advantages; it's just that not having a monoculture target on their forehead also helps.
We have known for 15 years that speculative execution is a minefield. Intel seems to have placed a bet that this would never lead to any exploitable vulnerabilities. For all we know, AMD might simply have been more conservative.
If anything, the bet was crazy, regardless of who turned out to be affected.
Or the bet was more subtle: that buyers would be willing to eat the risk in exchange for higher performance, letting Intel hold on to the title of highest-performing chip and market that.