Originally Posted by Majin SSJ Eric
Seems to me like all this stuff is still pretty "Much Ado About Nothing", at least for the average individual PC owner. Obviously data centers, businesses, etc. will have a LOT to worry about when it comes to any sort of security vulnerability, but the reality is that the average user here on OCN is just not likely to be targeted by hackers in the first place, simply because we are not that important to anyone. While it's obvious that any security vulnerability like this is not good news, you have to remember that there are hundreds of millions of people in the world using Intel machines, so the odds of anyone individually targeting YOU remain statistically insignificant, from a big-picture point of view.
Yeah, that's kind of my thinking as well about the danger at home; I'd actually say there is basically no new risk from these exploits on a home PC that you only use privately. It's a very different situation for cloud services like Amazon EC2 or Microsoft Azure: since they rent virtual machines to random strangers, they have to worry a lot about one customer attacking other people's VMs running in parallel on the same physical machine.
But there is something big you are overlooking: there is a performance loss on your PC at home, because mitigations for these exploits get built into Windows and other software. I've only seen good benchmarks on this for the Linux kernel, but the same should be happening in the Windows kernel. Here are those Linux benchmarks:
(that article has several pages)
The red bars in the graphs are the default setup of the latest Linux kernel, and the blue bars are the same kernel with the mitigations disabled. As you can see, the performance hit is massive for anything that involves lots of transitions between user space and the kernel (syscalls, driver I/O). On page 2, which I linked to, the disk transfer benchmark looks bad; on page 3, the network benchmark at the end is just ridiculous.
These performance losses shouldn't matter much for normal PC use and gaming; maybe you'd lose a few percent of fps at most. Those network and disk benchmarks are doing tens or hundreds of thousands of kernel/driver accesses per second, while gaming only paints around a hundred frames per second. And maybe for graphics, much of a frame's work can be collected into a few large batched submissions to the driver and hardware instead of many small calls?
In any case, the problem is that your PC at home shares the same underlying kernel code as what Amazon EC2 or Microsoft Azure needs.
In the Linux example, with the stock setup of the kernel, it seems your Intel CPU effectively gets punted back several generations of CPU improvements. That's an annoying thought. I don't need this at home.
The same should be happening on Windows, but how do you disable it there? As far as I know, Microsoft doesn't expose options for this anywhere in the UI, just some registry keys buried in a support article. They are probably thinking that exposing this kind of stuff is a dumb idea, because it would multiply what needs to be tested for quality assurance. On a Linux installation I tried to find every parameter that tweaks this and disabled all of it. I found four different options, and two of them take more than just on/off values, so it would be a lot of combinations if you had to QA them all (Linux has no QA).
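For reference, here's roughly what the Linux side looks like (parameter names are from the mainline kernel documentation; the exact set of vulnerabilities and valid values depends on your kernel version and CPU, so treat this as a sketch, not a recipe):

```shell
# See which mitigations your running kernel currently applies:
grep . /sys/devices/system/cpu/vulnerabilities/*

# Kernel boot parameters (e.g. appended to GRUB_CMDLINE_LINUX in
# /etc/default/grub, then update-grub and reboot).
# On kernels from around 5.2 there is a single master switch:
#   mitigations=off
# On older kernels you disable each mitigation individually, e.g.:
#   nopti                          # Meltdown page-table isolation off
#   spectre_v2=off                 # Spectre v2 mitigation off (also takes
#                                  #   auto / retpoline variants, not just on/off)
#   spec_store_bypass_disable=off  # Spectre v4 / SSB mitigation off
#   l1tf=off                       # L1 Terminal Fault mitigation off
#                                  #   (also has full / flush / ... modes)
```

The multi-valued ones (like spectre_v2 and l1tf) are exactly why the QA combination count explodes if you expose all of this to users.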
So, that's the thing that annoys me whenever there's a new exploit: each one means an additional hit to your CPU's performance, because some sort of workaround gets added to the software.