On Thu, 4 Jan 2018 13:48, Carlos E. R. wrote:
On Wednesday, 2018-01-03 at 23:48 +0100, Yamaban wrote:
On Wed, 3 Jan 2018 19:45, Carlos E. R. wrote:
Is there any method to know whether /my/ processor is affected? It was bought several years ago. A list of exact processor models to compare against /proc/cpuinfo, perhaps.
AFAICT, if it has 'Core' in its name, it's affected, as are all other processor models from at least the last 6 years. Discussion is still going on about the older models; certain models up to 10 years back are hit as well.
Core 2 Duo, so yes, I'm hit.
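FWIW, until an exact model list shows up, something like this little C sketch (my own quick illustration, nothing official) pulls the "model name" lines out of /proc/cpuinfo so they can be compared against whatever lists get published:

  /* Print the CPU "model name" lines from /proc/cpuinfo so the model
   * can be compared against published lists of affected processors.
   * Assumes a Linux system; compile with e.g.  cc -o cpumodel cpumodel.c
   */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      FILE *f = fopen("/proc/cpuinfo", "r");
      char line[512];

      if (!f) {
          perror("/proc/cpuinfo");
          return 1;
      }
      while (fgets(line, sizeof line, f)) {
          if (strncmp(line, "model name", 10) == 0)
              fputs(line, stdout);   /* one line per logical CPU */
      }
      fclose(f);
      return 0;
  }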
For the details, check the better sources, e.g. the more serious computer publications, or internet sources that have proven to be well informed.
Links will pop up in time :-)
The root cause is an 'accelerated' branch pre-execution; in other words, they dropped the security checks to gain speed.
Yes, that part I found out.
Intentionally? They forgot? Ineptitude? Did they really think they would not be found out? Sigh :-(
Well, no. "The Issue" dated back to the development of the "Pentium Pro", the 32bit dual-core Pentium from 1995. Intel found out the hard way that the branch prediction engine they planned to include was a mass grave of transistors, thus immensely expensive and drove the already low yield of the production down to single digits of percents. So, to get even a kind of a grip at the situation, the whole branch prediction engine was re-designed. Much simpler, used less cycles, used not even a tenth of the transistors. Intel hyped that as "Productivity Enhanced" Branch Prediction in the pre-release days. The documentation of the Pentium Pro clearly stated the restrictions the reduced branch prediction unit had, and the compilers at the time handled that with the needed care. Fast forward to around 2005, the race between Intel and AMD got "not nice" and thus Intel "optimised" its own compiler to reduce the prior as "must have" declared security checks to gain speed. In the name of performance over everything else this kind of optimisation found its way into other code. Databases, OS-kernels, compilers. You can easily do the math. Using a non paranoid compiler opens you up the the holes now published (they where found before October 2017). (Much of the harshness of the situation is the dramatically risen level of visualisation compared to ten years ago.) So, much to late for Intel or AMD to include a hardware based defence into the latest architectures (that needs at least 15 month lead time) Will Intel's next Arch (Core i{3,5,7,9}-9yyy) have such a defence? More likely no, same with AMD and ARM. At the base we are left with dropped security checks and left out early randomisation for context changes and other heavy lifting stuff in nearly very bit of code, simply because there are very little people left that can program such check without opening wide the doors for much more ugly attacks. The TODO now is to redefine "best (and/or) acceptable practises in coding for such situations. The fastest way to get the redefined message out would be compilers that will not accept code that violates such "best practises" easily. Will we see a real break-through on that matter in 2018? - My gut tells me: NO real chance, just some small steps (with performance penalties). So, pressing anyone for replacemnt hardware: Prepare to be laughed out of the doors, while fingers are pointed to the compiler guys. - Yamaban. PS: If I've got stuff wrong, please speak out with the truth. I'm no god, nor otherwise omnipotent. -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org