Originally Posted by Silent Scone
Depends on the level of stability also. Cache instability can manifest in a lot of ways depending on the point of failure. Not excluding fatal exceptions
Yep, literally anything - bad data in a CPU is bad. _Usually_ if you are "close" you will see corruption in stressapp without any catastrophic failure, but it's really a roulette wheel of which bits get corrupted when...
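For anyone curious what "seeing corruption in stressapp" actually means mechanically, here's a toy Python sketch of the write-pattern / read-verify loop that memory stress tools like stressapptest are built around. This is not a real hardware test (Python buffers won't exercise your memory controller the way stressapptest does); the manually flipped bit just stands in for a marginal OC corrupting one bit somewhere in the buffer.

```python
# Toy illustration of pattern-based memory verification (NOT a real
# hardware test). Fill a buffer with a known pattern, then re-read it
# and report any byte whose bits no longer match.

def verify(buf: bytearray, pattern: int) -> list[tuple[int, int]]:
    """Return (byte_offset, flipped_bits_mask) for every byte that
    no longer matches the pattern it was written with."""
    return [(i, b ^ pattern) for i, b in enumerate(buf) if b != pattern]

pattern = 0xAA                      # 10101010 - alternating-bit pattern
buf = bytearray([pattern] * 4096)   # "write" the pattern to a buffer

buf[1337] ^= 0x04                   # simulate a single flipped bit

for offset, mask in verify(buf, pattern):
    print(f"corruption at byte {offset}: bits {mask:#04x} flipped")
# -> corruption at byte 1337: bits 0x04 flipped
```

Which offset/bit comes back bad on real hardware is exactly the roulette wheel described above - it depends on which cell, line, or transfer happened to be marginal at that instant.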
My current frustration is that some instability I thought might be my OC on the 6950x turns out to be a problem in the ext4 file system and/or the kernel. I moved my code over to my dual-Xeon system and I'm seeing the same symptom with very large data movements: move enough data and the ext4 journal deadlocks and dumps a stack trace on the timeout. That explains why I could find no other symptom of OC instability and why the failure came at relatively light load (multi-TB copy/rsync)... it wasn't the OC... bum, bum, bah!
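For anyone trying to tell this failure mode apart from OC instability: the deadlock shows up in the kernel log as the hung-task detector's "blocked for more than N seconds" stack dump against the jbd2 journal thread. A minimal sketch of pulling those reports out of dmesg output (the sample line below mimics the kernel's hung-task message; the device name and PID are made up):

```python
# Sketch: extract hung-task reports (the symptom of the ext4/jbd2
# deadlock described above) from kernel log text. Sample line mimics
# the kernel's "blocked for more than N seconds" message; the device
# name and PID are invented for illustration.
import re

HUNG = re.compile(r"task (\S+):(\d+) blocked for more than (\d+) seconds")

def hung_tasks(log_text: str):
    """Yield (task_name, pid, seconds) for each hung-task report."""
    for m in HUNG.finditer(log_text):
        yield m.group(1), int(m.group(2)), int(m.group(3))

sample = "INFO: task jbd2/sda1-8:321 blocked for more than 120 seconds."
print(list(hung_tasks(sample)))
# -> [('jbd2/sda1-8', 321, 120)]
```

In practice you'd feed this the output of `dmesg` (or just grep the log directly); a jbd2 thread in the dump points at the journal, not your OC.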
EDIT: ayup - and it seems no one has any intention of fixing it:
(disclaimer: not NEAT!)
Sorry, I don't have specific numbers at this point, but there have been some surprising results comparing a dual-2690v4 BW to the 6950x on heavy, multi-threaded computes. Cinebench and other synthetics put a single 2690v4 roughly on par with an OC'd 6950x in many regards, and I'm seeing that even when limiting tasks to 8-10 cores there are some curious similarities between the much slower 2690v4 (3.2GHz all-core turbo) and the 6950x (4.4GHz).
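If anyone wants to reproduce the "limit tasks to 8-10 cores" comparison on Linux, one way is to pin the process to a fixed set of cores so the scheduler doesn't wander across the die (or across sockets on the dual-Xeon box) mid-run. A minimal sketch using Python's `os.sched_setaffinity`; the core numbers and the benchmark call are illustrative, so adjust to your own topology (`lscpu` will show it):

```python
# Sketch: restrict the current process (and any children it spawns) to a
# fixed set of CPU cores for an apples-to-apples core-count comparison.
# Linux-only; core IDs below are illustrative.
import os

def pin_to_cores(cores: set[int]) -> set[int]:
    """Restrict this process to `cores`; returns the previous affinity
    so it can be restored afterwards."""
    previous = os.sched_getaffinity(0)   # pid 0 == the calling process
    os.sched_setaffinity(0, cores)
    return previous

# e.g. pin to 8 cores, run the workload, then restore:
# prev = pin_to_cores(set(range(8)))
# run_benchmark()                  # hypothetical workload
# os.sched_setaffinity(0, prev)
```

`taskset -c 0-7 <cmd>` from the shell does the same thing without code, but doing it in-process lets a longer test harness change the core set between runs.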
Given the slower memory (DDR4-2400 CAS 17 ECC, registered, plus the hilarity that QPI/dual-socket introduces there) and the slower clock, I really expected the 2690v4 to struggle to impress vs the 6950x on "low" thread-count tasks (< 10). Even more so given that the uncore on the 26xx is VERY slow (< 2GHz); though ASUS allows overriding this, it comes at the expense of the TDP envelope, so something else pays the price (gets TDP throttled).
Edited by cekim - 8/8/16 at 10:28am