I've been a professional geek for almost 20 years, and I've never OC'ed a thing. I recently retired an old dual-Opteron mobo (ca. 2006, Opteron 140 or 160) and started testing some code on it. It was painfully slow, and since it was just a dev machine, I switched that CPU/mobo out for consumer parts, something more modern. I ended up with an ASUS M5A99X EVO R2.0 and an FX-6350. While I'm bummed that I didn't get the 8350, I'm quite happy with the rig. My project compile went from overnight-and-then-some to ~2 hours.
These "Black Edition" FX processors looked like they were "easy" to OC, and since I bought a decent heat-sink to go with it (Noctua NH-L12), and because the case has lots of room (Old CM Stacker STC-T01), I decided to try overclocking for the very first time. So, thanks again for the OP and everyone else who chimed in with their experiences.
Long story short, I was able to get 4.4 GHz on air, passing Prime95 Blend (75% memory) for 12 hours and also passing my own torture test.
I was bummed I couldn't get more, because I'm only hitting about 55-deg at load (Prime95, Blend, 75%). But, despite 5 days of fiddling with CPU voltage, VDDA, and any other setting I thought applicable, I just couldn't get 4.5 stable for more than 3 hours of P95 or my personal test suite. I have a notion that the rig could handle it, but it needs more finesse than I have time to develop at the moment.
My personal test suite consists of building an entire GNU/Linux "distribution" from scratch. It's based on LFS (the Linux From Scratch project), with my own bits for cloud deployment (i.e., using Xen as a VM hypervisor). My script (again, heavily leveraging the work the wonderful folks at LFS provided) builds an entire GNU/Linux system--bootstrapping binutils, then the compiler, then the rest of the system, and finally the kernel. I run all the available regression-test suites (which, on the old dual-Opteron system, would take more than 15 hours). That particular workload is mostly integer math (compiling doesn't lean much on the FPU), and I use '-j 6', which allows 'make' to push independent compiles to all 6 "cores". I'm not sure that this is optimal for the BD/PD architecture, since the "cores" share so much within each module, but it's probably fairly stressful. Building the compiler, glibc, and the kernel is a pretty arduous workload. In fact, at 4.5, even after passing 3+ hours of P95, the compile suite failed after about 90 minutes.
So, I wanted a 12-hour stable P95 build, and my own test suite to build (with all the regression tests passing).
This is Day-6, and I've finally achieved that, thanks to this board and the various posters.
Here's where I ended up:
22.0 / 200 / 100 (multi, bus, PCIe)
2200 MHz - CPU/NB Freq
2600 MHz - HT Link Speed
1.38125 - CPU
1.25 - CPU/NB
2.55625 - VDDA
1.4 - DRAM
1.1 - NB
1.2 - NB HT
1.8 - NB 1.8V
1.1 - SB
DIGI+ Power Control
High - CPU LLC
High - CPU/NB LLC
130% - CPU Current
130% - CPU/NB Current
Optimized - CPU Power Phase Control
Auto - CPU Voltage Freq
Enabled - VRM Spread Spectrum
T.Probe - CPU Power Duty Control
Auto - CPU Power Response Control
Auto - CPU/NB Power Response Control
130 - CPU Power Thermal Control
130% - DRAM Current
300 - DRAM Voltage Freq
Optimized - DRAM Power Phase Control
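For anyone sanity-checking these numbers: the headline clocks all derive from the 200 MHz bus times a multiplier. Here's a quick sketch of that arithmetic (the x11 and x13 multipliers are just implied by my 2200/2600 MHz settings; they aren't BIOS fields I set directly):

```python
# Effective clocks on AM3+ are the reference bus times a multiplier.
BUS_MHZ = 200.0

def clock(bus_mhz: float, multiplier: float) -> float:
    """Effective frequency in MHz."""
    return bus_mhz * multiplier

cpu_mhz = clock(BUS_MHZ, 22.0)    # 4400.0 -> the 4.4 GHz core clock
cpunb_mhz = clock(BUS_MHZ, 11.0)  # 2200.0 -> CPU/NB frequency
ht_mhz = clock(BUS_MHZ, 13.0)     # 2600.0 -> HT link speed

print(cpu_mhz, cpunb_mhz, ht_mhz)  # 4400.0 2200.0 2600.0
```

This is also why bumping the bus instead of the multi moves everything at once (CPU, CPU/NB, HT, and DRAM all scale off that reference clock), which is part of what makes bus overclocking fiddlier.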
The settings in BOLD differ from the OP's "Recommended Settings". Here's my story (I'm sure the veterans know all this, but in case some nubsicle like myself is trying this, I'll offer my rookie insights.):
Initially, I stuck with the Recommended Settings. And, even at 4.6 GHz (multi-only, at 23.0), I was able to get, at stock voltages, a 10-min pass of Prime95 Small FFT. But, once I started the Blend tests, the system failed quickly. Almost always (and in the past 6 days I've done the SmFFT test many dozens of times), it was a single core failing, and always in the 3rd module. I assumed it was the 3rd because I used AMD OverDrive to monitor the system (the UI made it immediately obvious when P95 stopped working) and it was always showing either Core 5 or Core 6 failing.
I started playing with voltages, and here's where things started going sideways.
First, I noticed (I became more aware over time of things to look for) that my DRAM was not showing the right settings. I first saw this in Memtest+ (which I *always* run before building a machine). It completed two full passes, and I figured that my version (4.10) was old and just wasn't reading the right SPD settings. I learned, after much surfing, that SPD is a small table of settings written to the module itself, and that the default profile may not reflect the stick's full rated capabilities.
I looked to the BIOS. The settings were off there, too. It showed 1333 MHz, and my sticks had 1600 MHz printed on the box, and these were "name-brand" sticks. Not the revered G.Skills, but Corsair (though the lower-quality XMS3 sticks). Since I had 32 GB of this stuff (4x8), I figured that running it at speed would be nice. In case anyone else has this memory, it's: Corsair XMS3 CMX16GX3M2A1600C11.
So, in the Ai Tweaker, I upped the Auto 1333 to Manual 1600. Boy was that a...mistake.
After that "fix", I wasn't able to get P95 Blend working for more than 2 minutes at a time. Seemed pretty obvious that Blend used a ton of memory, and I either had bad sticks (which seemed unlikely, given the multiple Memtest+ passes) or bad settings (duh). So, I took more steps, but sadly for me, that was BEFORE I read about SPD. I set the DRAM Timings to 9-9-9-24-T1, manually. I thought to myself: "How clever, man, you can totally rock this."
Oh was I wrong. Windows sometimes didn't boot, and sometimes BSODed even before I got to start P95.
Turns out, I should learn to read memory model numbers. The "...C11" should have been a clue. After some more research on the Interwebs, I realized that the timings which were actually verified for that memory at 1600 MHz were 11-11-11-30. So, I plugged that in. I thought: "Good job. You've got it now." Wrong again. More research. Thanks to other blog posts (heck, it may have been another OCN thread) I realized the correct DRAM timing was actually 11-11-11-30-T2.
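As a sanity check on those timings, here's the arithmetic (my own back-of-envelope, not anything from the OP): CAS latency is counted in memory-clock cycles, so a higher CL at a higher clock can cost roughly the same absolute time.

```python
# Absolute CAS latency in nanoseconds: CL cycles at the memory clock.
# For DDR, the memory clock is half the transfer rate (two transfers/clock).
def cas_ns(transfer_rate_mts: float, cl: int) -> float:
    clock_mhz = transfer_rate_mts / 2   # e.g., DDR3-1600 -> 800 MHz clock
    period_ns = 1000.0 / clock_mhz      # one clock period in ns
    return cl * period_ns

print(round(cas_ns(1600, 11), 2))  # 13.75 ns at DDR3-1600 CL11
print(round(cas_ns(1333, 9), 2))   # ~13.5 ns at DDR3-1333 CL9
```

So running these sticks at 1600 CL11 instead of the 1333 default isn't "looser" in absolute terms--it's about the same first-word latency, with a lot more bandwidth.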
That worked. Now, Blend was running, but never more than about 1 hour at a time before failure.
So, I was back to voltage-grinding (think MMORPG levels here...).
I kept going up until I hit the thermal ceiling. (It also took me a while to realize that the PACKAGE temp was actually the core temp--should have read the OP's post more carefully--and that it would be lower than the SOCKET temp, labeled "CPU".) Anyway, I wasn't having much luck with the multi-only method. So, I tried turning down the multiplier and bumping the FSB. Still no joy.
Then, going back and reading the original post more carefully (that post is dense, I tells ya), I started over. In doing so, I realized that I had never bumped the CPU/NB Manual Voltage setting. Realizing that it was related to the IMC--and that my Small FFTs were perfectly solid and that my Blends were failing hard--I then bumped the CPU/NB voltage to 1.25. This seemed to help, but I still wasn't able to get across the 3-hour mark in Blend.
Same with VDDA. I hadn't adjusted that up from what Auto was (don't have clear notes about that setting). Went back to read the original post. Same story; I hadn't been careful enough...
Also, at some point around Day-3, I installed OCCT. ZOMGwonderful. That told a lot of the story right away. I set it to watch Vcore. Now, after achieving a stable setup, I have no idea if what I did was "right", or even "on track". I can only observe the result, and I only have a vague intuition about correlation. Before correctly setting VDDA, which by Day-4 I had not touched (again, not careful enough reading the original post), I saw two different things with Vcore:
1) On Prime95 Small FFT, I saw a straight line. Virtually no deviation once load was applied, at least as far as the sampling aliased the measurements.
2) On Prime95 Blend, I saw a...saw. The voltage was super-erratic.
On Blend, I would notice an immediate spike, followed by an erratic shark's mouth of voltage "teeth". Now, anyone who is even casually versed in signals knows that it's pretty hard to tell a spike from a droop. A short spike might actually be 2 long droops, etc. So, what I'm describing might not actually be the case, but I'm going on my gut--since I have no theory, only observations. Point is, I was seeing lots of spikes, a lot of the time. For example, if I had CPU voltage at 1.38125 (@ 4.5 GHz), idle Vcore was 1.38. When Blend started, there were spikes up to 1.392. Then, maybe there were stretches at 1.392, and other stretches at 1.38, and in between those plateaus there were just these awful-looking spikes and droops, occurring at ~1 sec intervals. At that point, it's hard to tell one from the other.
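To make that sampling point concrete, here's a toy simulation (entirely made-up numbers, just to illustrate why a ~1 Hz polling interval can hide or mangle a brief transient):

```python
# Illustrative only: a "true" Vcore trace with one brief 50 ms spike,
# sampled the way a monitoring tool might poll it.
def true_vcore(t: float) -> float:
    base = 1.38
    if 2.40 <= t < 2.45:        # a 50 ms transient spike
        return base + 0.012     # up to ~1.392
    return base

# 1 Hz sampling (t = 0, 1, 2, ... seconds) misses the spike entirely.
samples_1hz = [true_vcore(t) for t in range(6)]
# 1 kHz sampling catches it.
samples_1khz = [true_vcore(t / 1000.0) for t in range(6000)]

print(max(samples_1hz))                 # 1.38  -> spike invisible
print(round(max(samples_1khz), 3))      # 1.392 -> spike visible
```

And depending on where the polling instants land relative to the transient, the same event can render as a spike, a droop, or nothing at all--which is exactly the ambiguity I was staring at in OCCT.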
The key word now, of course, is "droop". And so started an investigation into Vdroop (which I had thought was hacker slang). I ended up in a mess of technical documents, far out of my depth. So, here's where I had to turn knobs "in the dark" and make some "educated" guesses. Vdroop was a real thing. I developed a vague notion of what LLC was doing, and I thought that might be the cause of the spikes. I got worried that the spikes were causing the instability (pushing the CPU too hot with LLC set to "Ultra High"). And, since VDDA helps with stability, I did two things (not at the same time, though I realize I've been going on for quite a while now, given how small my scroll thumb has gotten): I upped VDDA to 2.5. That helped; I was getting past 3 hours. I also changed the Load Line Calibration settings, going from CPU LLC of "Ultra High" to just "High". I eventually landed on VDDA at 2.55625, because that seemed to offer more stability than just upping the CPU Manual Voltage setting and didn't seem to push the temps as hard.
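For anyone else groping at Vdroop/LLC the way I was, here's the toy mental model I eventually settled on. The load-line resistance and the compensation fraction below are pure illustration (not ASUS or AMD specs):

```python
# Toy load-line model: under load, Vcore sags by I * R_loadline.
# LLC compensates by effectively lowering that resistance, so higher
# LLC levels mean less droop (and more overshoot risk on transients).
def vcore_under_load(v_set: float, i_load_a: float,
                     r_loadline_mohm: float, llc_comp: float) -> float:
    """llc_comp: 0.0 = no compensation, 1.0 = fully flat load line."""
    effective_r = r_loadline_mohm * (1.0 - llc_comp)
    return v_set - i_load_a * (effective_r / 1000.0)  # mOhm -> Ohm

v_set, i_load = 1.38125, 100.0  # hypothetical 100 A load current
print(round(vcore_under_load(v_set, i_load, 1.0, 0.0), 3))  # 1.281, no LLC
print(round(vcore_under_load(v_set, i_load, 1.0, 0.8), 3))  # 1.361, "High"-ish
```

Under this model, Blend's wildly varying load current (memory-heavy phases vs compute-heavy phases) would translate directly into a bouncing Vcore, which matches what I was seeing far better than the flat line Small FFT produced.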
In the end, I still see frequent voltage spikes, in between stretches of voltage quiescence, even in my final stable settings. Vcore swung from 1.356 to 1.368. But, they look like droops now (which I expected, from a lower LLC setting). And, that sorta makes sense to me....
Even still, the Vcore swings look wrong. I feel like, even at load, Vcore should be stable. And, that's not to say it isn't ever stable. I might get 20 minutes of a flat line, but that's followed by 5 minutes of swings. But, maybe that's how voltage always looks when running Blend...IDK...maybe memory accesses cause droops/spikes. I'd very much welcome a knowledgeable opinion here on the Vcore swings.
I eventually lowered the DRAM voltage to 1.4, and that didn't have any observable effect on the saw-tooth voltage, though the OP relayed that lower DRAM voltage might help reduce stress on the IMC, which I figured was good given that Blend--and never Small FFT--was the problem.
So, at the end of the last Prime95 Blend (75%) test, I stopped at 12 hours stable (no cores failed), and the max temps reported by OCCT were 55-deg C (package/core) and 59-deg C (socket/cpu). That seemed healthy enough to me. And, I passed my own compilation torture test in the middle of the afternoon (ambient was ~30-deg C).
I have some images, too, of my air-cooled setup (it's pretty...ugly, but, IMO, clever), and of the OCCT voltage spikes, if anyone happens to have made it past this wall-of-text-crittage, and would like to see those.
Sorry about the ramblage...
TL;DR - Thanks for creating this post. 4.4 GHz (a ~13% OC) on air is a fine result. For me, stability is a big issue--I use this rig to test a system I eventually duplicate in production, so stable >>> crazy-fast. Grats, too, to all those who have successfully OC'ed from this post. Perhaps, armed with more time and experience, I'll be able to get better results in the future. And, for other intrepid explorers: READ THE ORIGINAL POST CAREFULLY. Duh. And, may the source be with you.