I must confess something: A part of me is delighted sometimes when I hear of certain overclocking maladies and misadventures.
Not that I'm sadistic (well, not overly nor overtly, anyway), or that I wish harm on anyone. It's just that, in my warped mind, there's a kind of justice in it. Silicon karma, perhaps, showing its bad side when people don't realize the problem isn't with their machines at all. Sometimes bad things happen because of that most common of technical problems:
User error is the one universal "design fault" to which all machines are vulnerable.
There is no machine that is perfectly and completely fool-proof. After all, as long as there are fools, there will be equipment failures to diagnose and, hopefully, rescue from oblivion. Sometimes, though, machines die at the hands of people who just don't understand some fairly basic concepts.
1) Every machine, and every component of any machine, has a finite design limit. Once that point is reached, a machine (or component) simply cannot exceed its own unique performance potential. Patient testing and analysis should probe for where those limits are. Done properly, patiently, and methodically, you never cross the line, and therefore you don't risk breaking something permanently.
2) Your components are unique, like tiger stripes or fingerprints or snowflakes. Therefore, it's a little foolish to expect that your particular CPU, let's say, will perform as well as someone else's. Just because User A got his Brisbane to 3.2GHz on air doesn't mean User B's Brisbane will get there, or even get close. Don't compare your own machine's limits to other people's.
3) Overclocking is all about compromises. It's exactly like setting up a racing car for a particular circuit. Every driver has a unique feel for his/her car, and therefore will set his/her car up uniquely. There are always compromises to acknowledge: Do we want minimum downforce for ultimate straightline speed, or more downforce for more cornering performance? Likewise, an OCer has to decide just how fast is fast enough, and how important is the machine's stability. Personal priorities dictate setup decisions.
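Tenet 1's "probe patiently, don't cross the line" approach is essentially a simple search loop. Here's a minimal, hypothetical Python sketch of the idea; the `is_stable` check stands in for a real stress test (something like hours of Prime95), and the clock values are purely illustrative, not a recommendation for any actual chip:

```python
def find_safe_clock(base_mhz, step_mhz, max_attempt_mhz, is_stable):
    """Step the clock up one increment at a time, then settle
    below the last configuration that passed stability testing."""
    last_good = base_mhz
    clock = base_mhz + step_mhz
    while clock <= max_attempt_mhz:
        if not is_stable(clock):
            break          # first failure: stop probing immediately
        last_good = clock  # this setting survived a full stress test
        clock += step_mhz
    # back off one step from the last passing clock for headroom
    return max(base_mhz, last_good - step_mhz)

# Illustrative stand-in: pretend this particular chip happens to be
# stable up to 3000 MHz (your silicon will differ -- see tenet 2).
demo = find_safe_clock(2600, 100, 3400, lambda mhz: mhz <= 3000)
print(demo)  # 2900: one step of headroom under the 3000 MHz ceiling
```

The back-off at the end is the whole point: the goal isn't the highest clock that ever booted, it's the highest clock you can trust, minus a margin.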
When people ignore these basic tenets, user errors will occur. In fact, ignoring these concepts practically defines user error. User error, I suppose, stems from hubris. The belief that your own desires, preconceptions, and assumptions (without the balancing effect of fact-finding and testing) somehow override the concepts above is nothing but naked arrogance.
Honesty with oneself is probably the one true way to avoid making disastrous mistakes with any machine. This requires a kind of sympathy for the machine. Though you are always the master of the machine, a kind and wise master understands that all things have limitations. The sooner this is understood, the sooner we'll see a decrease in user-induced failures and disasters.
As always, thanks for reading! I welcome your thoughts and comments.