Originally Posted by guitarmageddon88
I recently started using OCCT and I completely trust it now. It has found errors within 4 minutes at the longest; I bump up the voltage and the OC seems perfectly fine. If I see a random artifact in the future, I'll just bump the voltage up, no big deal. I was wondering about shader complexity, though. In the OCCT menu it says "lower is better for nvidia, higher for ati," so I have been using between 3 and 5. Is higher/lower going to put more stress on the card? I've noticed changing the complexity does not affect my GPU usage or memory usage, so what does it do?
My guess is that it's supposed to let you control the balance between per-frame workload and frame rate, so your cards are doing a lot of work per frame but still pushing enough frames for an error to be detected quickly.
When I start a test my FPS is very high but it gradually drops; I think that's the shader complexity auto-tuning itself to suit my cards. Regardless of my OC or shader complexity settings, I end up at around 8-11 FPS once it settles down.
I'm not really sure what it does, but it doesn't seem to change much in my experience.
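For what it's worth, here's roughly how I picture that trade-off working. This is purely a sketch of the idea in C, not OCCT's actual code; the function names, loop counts, and the complexity value are all made up for illustration:

```c
#include <stdio.h>
#include <math.h>

/* Sketch only: my guess at how a "shader complexity" knob might work.
 * Each frame does (complexity x N) units of math, then the result is
 * compared against a reference; a mismatch would be an artifact/error. */

static double render_frame(int complexity)
{
    double v = 1.0;
    /* Higher complexity = more math per frame = fewer frames per second. */
    for (long i = 0; i < (long)complexity * 10000000L; i++)
        v = v * 1.0000001 + 1e-9;
    return v;
}

int main(void)
{
    int complexity = 3;  /* stand-in for the slider in OCCT's menu */
    double reference = render_frame(complexity);

    for (int frame = 0; frame < 5; frame++) {
        double result = render_frame(complexity);
        /* The math is deterministic, so any difference from the
         * reference means the hardware miscalculated under load. */
        if (fabs(result - reference) > 0.0)
            printf("error detected on frame %d\n", frame);
    }
    return 0;
}
```

If that's anywhere near right, the error check only happens once per frame, so a higher setting would mean more stress per frame but fewer checks per second. That would fit the "lower is better for nvidia, higher for ati" hint being about which balance catches errors fastest on a given architecture, though that's just my reading of it.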
EDIT: Just ran it at 607MHz and 900MHz core with a single card, and at 815MHz core with SLI.
Shader complexity 1 vs. 8 didn't seem to make any difference in any of those cases.