

Registered · Joined · 630 Posts
I will explain it again: MSI Afterburner does, by design, offer core voltage control with good precision.
And the funny part is that it can eliminate GPU core voltage fluctuation too.

Unfortunately for me, while MSI Afterburner now controls the card perfectly, I cannot enjoy the benefits of the step-up OC, because it steals a portion of valuable CPU time.
The final benchmark score is lower with MSI Afterburner active than with the card running at stock without Afterburner loaded. :)

I am now examining the option of getting an Intel Core 2 Quad Q9650 (12M cache, 3.00 GHz, 1333 FSB), Socket 775.
It should boost the FPS by approximately 25%: when the minimum FPS becomes 25, the maximum will be 80 and the average will reach the magic number of 59~60.

Using AIDA64, I made another discovery about my monitor today: the maximum resolution at which it can do 75 Hz with 32-bit color is 1280 x 960.
At 1280 x 1024 @ 75 Hz my Dell delivers only 16-bit color.
Therefore, with such baby steps of optimization, I am getting closer to an acceptable standard of performance.
 

Like a fox! · Joined · 2,724 Posts
Even your voltage is going to change, by the way. The voltage is tied to the core clock. If your core clock drops from any of the usual factors (not enough load on the card, too hot / thermal limits, power limits) then the voltage will drop too, based on what the core clock changes to. Nvidia's newer cards all do this, from the 1000 series through the 2000 and 3000 series. You wouldn't want it to run at the voltage required for P0 (gaming at full speed) while you're idle at the Windows desktop, for example.

What you should do with the curve editor is only change the maximum voltage while under "full load" where it enters P0 (you can see which P-state it's in with the Nvidia Inspector program open; it's free, find it on Google) and leave all other voltages alone. Generally the best way to overclock Nvidia cards is with an offset of say +23 MHz or +50 MHz and then let the card automatically cycle up/down depending on what it needs.

On the 1060 cards "out of the box" with a power limit, you might actually get better overclocks by under-volting, i.e. reducing the voltage when it boosts up in higher-demand titles. More voltage = more power usage, and with a power limit that will make the card hit the limit at a lower clock. Sometimes if you reduce the voltage so it uses less power, it will boost higher for longer.
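For anyone who wants to watch that cycling without keeping Inspector open, here is a minimal monitoring sketch. It assumes Python with the pynvml bindings (nvidia-ml-py) installed; the one-second interval and GPU index 0 are arbitrary choices, not anything from the post above.

```python
# Minimal boost-behaviour monitor, assuming `pip install nvidia-ml-py` (pynvml).
# Prints P-state, core clock, temperature and board power once per second so you
# can watch the card cycle up/down as load, thermal and power limits change.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust the index if needed

try:
    while True:
        pstate = pynvml.nvmlDeviceGetPerformanceState(gpu)   # 0 = P0, the full 3D state
        core = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        temp = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
        power = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0  # NVML reports milliwatts
        print(f"P{pstate}  {core} MHz  {temp} C  {power:.1f} W")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```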
 

Registered · Joined · 630 Posts
I warmed the card up for 45 minutes until the entire card was thermally stabilized, and then I did my magic.
In simple English: I am now 100% aware of the sweet spot of this specific PCB.

Another person might arrive at the idea "I will stress my card at a specific core MHz", and this is the mistake that everybody makes.

Any card operating at its sweet spot with only 50% fan usage in winter can compensate for summer temperatures too.
The highest indoor temperature in my location is 34 C, in August (thermal peak).
I did my testing at 18 C, therefore I have 22 C of headroom.
The current 60 C temperature might increase to 82 C max, and only in benchmarks.
In games the temperature will be much lower.
 

Like a fox! · Joined · 2,724 Posts
Sounds good. Also another reminder for you: Remember to keep vsync on at all times so that you limit your video card to your monitor's maximum refresh rate. This will help the card not run as hot, reduce temps, and help it boost higher when it needs to because the temps may be lower.
 

Registered · Joined · 630 Posts
Yesterday I had an experience related to V-Sync: I tested my old game Unreal 2 with the new graphics card, and the NVIDIA driver started crashing and recovering.
The day before, I had done a fresh, identical driver installation, and it reverted V-Sync to On without me being aware of it; I always keep it off.
It took me 30 minutes of troubleshooting; once I started investigating the issue as recorded in the Windows event log, I found a few related messages, but users were unaware that this error code relates to V-Sync when the driver crashes.
With V-Sync off, the game worked smoothly and even the colors recovered to the originals; the old AMD 64-bit drivers were damaging the colors of specific surfaces due to broken compatibility.

Now I will give you something to remember too.
We humans make our electronics to be robust, and we use them in normal and in extreme applications.
Extreme applications have an operating range of minus 124 Celsius to plus 124 Celsius (our satellites in orbit); their components get stressed under those conditions every single day.

Normal applications have an operating range of 55 Celsius up to 120 Celsius, and NVIDIA set the thermal limiter at 93 C; this helps them minimize even further the percentage of GPU failure events.
When I power on my oscilloscope or my benchtop multimeter, I can trust the measurements only once it has warmed up, in about 25 minutes (internal temperature around 45 Celsius).
When you power on your television set, the colors are blurred and the backlight is dim; as soon as it warms up, it comes to full performance.
VGA cards are no different.

It is a huge mistake to use a water block over low-power thermal sources; or at least keep the water temperature no lower than 50 C.
 

Like a fox! · Joined · 2,724 Posts
That is not how Nvidia video cards work in regard to temperatures. I will try to explain: with Nvidia 1000 series video cards you DO NOT want them to run hot. The way Nvidia has programmed these cards, the hotter they run, the lower the clock speed they run. I do not remember the EXACT formula but it works out to something like this: for every +10 C it reduces the card's clocks by -25 MHz, starting from 23 C and going upwards. That means if you can reduce the operating temperature of your card from, say, 80 C to 70 C, then it will boost another +25 MHz on the core clock. If you can reduce the temps from 80 C to 60 C, it will boost +50 MHz more, etc., all the way down as far as you can go with it. This is part of the video cards and you cannot disable this function.

I also own a GTX 1080 Ti in my other computer. On air cooling, where it would run 80-85 C on the stock cooler, I could only ever get it to boost to 1950~1975 MHz core speed, and often it would drop down to around 1880 or 1900 MHz. I have since put it on a large custom water loop where I can keep it < 40 C at all times, and now it's able to run 2126 MHz stable and never drop down, for the most part. The second it hits 40 C it will drop down to 2100 MHz. During the winter months, when it was super cold here, I was able to keep it < 30 C and enjoy using it at 2155 MHz. But now that it's warmer I cannot use that speed on that card any more.

This is how all Nvidia video cards from the 1000, 2000, and 3000 series work. If you can get the core temps to run colder and reduce the voltage so it uses less power, you should be able to get it to boost a lot higher. At least until it hits the power limit of the BIOS.
 

Facepalm · Joined · 9,883 Posts
Pretty sure on Pascal it was -15 MHz for every 6 C temperature increase, starting at 38 C. Ampere is slightly different: either a higher starting point and an 8 C step instead, with some weird +15 MHz rise somewhere around 48 C for some reason (then the normal drops).
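Taking those figures at face value (forum-reported numbers, not anything published by NVIDIA), a quick sketch of what cooling buys you on Pascal would look like this:

```python
def thermal_offset_mhz(temp_c, start_c=38, step_c=6, drop_mhz=15):
    """Approximate Pascal boost-bin loss from temperature, using the figures
    quoted above (-15 MHz per full 6 C step over 38 C). Not an official formula."""
    if temp_c <= start_c:
        return 0
    return -drop_mhz * ((temp_c - start_c) // step_c)

print(thermal_offset_mhz(80))   # -105 MHz worth of bins lost at 80 C
print(thermal_offset_mhz(60))   # -45 MHz at 60 C
print(thermal_offset_mhz(60) - thermal_offset_mhz(80))   # cooling 80 C -> 60 C recovers ~60 MHz
```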
 

Like a fox! · Joined · 2,724 Posts
Thank you for the clarification. That's why I wrote in my post I wasn't sure of the EXACT formula Nvidia uses. Now I know.
 

Registered · Joined · 630 Posts
To me, the entire card is not just the GPU.
I am also now aware that this card does a lot of switching of voltages and frequencies, and there is nothing wrong with that.

Every single resistor, capacitor, and MOSFET should warm up to 50 C, so that all voltages reach the so-called calibration point.
You read 1.060 V in the software, but it is not actually 1.060 V until the entire PCB reaches the proper temperature.

Power limiter control is great, but most people forget that they will never use the entire 120 W TDP, because a portion of it will be lost as thermal losses in the energy conversion.
A shunt mod will simply increase heat generation, and the GPU temperature will be higher; therefore it will drop the clock instead of increasing it.

I also challenged myself to understand how special my GPU could be as a silicon batch; as far as I remember, a chip (CPU or GPU) can be classified as special when it holds a specific clock at a much lower core voltage.
Currently I do not have, and will probably never find, such comparison data, because no one has thought to use such a scientific approach to GPU testing and collect it.

I am going to pause my own OC testing, because this card is a great performer at its stock OC: theoretically the boost clock is 1759 MHz, yet in every benchmark the GPU delivers a 1949 MHz clock and never falls below that at 60 C.
The worst clock was at 63 C, and that was 1898 MHz.
The picture might change as soon as I replace the CPU with a better one (12~19% higher FPS); until then I will exercise patience.

[Screenshot attachment]
 

Like a fox! · Joined · 2,724 Posts
A shunt mod will simply increase heat generation, and the GPU temperature will be higher; therefore it will drop the clock instead of increasing it.
This is not true; please do not spread this on the internet as misinformation. I'll explain: before modifying my 1060 I was hitting the power limit easily, even at 1850 MHz core speed. I could not raise the voltage at all, and my card was limited to 1850 MHz core speed no matter what I did or how cold I could keep it. Additionally, I could only put +200 MHz on the memory clock, all because of the power limit.

Now, with the shunt mod on the card, I can increase the voltage, which allows me to run it up to 2100 MHz core speed, and it will stay there and hold 2100 MHz until the card reaches 70 C, which would reduce it to 2011 MHz. Fortunately MSI put a very nice cooler on it; it is able to keep the core < 60 C and allows me to hold 2100 MHz at all times, even in games that run the card at 100% load and in benchmarks. Typically this card only runs around 50-53 C, 55 C max, even at 2100 MHz, and now I can also run it with +800 on the memory. I was able to get a new, higher core clock and memory clock with the shunt mod. It does not and will not REDUCE clocks, at least as long as it's mated to a fan cooler that can handle the extra heat. It allows for higher overclocks with no power limit at all, as long as it can be cooled properly. I theorize that if I'm able to get the core down to < 40 C with an overclock, I may see this card run even 2200 MHz or faster stable. I won't know, because I can't seem to find a water block compatible with it; I would prefer a full-cover one that also cools the VRM.

I'm trying to help you understand how the cards work: With Nvidia's Pascal and newer video cards COLDER is best, not hotter. We want the new cards to run as cold as we can possibly manage to get the most out of them.
 

Registered · Joined · 630 Posts
I'm trying to help you understand how the cards work
This is well understood, but we are living in a world where no two cards of identical brand and model will demonstrate 100% identical behaviour.
In electronic component design, it is accepted that component values may deviate up to 5% (plus or minus) in operation.
In an extreme example: my VGA has a critical component at minus 5% from the ideal value, and your VGA has the identical component at plus 5% (of tolerance).
Our difference will be 10% in such an extreme example, and this has to do mostly with power circuitry tolerances.
In freshly made electronics (the past 10 years), because tight-tolerance components are now cheaper to manufacture, all parts in use are at 1% tolerance.
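To put plain numbers on that worst case (simple tolerance arithmetic, nothing specific to any particular card):

```python
# Worst-case spread between two nominally identical parts built to +/-5% tolerance.
nominal = 1.0
low, high = nominal * 0.95, nominal * 1.05
print(f"spread relative to nominal: {(high - low) / nominal:.1%}")   # 10.0%
print(f"spread relative to the low part: {(high - low) / low:.1%}")  # ~10.5%
```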

Regarding my own VGA, I am less confused now after using EVGA Precision X1 instead of Afterburner.
I received a VGA card with a part code that is not recognizable in the retail market.
I am not saying that this is a GTX 1060 burning nitro fuel :), but when I activate the boost clock in EVGA X1 it sits at 1949 MHz with 1.050 V.


Preliminary testing with EVGA X1 gave me clues that my card, at 101% power limit and 98% GPU usage, can keep up at 1023 MHz while warm at 58~62 C with the stock BIOS-controlled DC fan profile.
The Heaven benchmark is too old; Valley Benchmark 1.0, instead, is a better representative of the GPU stress caused by modern game titles, but it is not perfect either.
With EVGA X1 I do not have manual DC fan speed control.
I am still in the process of getting to know my enemy better (the MSI iGAMER OC); it is now showing itself to be a very good performer, and additional tweaking may not be required and might even be avoided entirely.

Our exchange of facts is a tremendously productive task and delivers useful clues to this community.
Thanks.

[Screenshot attachment]
 

Registered · Joined · 630 Posts
This time I will contribute by suggesting a method that I consider the best so far, so everyone can test their own VGA card's actual boost clock under close-to-realistic conditions.
One benchmark pushes the GPU power limiter hard, another stresses GPU usage beyond what a game will do, and a third stresses something else.
All the resulting confusion is well justified; we remain unable to have our own measuring standard, and therefore we stay lost.

Therefore here it is: I am Kiriakos from Greece, this is overclock.net, and here is my own suggested comparison method, which I have the ambition of becoming yours too.

Few steps to follow:
1) Set the NVIDIA driver to Prefer Max Performance = reference 1544 MHz clock at all times (other than gaming), and reboot.
2) Install .NET Framework 4.6.1 and then the EVGA software, and reboot.
If you have a higher version than .NET Framework 4.6.1 installed, uninstall it (unnecessary layers of .NET are PC performance thieves).

3) Start EVGA X1 and enable the boost clock.
4) Use the FurMark GPU stress test to warm up the GPU to approximately 50 Celsius, and stop when the temperature is about there.
5) I made the discovery that FurMark at 640x360 uses a constant 98% GPU usage, which is fantastic; additionally, FurMark at 640x360 causes GPU power spikes of up to 114%, out of the 117% max that we can control through software. In the EVGA software, set the power limiter to the max and hit Apply.
With this testing method there is a stable GPU load, power consumption stays below the power limiter, and a stable boost clock is delivered at a very high temperature.

Benefits:
1) You will learn the truth about your worst boost clock at the most extreme thermal condition.
2) You will be able to compare your GPU cooler's actual performance with my own = MSI GTX 1060 iGAMER single fan, 85 mm blade diameter.
3) You will discover the total range of boost clock that your card delivers = cold vs. fully warmed, and then you can mathematically calculate your average boost clock (a small logging sketch follows this list if you want to automate that).

4) This is a strict evaluation test of your card's PCB and electronics as they came from the factory; it is forbidden to over-volt your GPU in this test if you plan to post your results in this topic.
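If you prefer not to read the boost clock off the X1 graph by eye, a small logging sketch like the one below can record clock and temperature during the FurMark run and report the min/average/max boost clock for you. It assumes Python with pynvml (nvidia-ml-py) installed; the 120-second duration and one-second interval are arbitrary choices, not part of the method above.

```python
# Log core clock and GPU temperature while FurMark is running, then summarise
# min / average / max boost clock. Assumes `pip install nvidia-ml-py`.
import time
import pynvml

DURATION_S = 120    # sample for as long as your test run lasts
INTERVAL_S = 1.0

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

clocks, temps = [], []
for _ in range(int(DURATION_S / INTERVAL_S)):
    clocks.append(pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS))
    temps.append(pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU))
    time.sleep(INTERVAL_S)
pynvml.nvmlShutdown()

print(f"boost clock: min {min(clocks)} / avg {sum(clocks) / len(clocks):.0f} / max {max(clocks)} MHz")
print(f"temperature: min {min(temps)} / max {max(temps)} C")
```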

One reason that I love electronics is that this universe has written laws, written best practices, and written specifications; we have electrical parameters along with measuring tools of extreme precision and speed, every single application obeys those rules, and I never feel lost in there. ;)

The major success of this testing method (Kiriakos T.) is its high repeatability.
You may also consult the picture below if you get lost in the instructions.

When product reviewers of PC hardware also use best practices, they win trust and credibility, especially when their content is read in detail by other experts.

[Screenshot attachments]
 

Like a fox! · Joined · 2,724 Posts
Few steps to follow:
1) Set the NVIDIA driver to Prefer Max Performance = reference 1544 MHz clock at all times (other than gaming), and reboot.
2) Install .NET Framework 4.6.1 and then the EVGA software, and reboot.
If you have a higher version than .NET Framework 4.6.1 installed, uninstall it (unnecessary layers of .NET are PC performance thieves).
I wanted to make a couple of comments about this in case someone else finds this thread in the future. On #1 above: setting the video card's power management option to Prefer Maximum Performance in the Nvidia control panel is okay for temporary testing and for benchmarks, but you forgot to remind everyone to put it BACK to Optimal Power for normal operation and gaming. Using the maximum-performance option prevents Nvidia's boost algorithms from running and does not allow the card to adjust clocks when gaming. It also prevents the card from dropping down to the ~100 MHz idle clocks while at the Windows desktop. Everyone should have this set to Optimal Power for normal operation of the card(s).
Secondly: Microsoft .NET has no effect whatsoever on performance. It does not run any processes in the background of the computer and does not affect anything. I know this because every single process running on the system affects the benchmark score and is important for world records; I know what all of the "core Windows" processes are and what they do, and .NET is not one of them. What I think you are referring to is the Microsoft .NET optimization service, which is listed as "Microsoft .NET Framework NGEN" in Windows services. THAT runs a background process. You can safely set that service to "Disabled" in Windows services and it will not affect anything; disabling it kills the background processes and does not allow them to run, and NGEN is not required to be running for programs that need .NET to function. But no one should ever uninstall any .NET version for any reason. It is required for many programs used in Windows to work. Any version that is installed should remain installed.

4) Use the FurMark GPU stress test to warm up the GPU to approximately 50 Celsius, and stop when the temperature is about there.
5) I made the discovery that FurMark at 640x360 uses a constant 98% GPU usage, which is fantastic; additionally, FurMark at 640x360 causes GPU power spikes of up to 114%, out of the 117% max that we can control through software. In the EVGA software, set the power limiter to the max and hit Apply.
Additionally: as you might have noticed, FurMark only runs the video card at 98%, not 100%. FurMark should not be used in 2021 to test anything to do with any video card. Nvidia and AMD drivers detect FurMark as a "power virus" and will not allow the card to run at full power while it is running. It is not a very good test and is mostly useless today. I would suggest you use 3DMark instead; it can be configured to run in a loop. 3DMark 11, for example, can be configured to run in a loop and will completely max out any video card when run in Performance mode @ 1920x1080. You can download 3DMark 11 for free and use the free public key to access the pro version to configure it for 1920x1080 here: Futuremark Legacy Benchmarks.

However, you should really be using either Superposition or the modern 3DMark with Fire Strike and Time Spy to test your card. Both allow modern cards to run at full speed at 99%/100% utilization.
 

Registered · Joined · 571 Posts
Isn't "Adaptive" the best power setting to use? It lets the card downclock at idle and boost when gaming. I thought "optimal power" limits the max performance.

Nvidia and AMD drivers detect FurMark as a "power virus" and will not allow the card to run at full power while it is running.
I think this is a bit of a misconception. The card still runs at the max power limit, but it reaches the limit at a lower voltage than in normal 3D benchmarks. You can watch "PerfCapReason" and the power draw in GPU-Z if you want to compare. So FurMark draws more current than typical workloads (similar to P95 with AVX), which makes it hit the PL at a lower voltage, which limits the clock speed.
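A toy dynamic-power model makes that mechanism concrete. This is only an illustration (P ≈ k · V² · f with made-up constants), not NVIDIA's actual power accounting:

```python
# At a fixed power limit, a workload with a larger switching factor k
# (heavier current draw, FurMark-style) only fits at a lower clock for the same voltage.
def sustainable_clock_mhz(power_limit_w, k, volts):
    return power_limit_w / (k * volts ** 2)

PL = 120.0  # watts, a GTX 1060-style limit
print(sustainable_clock_mhz(PL, k=0.057, volts=1.05))  # lighter game-like load   -> ~1909 MHz
print(sustainable_clock_mhz(PL, k=0.070, volts=1.05))  # heavier FurMark-like load -> ~1555 MHz
```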

100% a useless test
 

Registered · Joined · 630 Posts
1) NVIDIA driver power management: use it as you wish for daily use, but before following my testing method, follow my instructions all the way.

2) All revisions of the Microsoft .NET Framework are software layers for developers to use in Windows Server related applications.
It is a monster filling up your Windows registry, slowing down system boot, and wasting system memory.

Today I went hunting again for the ultimate OC of my card's VRAM.
I got these nice numbers; the OC was tested for one hour with no crash, and tomorrow I will work out what they are worth.
+575 offset = 2289.5 MHz (GPU-Z reads 2290, X1 reads 4579 MHz), effective clock 9158 MHz, +28.7% OC (574.861 ≈ +575)
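For anyone puzzled by the three different memory figures in that line, here is one hedged reading of how they relate; GDDR5 reporting conventions differ between tools, so treat this as an interpretation rather than a statement about what each tool measures:

```python
effective = 9158                 # "effective" GDDR5 rate quoted above, in MHz
command_clock = effective / 4    # the figure GPU-Z style tools show -> 2289.5 (rounded to 2290)
ddr_rate = effective / 2         # the double-data-rate figure X1 shows -> 4579
print(command_clock, ddr_rate)
```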
 

Registered · Joined · 630 Posts
Isn't "Adaptive" the best power setting to use? It lets the card downclock at idle and boost when gaming. I thought "optimal power" limits the max performance.
Prefer Max Performance = keeps the card at its default clock and does not let it dive any lower than that when gaming.

Adaptive = equivalent to an economy mode, with an added delay of milliseconds between steps.


100% a useless test
Do not play the game; I have nothing to gain if you do.
But I will share one thought that is eating my brain: all of you with inferior GTX 1060 revisions will lose face if you post a screenshot with a lower score. :devilish:
 

Registered · Joined · 571 Posts
3DMark is the standard Epeen metric.

Here's my 3GB single fan EVGA 1060 SC in Time Spy

2177 MHz max, 2161MHz average
2479 MHz memory (Samsung)

5th place overall for 1060 3GB single card if you list by graphics score
(ignoring 1st because it looks fake)
 

Like a fox! · Joined · 2,724 Posts
2) All revisions of the Microsoft .NET Framework are software layers for developers to use in Windows Server related applications.
It is a monster filling up your Windows registry, slowing down system boot, and wasting system memory.
Many games require Microsoft .NET to run. Many programs in Windows require .NET to run. If you uninstall it you will not be able to play some games, and certain programs will not run at all and will crash. It is absolutely a requirement for using Windows. Even if you uninstall it, most games will re-install it when you first install the game anyway. The main programs I know of that require .NET to function are Nvidia Inspector and/or Nvidia Profile Inspector. I would consider both very important to anyone that owns an Nvidia video card, so we can modify the driver profiles in ways beyond what is allowed through the Nvidia control panel.
 

Registered · Joined · 630 Posts
We are both positive-thinking people, but in my personality the troubleshooter and problem-solver side dominates, at any cost, or else I do not get paid.
This link might help you understand that regular computer users do suffer when MS delivers imperfect software.

 

Registered · Joined · 630 Posts
3DMark is the standard Epeen metric.

Here's my 3GB single fan EVGA 1060 SC in Time Spy
2177 MHz max, 2161MHz average
2479 MHz memory (Samsung)
You have missed the point, and I will try one last time.
I suggested a combination of free tools for inspecting the boost clock at high temperatures.
FPS metrics are relative to each system's own CPU performance.
The FPS metric is something else entirely and irrelevant to the goal, which is discovering the quality of the hardware in your GTX 1060 card as it came from the factory.
 