
Registered · 966 Posts · Discussion Starter · #1
This post has been edited significantly to state my findings in an abbreviated, clear, and concise manner.

Post #6 contains the actual data as measured over 48 hours of folding, calculating, compiling, posting, then editing the data.

Post #8 contains theoretical data based on power savings derived from an iterative method of estimating power savings by using a more efficient PSU. Note that this introduces a margin of error, and although scientifically valid, the data are not actual measured values.

NOTE:

My intent in posting this thread, and my choice to post it in the Community Folding Project section, was to make anyone planning to donate equipment, or their own electricity costs and time, to the folding cause aware that what looks best on paper or in a competition is probably not the most economically or electrically efficient way to fold.

The intent of this thread is not to frighten people, nor is it to discourage anyone from folding. I deliberately posted this outside of the Team Competition forums because I wanted to be sure that I was not discouraging people from folding in the competitions or detracting from the competitive nature and good morale of the competitions.

If this post helps even one person decide what to fold and how to fold it, then I'll consider the few hours of writing and editing and the last two days of data collection and measurement worth every minute of my time.

Oftentimes, folders only consider the cost of the hardware they're going to fold on and the points-per-day (PPD) values when deciding what system or hardware to fold on, how far to overclock, and what settings to use when folding.

Some folders take it a step further and use PPD/MHz to get an idea of clock-speed or overclock efficiency for folding.

I know that people have mentioned this before, but what truly matters is not your PPD or your PPD/MHz but your PPD/kWh, or in layman's terms, your PPD versus your cost in utility bills. And what may matter even more to you is having the lowest energy consumption possible, regardless of your PPD.

This might be a good place to collect data (voluntarily, of course) and compare notes on the most power-efficient (and thus cost-efficient) folding setups. PPD is well and good, and of course Stanford wants the fastest turnaround time it can get on all WUs, but we should be concerned with providing research in the most cost-effective and energy-efficient manner possible, and in my recent experience, Fermi GPUs are not the way to do that.

Conclusions:

After conducting two days of data collection, I have compiled a summary of my recommendations for anyone who is serious about contributing to the Folding@home cause, whether by donating equipment or by donating their own cycles and electricity.

For anyone who wishes to contribute to the Folding@home project, the most efficient way to do so would be as follows. (This is the most efficient approach both in terms of new-hardware investment and in operating costs.)

  • Operate a "headless" (meaning no monitor, keyboard, mouse, or GPU) "satellite" folding machine networked to an ultra-high-efficiency home "server" that coordinates the satellites. (The server could be incredibly low cost and low power.)
  • Use a Linux distro optimized for Folding@home in a console (no GUI) format.
  • Use the most power-efficient motherboard possible. (Platinum 90+ is a good start.)
  • Use the most efficient PSU possible. (80+ Platinum or 80+ Titanium)
  • Choose a PSU wattage rating that will have peak efficiency at the desired operating load of the system. (The best way to determine this would be to purchase, tune, and operate the system on an old or borrowed PSU, then purchase the precise PSU to match.)
  • Use ECO DIMMs for RAM (1.25 V DIMMs for Sandy Bridge / Ivy Bridge).
  • Use the smallest die technology of the latest generation possible. Sandy Bridge and Ivy Bridge are both excellent choices for performance-to-power ratios. (The 2600K and 2700K SBs may have an advantage over the 2500K SBs. Further testing is required.)
  • Operate without a UPS if possible, but be aware that power spikes could be catastrophic, even through a surge protector.
  • Consider heavily overclocking if using an Intel K-series CPU (and most likely any of the "Core" processors). The additional power cost of overclocking the CPU is minor compared to the performance gains.
  • Even the most extreme overclock on a 2500K uses less power than running a reference GTX 580.
  • PPD (on a CPU eligible for bonuses) scales positively and non-linearly with clock speed. The higher the clock speed, the more PPD you receive and the more efficient the PPD/W value becomes, because of how Stanford's bonus points work.
  • Note that operating a Fermi card requires a copy of Windows, run either natively or in a VM or WINE environment within Linux. (No legitimate copy of Windows is free; Linux is free.) The savings from not needing to purchase an OS for the headless folding satellites and for the "server" that coordinates them are considerable.
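The non-linear bonus scaling mentioned above can be sketched numerically. This is a minimal Python sketch assuming the commonly cited quick-return-bonus (QRB) formula, credit = base × max(1, √(k × deadline / elapsed)); the base points, k-factor, and deadline used here are made-up illustrative values, not from a real WU:

```python
import math

def wu_credit(base_points, k_factor, deadline_days, elapsed_days):
    """Quick-return bonus: credit grows as the WU is returned faster.

    Assumes the commonly cited formula base * max(1, sqrt(k * deadline / elapsed)).
    """
    multiplier = max(1.0, math.sqrt(k_factor * deadline_days / elapsed_days))
    return base_points * multiplier

def ppd(base_points, k_factor, elapsed_days, deadline_days):
    # Points per day = credit earned per WU / days spent per WU.
    return wu_credit(base_points, k_factor, deadline_days, elapsed_days) / elapsed_days

# Doubling effective speed (halving elapsed time) more than doubles PPD:
slow = ppd(500, 2.0, 2.0, 4.0)   # WU finished in 2 days
fast = ppd(500, 2.0, 1.0, 4.0)   # same WU finished in 1 day
print(fast / slow)               # ~2.83x the PPD for 2x the speed
```

This is why a higher overclock improves not just PPD but PPD/W on bonus-eligible WUs: the credit per WU itself rises as frame times shrink.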

There are guides on how to set up headless folding satellites that will link to a main PC (which can be a very low power and inexpensive unit) that coordinates sending and receiving the WUs for the entire folding farm.

For the cost of building a system that can fold SMP and GPU simultaneously, one could build two satellite folding rigs and a low-cost "server" to coordinate them. In terms of operating costs, two highly overclocked headless 2500Ks can run at the same power consumption as a single complete system combining an overclocked SMP client and a reference GPU.

The total cost of a 2500K high-efficiency headless folding satellite unit is equal to or less than the cost of a single GTX 580 at this time, and the operating costs are significantly lower. (Even a highly overclocked 2500K draws 12% less power as a system than a reference-clocked GTX 580 does in a system operating at idle. That's the worst-case power savings; the best case is a 36% savings in operating costs, comparing reference CPU to reference GPU.)

A next-generation Xeon may be even more efficient both in terms of power consumption and PPD/W, but the initial cost could be prohibitive, especially when one considers the cost of server motherboards and registered ECC DIMMs.

Below is a more detailed conclusion of my findings, supported by the data that are included in my later posts:



  • The -advmethods SMP client with big packets enabled at a default CPU clock uses the least electrical power.

  • The most efficient client in terms of PPD/W is the -advmethods SMP client with big packets enabled on a highly overclocked CPU. The more overclocked the CPU, the better its PPD/W rating becomes. Stanford's bonus points are inversely and non-linearly proportional to frame time: the faster your frame time, the higher your bonus. For this WU, as long as computational performance scales linearly with CPU clocks, the PPD/W values will increase at a non-linear rate. This applies to all WUs, so long as one compares the same WU at different clock values. Perhaps at some point, due to thermal or current-limit throttling of the CPU core, the overclock would become detrimental, but as long as you can keep the CPU cool enough to avoid temperature and current/power limits, the higher the overclock, the higher the PPD and PPD/W values will be. A 2600K or 2700K should show considerably better PPD/W efficiency with minimal power increases. High overclocks on the 2500K (and most likely the 2600K, 2700K, and upcoming Ivy Bridge systems) are surprisingly electrically efficient. The additional power use is far less than I expected it to be.

  • The least efficient CPU client (or SMP client) still uses significantly less total power and produces significantly better performance in terms of PPD/W than the most electrically efficient or most PPD/W-efficient GPU client. To be blunt, the GPU clients are wretchedly inefficient.

  • A mild overclock (around 4.5 GHz on my system) on either the normal-methods SMP client or the -advmethods SMP client is required to match the PPD output of the normal GPU client (non -advmethods) at reference GPU clocks. A reference-clock 2600K or 2700K should match the normal GPU client at reference clocks.

  • Only the highest stable overclock (on my system) for the -advmethods SMP client is able to produce more PPD than the -advmethods GPU client when the GPU is overclocked.

  • The GPU client with the highest PPD, when not running SMP, is the -advmethods Fermi client with the core affinity lock in its default enabled position and the CPU either forced into a high clock state on all cores, or locked in a high clock state on the core with matching affinity to the GPU client. Speedstep can be left enabled and the core can remain idle, but performance in the GPU client will suffer slightly. As there are no bonuses for early completion of WUs with the GPU clients, lower clock speeds give better PPD/W, but the difference is relatively small. A mildly overclocked 2600K or 2700K should match the highly overclocked -advmethods GPU PPD.

  • The -advmethods GPU client uses considerably more power than the standard-methods GPU client, by 18 to 27%! (And the PPD increase from using it is only 5% when running the GPU client alone, or 8% when running alongside SMP clients. More details on that interesting phenomenon below.)

  • Maximum overall PPD in the GPU client in -advmethods mode is not impacted in the slightest by the SMP client running with the core affinity lock disabled (a non-default setting). Not only does the running SMP client have no negative impact on GPU PPD performance; running SMP (or keeping the CPU cores at maximum clocks by any other means, even Prime95) actually increases GPU -advmethods performance.

Just to drive home how inefficient the GPU client is: at reference CPU and GPU clocks, running the lower-intensity normal-methods GPU client, my lowest total system power was 328 W. At 4.9 GHz and 1.416 Vcore, the -advmethods SMP client was using only 245 W, and reference CPU clocks on the same WU drew 183 W. The GPU client is extremely inefficient; the only reason to use it is to fill a slot in the Team Competition, or to fold for high PPD values on an incredibly outdated CPU.

Below is my opinion only. I am disheartened by how inefficient the GPU clients are and how disproportionately low the PPD values are, both in terms of initial equipment investment and in annual operating costs. I feel that Stanford has done the folding community a disservice with the GPU point system as compared to the SMP system. Perhaps someone on the Folding@home advisory panel will see this post and take this information to the Folding@home team.

In my opinion, Stanford should reconsider how GPU WUs are valued (or assigned) to reflect the significantly higher electrical costs associated with them. I believe that Stanford is doing folders, and our electrical power resources as both a local and global community, a disservice with the GPU client point values. Points are intangible and are only supposed to reflect the value of the research. As such, Stanford is tacitly stating that SMP research is worth more than GPU research, even though GPU research costs us considerably more in initial investment and operating costs than CPU SMP research does.

I believe that Stanford Folding@home needs to restructure the point system to address this disparity. Stanford Folding@home could accomplish this in a number of ways:

  • Stanford Folding@home could implement a K-factor bonus for GPU WUs just as they do for SMP WUs.
  • Stanford Folding@home could increase the base point values on GPU WUs.
  • Stanford Folding@home could implement both a K-factor bonus for GPU WUs and a change to the base values of GPU WUs, increasing or decreasing them as necessary for balance. (This is probably the most attractive alternative to me.)
  • Stanford Folding@home could decrease the base point values on SMP WUs. (This is not my preference, and probably the least productive of the alternatives, but it is an alternative.)
  • Stanford Folding@home could give up on GPUs entirely, move GPU WUs to distributed CPU projects, and accomplish the same research at a much lower electrical burden. (This is probably impossible, due to how greatly parallel stream-processing architectures differ from SMP architectures.)

As things stand right now, I see absolutely no reason to recommend folding on a GPU to anyone, for any reason, other than to complete the Stanford Folding@home GPU projects.

In the interest of continuity, and so as not to make the responses to this post seem out of place, this portion of the post contains some of my PRELIMINARY results and analysis, which the responses relate to. Please note that these data have changed, both in scope and format, to much more presentable and scientifically accurate data sets in the following posts.

My full rig is in my signature below, but the pertinent components are listed here, along with voltages and frequencies:

  • CPU: i5-2500K overclocked to 4.7 GHz / 1.336 - 1.344 V (average 1.340 V) (as tested)
  • RAM: 2x PC3-12800 G.Skill CL8 / 800 MHz (1600 effective) / stock at 1.500 V (as tested)
  • Motherboard: Asus P8P67 WS Revolution Rev B3 (92% platinum power efficiency rating) (as tested)
  • HDD: WDC WD2002FAEX-007BA0 (2 TB Caviar Black 7200 RPM SATA 6.0 Gb/s) (as tested)
  • Graphics: EVGA GeForce GTX 580 SC (1.5 GB VRAM) overclocked to 904.5 MHz core / 1809 MHz shader / 2106 MHz memory / 1.113 Vcore (as tested)
  • OS: Windows 7 64-bit Professional
  • PSU: Corsair TX850W (CMPSU-850TX) (load as tested is one of three values: 235 W (28% load / 83% efficient), 450 W (53% load / 84% efficient), and 570 W (67% load / 83% efficient))
  • UPS: APC BX1500G 1500VA / 865W (87% efficiency at full load 865 W / 86% efficiency at half load 432.5 W / unknown efficiency at 235 W, assumed to be >80%)

With standard Windows 7 64 Professional background processes running, Open Hardware Monitor, evga precision for software controlled GPU fan speeds, and NVIDIA Inspector for multi-display power saver and overclock settings running as well, I tested the following clients:

  • Windows XP/2003/Vista/2008/7 SMP2 client console version 6.34 (32 bit) running -SMP 4 / -verbosity 9 / -advmethods / bigpackets / checkpoint=30 / nocpulock=1
  • Windows XP/2003/Vista/7 GPU3 (required for Fermi) no-nonsense console client version 6.41 (32 bit) -advmethods / bigpackets / -verbosity 9 / priority=96 (low) / checkpoint=30 / nocpulock=1

From that, I determined that my average PPD on SMP was 26,000 and my average PPD on Fermi was 20,000. This is over 323 WUs between both clients in the last two months.

My PPD/MHz on SMP was 5.5 and my PPD/MHz on GPU was 11.

However, I measured the total power drawn at the UPS at the following values:

  • 235 W total power use for SMP and no other clients.
  • 450 W total power use for Fermi and no other clients.
  • 570 W total power use values with both SMP and Fermi client combined.

I then applied the following simple formulas (which are more unit conversion than anything else):

( Total Power (W) / (1000 W/kW) ) * ( 24 hr/day ) * ( 365.25 day/yr ) * ( 1 yr / 12 mo ) = Total kWh / month

( Total kWh / month ) * ( utility cost per kWh ) = Monthly Cost ( cost / mo )

( Monthly Cost ) * ( 12 mo / yr ) = Annual Cost ( cost / yr )

I then verified this at my utility meter, which measures kWh (kilowatt-hours), over an hour of operation with no cyclic loads, and compared it to the same no-cyclic-load condition with no clients running. My theoretical values based on the UPS readout were accurate at the utility meter to within +/- 2%.

Conclusion:

  • SMP client in my configuration, with my system, and at my overclock: 110.64 PPD/W
  • Fermi client, in my configuration, with my system, and at my overclock: 44.46 PPD/W
  • Both clients running simultaneously, in my configuration, with my system, and at my overclocks: 80.70 PPD/W

The SMP client yields 30% more PPD at 47.8% less power use, making it 2.49 times as efficient as the Fermi client in terms of PPD/W.
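These PPD/W figures are simple divisions of the averages above by the measured wall power. A quick Python check (values from this post; the Fermi division of 20,000 PPD / 450 W comes out to ~44.4 PPD/W, so the 44.46 listed presumably came from an unrounded PPD average):

```python
def ppd_per_watt(ppd, watts):
    # Efficiency metric: points per day produced per watt of wall power.
    return ppd / watts

print(round(ppd_per_watt(26000, 235), 2))  # SMP only:    110.64
print(round(ppd_per_watt(46000, 570), 2))  # SMP + Fermi: 80.7
```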

  • SMP client in my configuration, with my system, and at my overclock: 2060 kWh / year, or 171.6675 kWh / month
  • Fermi client, in my configuration, with my system, and at my overclock: 3944.7 kWh / year, or 328.725 kWh / month
  • Both clients running simultaneously, in my configuration, with my system, and at my overclocks: 4996.62 kWh / year, or 416.385 kWh / month

The US average electrical cost is 9.83 cents / kWh (nationwide household average, 2010, U.S. Energy Information Administration).

My total costs, if I paid the US national average for power, would be as follows (rounded to the penny):

  • SMP client in my configuration, with my system, and at my overclock, is $202.50 / year or $16.88 / month
  • Fermi client, in my configuration, with my system, and at my overclock is $387.76 / year or $32.31 / month
  • Both clients running simultaneously in my configurations, with my system, and at my overclocks are $491.17 / year or $40.93 / month

Furthermore, in the U.S. most electrical companies have a tiered price per kWh. They'll have something similar to the following:

0-500 kWh / month = $0.075 / kWh

501-1300 kWh / month = $0.0883 / kWh

1300+ kWh / month = $0.1102 / kWh

(The above values are simply examples, not my costs or rates, and prices vary by region and by the type of power used to supply the region; e.g., power in Hawaii is considerably more costly than power in Idaho.)

So if you're already using around 1,100 kWh / month (based on an annual average) and you tack on a system like mine at 416 kWh / month, the first 200 kWh of your SMP/Fermi load gets billed at around 9 cents / kWh and the remaining 216 kWh at around 11 cents / kWh. (Using the table above and the hypothetical values I just listed, folding on my system under those utility parameters would cost an extra $41.46 / month, or $497.52 / year.) (Rather than folding, I could have bought a second GTX 580 for SLI or paid my car insurance in full...)
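Marginal cost under a tiered schedule is easy to compute by billing each bracket separately. A Python sketch using the hypothetical schedule above (these rates and the 1,100 kWh baseline are the illustrative numbers from this post, not real rates):

```python
# (bracket upper bound in kWh, $ per kWh) -- example schedule from above
TIERS = [(500, 0.075), (1300, 0.0883), (float("inf"), 0.1102)]

def bill(kwh):
    """Total monthly bill for a given usage under the tiered schedule."""
    total, lower = 0.0, 0.0
    for upper, rate in TIERS:
        if kwh <= lower:
            break
        total += (min(kwh, upper) - lower) * rate
        lower = upper
    return total

def marginal_cost(baseline_kwh, added_kwh):
    # Cost of the folding load alone, on top of existing household use.
    return bill(baseline_kwh + added_kwh) - bill(baseline_kwh)

# 1100 kWh/month household baseline + 416 kWh/month of folding:
print(round(marginal_cost(1100, 416), 2))  # ~$41.46/month
```

Note that the marginal rate matters, not the average rate: the folding load lands in the top brackets of the schedule.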

Now these dollar amounts are not what I pay, and I'm not going to discuss what my actual electricity rates are or what my actual utility bills are, but I am posting actual power statistics of my system in a 24/7 real-world folding use state.

Electrical costs add up very fast, and unless you're a college student with free electrical power, or in a unique rent situation where your electricity is covered, the costs of folding are higher than you might expect.

To make matters worse, if you are in a hot climate and you use your air conditioning, folding will heat up your PC room, causing your AC system to work harder to maintain a constant comfortable temperature; the hidden costs of that additional cooling are not something I have even considered here. (Of course, in a cold climate folding acts like a space heater, and you might reduce your heater run-time to keep the house warm, which won't reduce your power bill but will at least offset it some.)
 

9,170 Posts
Good post shadowfax.

I would like to add a few things for anyone who might be daunted by the numbers they see in front of them for cost and energy use.

First, he is using a UPS, which is yet another step in the chain from wall to hardware that the power must go through, and one where he loses a considerable amount of efficiency. I'm not saying UPSes are a bad thing, but it should be kept in mind by anyone a little surprised by the cost per month in his average: he is basically losing about 15% efficiency in two places, the PSU and the UPS, while with the right power supply it's possible to lose only ~8-10% once. That would make a significant difference in energy use and folding cost per month.

Another thing to consider is the power-use scaling of your hardware. A small overvoltage may not have a drastic effect on power consumption, but the phenomenon of people pushing much, much harder for that last 100 MHz to hit "zomg 5GHz!!!" can end up costing you 50 W for 100 MHz, which is just not worth it. This is especially relevant in the case of the 580 he is folding on here, because when overvolted these things guzzle power like no other.

He wanted to maximize his points per day and was less concerned with his efficiency because he is competing in a team competition here, but for any new prospective folders, his numbers are not the norm, and we could make them considerably lower even with similar hardware. His numbers are those of a folding enthusiast; for somebody just getting into folding who would simply like to contribute to this cause, the cost in energy consumption can be much less than what you see here.

Now, this is my preferred way of trying to lower energy consumption a little bit outside of the PC methods I mentioned above like voltage scaling and the UPS consideration. Using these LED bulbs I have basically offset the energy use of my computer folding 24/7.
Quote:
Originally Posted by juano View Post

It may or may not help you but I like LED lightbulbs for trying to reduce power consumption, they aren't cheap upfront but they do put out great light and save lots of energy. They have a few different wattages; a 6 watt that is similar to a 50 watt incandescent, a 7 watt ~ 60 watt, a 9 watt ~ 75 watt, and a 12 watt ~ 100 watt, so depending on how much light you need you could be saving a huge amount of power. I prefer the warm white temperature ones. Only possible downside to these is that they only put off light in 180 degrees as opposed to nearly 360 degrees of incandescent, it's not like they are spot lights but you just wouldn't want to use them in like a desk lamp that faces the ceiling if you wanted light below it.
I have a few of these bulbs and can highly recommend them, with the caveat of not pointing them in the opposite direction of where you need light. With these bulbs you could be saving 44 W, 53 W, 66 W, or 88 W compared to an incandescent putting out similar light. That also directly translates into a real reduction in the amount of time the AC has to run for those in hotter climates, saving more power.
 

Registered · 2,384 Posts
Thank you both for this, will be subscribing. Also, about the bulbs: I just use fluorescent bulbs, which cut my costs considerably, are cheap, and last for quite a while.
 

9,170 Posts
Ah yes, I forgot to mention that aspect: the LED bulbs are much more environmentally friendly than CFLs with their mercury-containing gas. There are also a bunch of other nice things about LEDs compared to CFLs, like the instant-on, zero flicker, and about half the power consumption of even a CFL; some CFLs also have problems overheating, which significantly shortens their lives when they are placed socket-up. So CFLs aren't terrible, but if I'm going to recommend something, then I feel better recommending LEDs as the more forward-looking solution.
 

100LB Rig Club · 2,579 Posts
This article deserves to be added to the new-folder help. It gives a fairly in-depth look at why costs are what they are. Yet, as juano mentioned, costs can be cut further on this system: a more efficient PSU and no UPS could cut bills by perhaps another 15-19%, representing a very real savings.

Also, by staying closer to stock volts on the GTX 580, you could shave over 10% there as well.

Folding DOES NOT have to be expensive. Dedicated 2600K/2700K rigs are even more efficient in PPD/W: a decent Z68 mobo (integrated graphics save power), CPU, high-speed RAM, a 212+ (or another affordable cooler), and a higher-efficiency PSU. Figure $600 here, but it can be done for much less if you are willing to look around.
 

Registered · 966 Posts · Discussion Starter · #6
EDIT:

I have restructured the posts, to present the summary first and the supporting data in detail later.

I have left my PRELIMINARY results in the first post to preserve the integrity of the post so that the replies to it, above are relevant and accurate.

Please note that the data below are more inclusive, more accurate, and supersede all data previously posted, save for national averages and hypothetical power costs.

The new data will also apply to all folders, overclocked or at stock/reference settings, and they actually disprove a few of the conjectures made in the previous post by juano.

(Overclocking the CPU to the hilt increases its PPD/W efficiency and still uses less power than the reference GPU client, for example. Also, the savings from moving to a newer PSU would only pay for themselves after 5.4 years of 24/7 folding, based on the national average rate. Although the power reduction is significant, the initial investment is not worth it if you already have a decent PSU; by the time it has paid for itself, you'll be needing a replacement PSU anyhow.)
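The payback reasoning works like this. A Python sketch with purely illustrative numbers (the PSU price, efficiency figures, and DC load below are my assumptions for the example, not the measured data from this thread):

```python
def wall_watts(dc_load_watts, efficiency):
    # Wall draw = DC load / PSU efficiency.
    return dc_load_watts / efficiency

def payback_years(psu_price, dc_load_watts, old_eff, new_eff, dollars_per_kwh):
    """Years of 24/7 folding before a more efficient PSU pays for itself."""
    saved_watts = wall_watts(dc_load_watts, old_eff) - wall_watts(dc_load_watts, new_eff)
    annual_savings = saved_watts / 1000.0 * 24 * 365.25 * dollars_per_kwh
    return psu_price / annual_savings

# e.g. a hypothetical $180 Platinum PSU replacing an 83%-efficient unit
# on a 200 W DC load, at the 2010 US average rate:
print(round(payback_years(180, 200, 0.83, 0.92, 0.0983), 1))  # ~8.9 years
```

The exact break-even point depends on the PSU price, the load, and the local rate, which is why the post's 5.4-year figure differs from this example.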

The GPU is a wretched client for power consumption, even at reference voltages, and should be avoided at all costs, unless finishing Stanford's GPU WUs is your goal, in which case, fold on!

I apologize for the editing, but it was necessary, to present things in a logical format.

For all tests performed:

  • CPU: i5-2500K
  • RAM: 2x PC3-12800 G.Skill CL8 / 800 MHz (1600 effective) / stock at 1.500 V (as tested)
  • Motherboard: Asus P8P67 WS Revolution Rev B3 (92% platinum power efficiency rating) (as tested)
  • HDD: WDC WD2002FAEX-007BA0 (2 TB Caviar Black 7200 RPM SATA 6.0 Gb/s) (as tested)
  • Graphics: EVGA GeForce GTX 580 SC (1.5 GB VRAM)
  • OS: Windows 7 64-bit Professional
  • PSU: Corsair TX850W (CMPSU-850TX) (25% load / 83% efficient) (50% load / 84% efficient) (100% load / 83% efficient) (Corrected to Platinum Plus values through the iterative process detailed below.)
  • UPS: APC BX1500G 1500VA / 865W (25% load / 84% efficient) (50% load / 86% efficient) (100% load / 87% efficient)

With standard Windows 7 64-bit Professional background processes running, plus Open Hardware Monitor, EVGA Precision for software-controlled GPU fan speeds, and NVIDIA Inspector for multi-display power saver and overclock settings, I tested the following clients:
  • Windows XP/2003/Vista/2008/7 SMP2 client console version 6.34 (32 bit) running -SMP 4 / -verbosity 9 / checkpoint=30 / nocpulock=1
  • Windows XP/2003/Vista/7 GPU3 (required for Fermi) no-nonsense console client version 6.41 (32 bit) -verbosity 9 / priority=96 (low) / checkpoint=30 / nocpulock=1

For the following table: (Overclocks and OC values, changes from defaults, and best overall value in category are bold-faced.)

CPU Client is Core A4 (with normal download size in config)

CPU WU is Project 7200 (not advanced methods project, ClientType=0)

GPU Client is Core 15 (with normal download size in config)

GPU WU is Project 6803 (not advanced methods project, ClientType=0)

| | CPU Reference / GPU Reference | CPU Reference / GPU Overclock | CPU Overclock / GPU Reference | CPU Overclock / GPU Overclock |
| --- | --- | --- | --- | --- |
| CPU Clock (MHz) | 3700 | 3700 | 4700 | 4700 |
| GPU Shader Clock (MHz) | 1544 | 1856 | 1544 | 1856 |
| GPU Memory Clock (MHz) | 2005 | 2106 | 2005 | 2106 |
| CPU Vcore (V) | 1.248 | 1.248 | 1.34 | 1.34 |
| GPU Vcore (V) | 1.05 | 1.138 | 1.05 | 1.138 |
| CPU Only: PPD | 12563.38 | 12563.38 | 18220.37 | 18220.37 |
| CPU Only: PPD/MHz | 3.40 | 3.40 | 3.88 | 3.88 |
| CPU Only: Power (W) | 187 | 187 | 225 | 225 |
| CPU Only: PPD/W | 67.18 | 67.18 | 80.98 | 80.98 |
| GPU Only: PPD | 16123.90 | 19080.00 | 16123.90 | 19080.00 |
| GPU Only: PPD/MHz | 10.44 | 10.28 | 10.44 | 10.28 |
| GPU Only: Power (W) | 278 | 346 | 282 | 342 |
| GPU Only: PPD/W | 58.00 | 55.14 | 57.18 | 55.79 |
| GPU & CPU: PPD CPU | 10503.14 | 10236.12 | 15659.48 | 15018.62 |
| GPU & CPU: PPD/MHz CPU | 2.84 | 2.77 | 3.33 | 3.20 |
| GPU & CPU: PPD GPU | 16951.30 | 19403.40 | 16591.30 | 19403.40 |
| GPU & CPU: PPD/MHz GPU | 10.98 | 10.45 | 10.75 | 10.45 |
| GPU & CPU: PPD Total | 27454.44 | 29639.52 | 32250.78 | 34422.02 |
| GPU & CPU: Power (W) Total | 343 | 411 | 387 | 448 |
| GPU & CPU: PPD/W Total | 80.04 | 72.12 | 83.34 | 76.83 |
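As a sanity check, the derived rows in the table above can be recomputed from the per-client rows. A small Python sketch using the first column's combined-client values:

```python
# Combined-client totals should equal the sum of the per-client PPD rows,
# and PPD/W should be total PPD over total wall power.
cpu_ppd, gpu_ppd, watts = 10503.14, 16951.30, 343  # CPU Ref / GPU Ref column

total = cpu_ppd + gpu_ppd
print(round(total, 2))          # 27454.44, matching "PPD Total"
print(round(total / watts, 2))  # 80.04, matching "PPD/W Total"
```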

For the following table: (Overclocks and OC values, changes from defaults, and best overall value in category are bold-faced.)

CPU Client is Core A4 (with normal download size in config)

CPU WU is Project 7200 (not advanced methods project, ClientType=0)

GPU Client is Core 15 (with big download size in config)

GPU WU is Project 7622 (-advmethods project, ClientType=3)

| | CPU Reference / GPU Reference | CPU Reference / GPU Overclock | CPU Overclock / GPU Reference | CPU Overclock / GPU Overclock |
| --- | --- | --- | --- | --- |
| CPU Clock (MHz) | 3700 | 3700 | 4700 | 4700 |
| GPU Shader Clock (MHz) | 1544 | 1856 | 1544 | 1856 |
| GPU Memory Clock (MHz) | 2005 | 2106 | 2005 | 2106 |
| CPU Vcore (V) | 1.248 | 1.248 | 1.34 | 1.34 |
| GPU Vcore (V) | 1.05 | 1.138 | 1.05 | 1.138 |
| CPU Only: PPD | 12563.38 | 12563.38 | 18220.37 | 18220.37 |
| CPU Only: PPD/MHz | 3.40 | 3.40 | 3.88 | 3.88 |
| CPU Only: Power (W) | 187 | 187 | 225 | 225 |
| CPU Only: PPD/W | 67.18 | 67.18 | 80.98 | 80.98 |
| GPU Only: PPD | 16975.60 | 20096.70 | 16975.60 | 20096.70 |
| GPU Only: PPD/MHz | 10.99 | 10.83 | 10.99 | 10.83 |
| GPU Only: Power (W) | 331 | 413 | 335 | 430 |
| GPU Only: PPD/W | 51.29 | 48.66 | 50.67 | 46.74 |
| GPU & CPU: PPD CPU | 12381.74 | 12117.37 | 17398.48 | 17557.85 |
| GPU & CPU: PPD/MHz CPU | 3.35 | 3.27 | 3.70 | 3.74 |
| GPU & CPU: PPD GPU | 17236.80 | 20463.80 | 17236.80 | 20463.80 |
| GPU & CPU: PPD/MHz GPU | 11.16 | 11.03 | 11.16 | 11.03 |
| GPU & CPU: PPD Total | 29618.54 | 32581.17 | 34635.28 | 38021.65 |
| GPU & CPU: Power (W) Total | 408 | 491 | 453 | 551 |
| GPU & CPU: PPD/W Total | 72.59 | 66.36 | 76.46 | 69.00 |

Just for kicks, I threw in one more category, because I got tired of waiting for this SMP WU to finish. I'm not considering this in my research; it's here for posterity's sake:

| | +CPU Overclock / +GPU Overclock |
| --- | --- |
| CPU Clock (MHz) | 4900 |
| GPU Shader Clock (MHz) | 1856 |
| GPU Memory Clock (MHz) | 2106 |
| CPU Vcore (V) | 1.416 |
| GPU Vcore (V) | 1.138 |
| CPU Only: PPD | 19295.50 |
| CPU Only: PPD/MHz | 3.94 |
| CPU Only: Power (W) | 260 |
| CPU Only: PPD/W | 74.21 |
| GPU Only: PPD | 20096.70 |
| GPU Only: PPD/MHz | 10.83 |
| GPU Only: Power (W) | 430 |
| GPU Only: PPD/W | 46.74 |
| GPU & CPU: PPD CPU | 18745.10 |
| GPU & CPU: PPD/MHz CPU | 3.83 |
| GPU & CPU: PPD GPU | 20463.80 |
| GPU & CPU: PPD/MHz GPU | 11.03 |
| GPU & CPU: PPD Total | 39208.90 |
| GPU & CPU: Power (W) Total | 590 |
| GPU & CPU: PPD/W Total | 66.46 |

For the following table: (Overclocks and OC values, changes from defaults, and best overall value in category are bold-faced.)

CPU Client is Core A3 (with big download size in config)

CPU WU is Project (-advmethods project, ClientType=3) (note that this is not "-bigadv" or "-hugeadv" as some refer to them. This is the SMP advanced with a big download size.)

GPU Client is Core 15 (with big download size in config)

GPU WU is Project 7622 (-advmethods project, ClientType=3)

| | CPU Reference / GPU Reference | CPU Reference / GPU Overclock | CPU Overclock / GPU Reference | CPU Overclock / GPU Overclock |
| --- | --- | --- | --- | --- |
| CPU Clock (MHz) | 3700 | 3700 | 4700 | 4700 |
| GPU Shader Clock (MHz) | 1544 | 1856 | 1544 | 1856 |
| GPU Memory Clock (MHz) | 2005 | 2106 | 2005 | 2106 |
| CPU Vcore (V) | 1.248 | 1.248 | 1.34 | 1.34 |
| GPU Vcore (V) | 1.05 | 1.138 | 1.05 | 1.138 |
| CPU Only: PPD | 14093.65 | 14093.65 | 19860.53 (Best CPU PPD) | 19860.53 (Best CPU PPD) |
| CPU Only: PPD/MHz | 3.81 | 3.81 | 4.23 | 4.23 |
| CPU Only: Power (W) | 183 (Lowest Power) | 183 | 225 | 225 |
| CPU Only: PPD/W | 77.01 | 77.01 | 88.27 (Best PPD/W) | 88.27 (Best PPD/W) |
| GPU Only: PPD | 16975.60 | 20096.70 (Best GPU PPD) | 16975.60 | 20096.70 |
| GPU Only: PPD/MHz | 10.99 | 10.83 | 10.99 | 10.83 |
| GPU Only: Power (W) | 328 | 423 | 331 | 430 |
| GPU Only: PPD/W | 51.75 | 47.51 | 51.29 | 46.74 |
| GPU & CPU: PPD CPU | 13436.34 | 13029.89 | 18510.24 | 18959.70 |
| GPU & CPU: PPD/MHz CPU | 3.63 | 3.52 | 3.94 | 4.03 |
| GPU & CPU: PPD GPU | 17303.40 | 20557.70 (Best GPU PPD) *1 | 17303.40 | 20557.70 |
| GPU & CPU: PPD/MHz GPU | 11.21 | 11.08 | 11.21 | 11.08 |
| GPU & CPU: PPD Total | 30739.74 | 33587.59 | 35813.64 | 39517.40 (Best Total PPD) |
| GPU & CPU: Power (W) Total | 405 | 502 | 448 | 550 |
| GPU & CPU: PPD/W Total | 75.90 | 66.91 | 79.94 | 71.85 |

+CPU Overclock +GPU Overclock
CPU Clock (MHz)         4900
GPU Shader Clock (MHz)  1856
GPU Memory Clock (MHz)  2106
CPU Vcore               1.416
GPU Vcore               1.138
CPU Only
  PPD          21634.92 (Best CPU PPD) *2
  PPD/MHz      4.42
  Power (W)    245
  PPD/W        88.31 (Best PPD/W) *3
GPU Only
  PPD          20187.20 (Best GPU PPD) *4
  PPD/MHz      10.88
  Power (W)    430
  PPD/W        46.95
GPU & CPU
  PPD CPU          19860.53
  PPD/MHz CPU      4.05
  PPD GPU          20557.70
  PPD/MHz GPU      11.08
  PPD Total        40418.23 (Best Total PPD) *5
  Power (W) Total  574
  PPD/W Total      70.42

  • *1 The best GPU PPD performance occurs when the CPU has some amount of load on it to force the CPU out of its idle 1.6 GHz mode and into a dedicated full-power mode. Even a mild load on a single core (if the core affinity lock switch is returned to its default enabled position) will increase the GPU PPD by a small amount. Concurrent SMP folding is not required.
  • *2 The best CPU PPD performance comes, not surprisingly, from the 4.9 GHz overclock that I only tested on a whim. It's interesting to note that the -advmethods SMP client out-performs the standard SMP client, even with a low-point WU. (I have had much higher PPD values with the 11040, 11041, and 11070 WUs on -advmethods in the SMP client.)
  • *3 Surprisingly, the most efficient PPD/W is with the 4.9 GHz overclock. However, the 4.7 GHz overclock, in a CPU only setting, is nearly identical in efficiency. The SMP client (on -advmethods) in a default clock state is still the most power efficient overall, and still retains quite good PPD/W efficiency.
  • *4 The GPU performance does increase slightly with no CPU client running and the CPU allowed to idle at 1.6 GHz if the CPU is overclocked significantly. Again, returning the core lock to its default enabled position and then forcing a single core to remain active would give the maximum GPU PPD, regardless of what overclock is used. The 4.9 GHz overclock with speedstep enabled and no core lock affinity simply out-performed lower clock values by virtue of its response time.
  • *5 It should come as no surprise that the best PPD total came from the maximum overclocked condition with both clients running. What's interesting to note in the GPU/CPU both overclocked scenario is that the difference in power consumption, PPD Total, and PPD/W is relatively insignificant when comparing the 4.9 GHz CPU clock to the 4.7 GHz CPU clock.
 

·
Registered
Joined
·
966 Posts
Discussion Starter · #7 ·
Thank you for pointing out other methods of energy savings, juano, cubanresourceful, and RussianJ. Your input on this thread is very welcome, and your emphasis that my results are outside the norm is both valid and necessary. (That's why I included as many relevant system specifications, voltages, and clock speeds as I could, so that my results would be scientific rather than purely anecdotal.)

I'm of the belief that when performing any form of upgrade, you have to look at the initial cost of the upgrade, the operating cost change (positive or negative) of the upgrade, and the expected lifetime of the upgrade to determine if it is a worthwhile investment or not. This statement applies mostly to the PSU efficiency in my case, but it applies to anything in life.

As you pointed out, that GTX 580, especially in my significantly overclocked configuration, is pulling far more power than I expected it to. Since I was in the Team Competition, I was going for a rather aggressive overclock to maximize my PPD, and that isn't the norm. However, it does not change the fact that the GTX 580 is far less efficient to fold on than the 2500K. In fact, the Fermi client in general is far less efficient than the SMP client. Even at stock clocks, the Fermi client on a GTX 580 is going to pull a large amount of electrical power. Further efficiency gains are possible by folding on a standard GPU configuration rather than the -advmethods configuration.

In the case of the PSU, for me, upgrading from 80+ Bronze to 80+ Platinum only pays off if I fold 24/7 for 5.43 years straight (my original estimate was ten years). The expected lifetime of the PSU is 50,000 hours, which is 5.7 years. (Realistically, it's probably closer to ten years at normal use and 3 or 4 years at 85% or higher load.) My estimate was off by a factor of nearly two.

Is an 80 Plus Platinum PSU more efficient? Yes. If you are buying a new PSU for the first time, is it worth spending a small amount extra on a more efficient unit? Yes. If you already have an 80 Plus Bronze PSU, is it worth upgrading to Platinum solely for the efficiency? Probably not; it's best to wait until you need a new PSU and buy a more efficient one at that time, unless you plan to use your PSU for the next ten years at 100% load. (A PSU will most likely fail before then, by the way.) My data in post 8 support this hypothesis. I consider it a theory now, and it will stay untested unless my current PSU fails and I buy the unit I noted in post 8.

Fold without a UPS at your own risk. Where I live, if you do that, you're highly likely to have a total system loss at some point. Again, I simply cannot risk my system to random chance, in my area.

In my case, the UPS cost me close to $160 and it costs me an additional $80 annually assuming that I operate it at 80% load 24/7 when compared to operating without a UPS. However, the average number of power events that would randomly shut down or start up (yes randomly start up) my PC here (as well as dim the lights, buzz, and other oddities) is about six per year. A few years ago I had one of those power events destroy a motherboard and HDD. (Luckily the CPU and RAM were not damaged.)

For me, paying an initial investment of $160 and a maximum annual operating cost increase of $80 (in reality, closer to $15) for a UPS with a life expectancy of 5 years before a $90 battery replacement and 10 years before unit replacement, one that I can use on any PC now or in the future (within its maximum power consumption ratings), is worth it. The UPS is like insurance for me, and operating without one has burned me in the past. I'm willing to spend $160 initially, $15 annually for 10 years ($150 total), and a $90 one-time battery replacement ($400 grand total, not accounting for inflation or interest yields/losses) to protect this PC and future upgrade/replacement PCs from the random power events that happen relatively frequently here. (The cost of the individual components in my signature rig can be verified at any time; I'm not going to bother posting them here.) Furthermore, my time is worth something, and the time it takes to recover from an HDD data loss, order new parts, install them, reinstall the OS, restore backed-up/archived data (and overclock all over again...) can't be directly measured. The UPS is here to stay. :)
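As a quick sanity check, the ten-year UPS cost works out as follows (a sketch using only the dollar figures quoted above):

```python
# Ten-year cost-of-ownership sketch for the UPS, using the figures above.
initial_cost = 160.00        # purchase price
annual_operating = 15.00     # realistic added electricity cost per year
years = 10                   # expected unit lifetime
battery_swap = 90.00         # one-time battery replacement around year 5

total = initial_cost + annual_operating * years + battery_swap
print(total)  # 400.0, the "grand total" quoted above
```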

  • Updated windows (glass ones, not the OS) and doors with high insulation rating / thermal efficiency units to reduce seasonal heating and cooling costs.
  • Reduced thermostat settings in the wintertime to reduce seasonal heating costs.
  • Increased thermostat settings in the summertime to reduce seasonal cooling costs. (Eliminating the use of AC is not an option for me, due to allergies.)
  • Use a high-efficiency heat-pump year-round rather than conventional AC in summer and conventional furnace in winter.
  • Installed high efficiency rigid board insulation that exceeds code in the basement and garage to reduce heating and cooling costs.
  • Use of high efficiency washing machine and dryer (more water savings than electrical savings though, in the case of the washer). Additionally, hang clothes out to dry in the summer rather than use the dryer.
  • Use lights only when necessary; all lights are CFL or LED, except in one chandelier that requires incandescent bulbs (and is only used for "special" meals at holidays).
  • High efficiency water heater with reduced thermostat setting.
  • Installed switches that disable power to monitors when PC is in power off or stand-by mode, forcing a "hard off" state rather than a 1-2W standby state on LCD monitors.
  • Related to above: any device with a low-power standby mode in the house is on a power strip with an interrupt switch that can force a "hard off" state, rather than allow a small trickle load to persist.
  • All aspects of the home meet or exceed building codes (electrical, structural, plumbing, etc.).
  • I even unplug my smart-phone charger when I'm not actively charging my phone. (Leaving AC/DC converters plugged in drains electrical power even with no device attached.)

Short of ripping the drywall out of my home to upgrade the insulation in some of the walls and the roof (which already meets and exceeds building code), there's nothing more that I'm aware of and that is feasible that I can do to increase efficiency and reduce my electrical costs.
 

·
Registered
Joined
·
966 Posts
Discussion Starter · #8 ·
I believe that I am finished presenting all of my data. The data in this post are based on the exact data above, but omit the PPD/MHz values, as I consider them to be somewhat irrelevant and also redundant at this point.

UPS Efficiencies:

The Load in Watts displayed on the UPS is the actual demand of the system the UPS is powering, i.e. the power the UPS delivers to the device, not what the UPS draws from the wall. That difference is the source of the slight margin of error I noted when verifying my measurements at the meter. The UPS may report that it is providing 183W (in the case of the SMP -advmethods load at reference clocks), but at that load the UPS is 93.2% efficient, so the actual load my outlet (and ultimately the utility company) is supplying to the UPS is 183 / 0.932, which is 196.35W.

Therefore, no corrections to my data need to be made to account for UPS efficiency or the lack thereof, as my data reflect actual component electrical demand, and not a "utility power" load case. (However, for a cost analysis of my system, I would have to correct accordingly for the UPS efficiency values.)

Note that in my case, "System Demand" is based on an 850W PSU. For posterity's sake, here are the actual efficiencies of my UPS, as provided to me by APC's tech/engineering department, with a column for Utility Load added by me:

System Demand (850W = 100%)   UPS Display Load (W)   UPS Efficiency   Utility Load (W)
100.0%                        850                    96.9%            877.19
75.0%                         637.5                  96.9%            657.89
50.0%                         425                    96.6%            439.96
35.0%                         297.5                  95.1%            312.83
25.0%                         212.5                  94.7%            224.39
20.0%                         170                    93.2%            182.40
10.0%                         85                     88.9%            95.61

So I will not be providing any data to correct for UPS inefficiency beyond the table above. The point of the data displayed is to give folders an actual idea of what sort of power comparable systems will demand. Should they choose to use a UPS, they will need to consult the manufacturer and generate a table like the one above to estimate actual utility costs.
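The display-load-to-wall-draw correction above is easy to automate. A minimal sketch, which divides each measurement by the efficiency of the nearest published load point (the same bucketing used in the 183W worked example; a real curve would interpolate more carefully):

```python
# Convert the load the UPS displays into the draw at the wall, using the
# APC efficiency table above. Each measurement is divided by the
# efficiency of the nearest published load point.
points = [  # (display load in W, efficiency)
    (85, 0.889), (170, 0.932), (212.5, 0.947), (297.5, 0.951),
    (425, 0.966), (637.5, 0.969), (850, 0.969),
]

def utility_load(display_w):
    # pick the efficiency of the closest published load point
    _, eff = min(points, key=lambda p: abs(p[0] - display_w))
    return display_w / eff

print(round(utility_load(183), 2))  # 196.35 W, matching the worked example
```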

An important discussion about the power supply data presented here:

The way that I calculated System Power Load (which, again, is not affected by the UPS at all, but rather is raw system demand) for the 80 Plus Platinum certified PSU (Pt+) was the following two-step iterative process:

(Power demand from previous chart) × ("Brnz+" efficiency) = Actual component demand

(Actual component demand) / ("Pt+" efficiency) = Pt+ power demand
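Expressed as code, the two steps look like this. Note the 92% Platinum efficiency used in the example is my own assumed round figure for illustration, not Seasonic's published value:

```python
# The two steps above as a function: recover component demand from a
# wall measurement made through the Bronze PSU, then re-derive the wall
# draw a Platinum unit would see at the same component demand.
def bronze_to_platinum(wall_w, bronze_eff, platinum_eff):
    component_demand = wall_w * bronze_eff       # step 1: strip Bronze losses
    return component_demand / platinum_eff       # step 2: add Platinum losses

# e.g. a 408 W measured draw at roughly 50% load (84% Bronze efficiency),
# with an assumed ~92% Platinum efficiency at the same load:
print(round(bronze_to_platinum(408, 0.84, 0.92)))  # ≈ 373 W
```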

Note that all of these new PPD/W values are derived from the efficiency curve of my PSU as published in the manufacturer's literature. The PSU is about two years old. Since I am using my own system, as long as I apply the same efficiency curve to all data points universally, the derived data will still be scientifically accurate, although not necessarily precise. But this only applies to the consistency of the "power shift" in my own 80 Plus Bronze certified PSU.

I used an 860W Seasonic 80 Plus Platinum certified PSU as the comparison, as Corsair does not make a Platinum 850W unit and I wanted to keep the total rated power similar to be as accurate as possible.

Note that this method introduces at least a +/-1% margin of error into the 50% to 100% power values, and at 15% to 35% power levels the error increases to at least +/-2%. The reason is that I replaced every "Total Power" value that my system demanded using the above formula, one value at a time, but because I do not have a well-populated efficiency curve for the Seasonic unit, I had to interpolate intermediate values in two separate instances, and each interpolation injects some error. Furthermore, mapping a real-world measurement from my unit onto a theoretical curve introduces a small error; using that value to derive another power value (with the error stated above) and then fitting it onto another manufacturer's efficiency list is the true source of error. The only way to get a truly representative comparison would be for me to spend the $200+ on the Seasonic unit and measure it directly.

Thus, in these results the 4.7 GHz clock edges out the 4.9 GHz clock for the top spot in PPD/W efficiency. The break-down of GPU vs CPU did not change. SMP clients are still considerably more efficient, both in using less total power and in producing more PPD/W than the GPU clients. The margins, if anything, are wider than before, as all of the PPD/W values shift up by a percentage.

I was a bit surprised how much difference the PSU made in terms of efficiency. It would be nice to back these data up with actual measurements by installing a Platinum certified PSU in my system, but I can't afford to do so, and I suspect the margin of error in my iterative process may be larger than stated. An upgraded PSU would save me (at my "normal" SMP+GPU, both overclocked and folding) 39.45 kWh/month, or 473.36 kWh/year, which at the national average rate is a savings of $3.55/month, or $42.60/year.

My estimate that the PSU would pay for itself (if I were folding 24/7) after ten years was off: it would actually pay for itself after 5.43 years (47,591 hours of operation). Given that the manufacturer states a 50,000-hour mean time to failure for my current PSU, and that my last PSU lasted about six years, I'd say I'm still better off letting my current PSU die and then replacing it. I'm simply surprised that my estimate was off by a factor of two; I underestimated the theoretical power savings by 50%.
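The payback figure follows directly from the price and savings numbers in this post (the price is the Seasonic unit's quoted cost plus shipping from the spec list below):

```python
# Payback check for the Platinum upgrade at 24/7 folding.
psu_price = 231.29         # Seasonic Platinum-860 price + shipping, as quoted
savings_per_year = 42.60   # derived electrical savings at the national average rate

years_to_payback = psu_price / savings_per_year    # ~5.43 years
hours_to_payback = years_to_payback * 8766         # 8766 h per average year
print(round(years_to_payback, 2), round(hours_to_payback))
```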

I will add the following lines: (the order is simply by electrical savings)

For all tests performed:

  • CPU: i5-2500K
  • RAM: 2x PC3-12800 G.Skill CL8 / 800 MHz (1600 MHz effective) / stock at 1.500V (as tested)
  • Motherboard: Asus P8P67 WS Revolution Rev B3 (92% platinum power efficiency rating) (as tested)
  • HDD: WDC WD2002FAEX-007BA0 (2 TB Caviar Black 7200 RPM Sata3 6.0GB/s) (as tested)
  • Graphics: evga GeForce GTX 580 SC (1.5 GB VRAM)
  • OS: Windows 7 64 bit Professional
  • PSU: Corsair TX850W (CMPSU-850TX) (25% load / 83% efficient) (50% load / 84% efficient) (100% load / 83% efficient)
  • (Corrected to Platinum values through the iterative process detailed above to simulate a SeaSonic Platinum-860 80+ Platinum Certified Modular Active PFC PSU, referred to as Pt+ in this post)

(current price $219.99 + $11.30 S/H = $231.29)

  • UPS: APC BX1500G 1500VA / 865W (25% load / 84% efficient) (50% load / 86% efficient) (100% load / 87% efficient); efficiency irrelevant here, as the power values listed in the data are system demand from the UPS, not actual power at the wall outlet

  • With standard Windows 7 Professional 64-bit background processes running, plus Open Hardware Monitor, EVGA Precision (for software-controlled GPU fan speeds), and NVIDIA Inspector (for multi-display power saver and overclock settings), I tested the following clients:
  • Windows XP/2003/Vista/2008/7 SMP2 client console version 6.34 (32 bit) running -SMP 4 / -verbosity 9 / checkpoint=30 / nocpulock=1
  • Windows XP/2003/Vista/7 GPU3 (required for Fermi) no-nonsense console client version 6.41 (32 bit) -verbosity 9 / priority=96 (low) / checkpoint=30 / nocpulock=1

  • Total Power with the Pt+ PSU is denoted by the # symbol
  • PPD/W with the Pt+ PSU is denoted by the @ symbol

For the following table: (Overclocks and OC values, changes from defaults, and best overall value in category are bold-faced.)

CPU Client is Core A4 (with normal download size in config)

CPU WU is Project 7200 (not advanced methods project, ClientType=0)

GPU Client is Core 15 (with normal download size in config)

GPU WU is Project 6803 (not advanced methods project, ClientType=0)

                   CPU Ref    CPU Ref    CPU OC     CPU OC
                   GPU Ref    GPU OC     GPU Ref    GPU OC
CPU Clock (MHz)    3700       3700       4700       4700
GPU Shader (MHz)   1544       1856       1544       1856
GPU Memory (MHz)   2005       2106       2005       2106
CPU Vcore          1.248      1.248      1.34       1.34
GPU Vcore          1.05       1.138      1.05       1.138
CPU Only
  PPD              12563.38   12563.38   18220.37   18220.37
  Power (W)        187        187        225        225
  Pt+ Power (W) #  174        174        206        206
  PPD/W            67.18      67.18      80.98      80.98
  Pt+ PPD/W @      72.20      72.20      88.45      88.45
GPU Only
  PPD              16123.90   19080.00   16123.90   19080.00
  Power (W)        278        346        282        342
  Pt+ Power (W) #  255        313        259        310
  PPD/W            58.00      55.14      57.18      55.79
  Pt+ PPD/W @      63.23      60.96      62.25      61.55
GPU & CPU
  PPD CPU          10503.14   10236.12   15659.48   15018.62
  PPD GPU          16951.30   19403.40   16591.30   19403.40
  PPD Total        27454.44   29639.52   32250.78   34422.02
  Power (W) Total  343        411        387        448
  Pt+ Power (W) #  310        375        355        409
  PPD/W Total      80.04      72.12      83.34      76.83
  Pt+ PPD/W @      88.56      79.04      90.85      84.16

For the following table: (Overclocks and OC values, changes from defaults, and best overall value in category are bold-faced.)

CPU Client is Core A4 (with normal download size in config)

CPU WU is Project 7200 (not advanced methods project, ClientType=0)

GPU Client is Core 15 (with big download size in config)

GPU WU is Project 7622 (-advmethods project, ClientType=3)


                   CPU Ref    CPU Ref    CPU OC     CPU OC
                   GPU Ref    GPU OC     GPU Ref    GPU OC
CPU Clock (MHz)    3700       3700       4700       4700
GPU Shader (MHz)   1544       1856       1544       1856
GPU Memory (MHz)   2005       2106       2005       2106
CPU Vcore          1.248      1.248      1.34       1.34
GPU Vcore          1.05       1.138      1.05       1.138
CPU Only
  PPD              12563.38   12563.38   18220.37   18220.37
  Power (W)        187        187        225        225
  Pt+ Power (W) #  174        174        206        206
  PPD/W            67.18      67.18      80.98      80.98
  Pt+ PPD/W @      72.20      72.20      88.45      88.45
GPU Only
  PPD              16975.60   20096.70   16975.60   20096.70
  Power (W)        331        413        335        430
  Pt+ Power (W) #  300        377        303        393
  PPD/W            51.29      48.66      50.67      46.74
  Pt+ PPD/W @      56.59      53.31      56.03      51.14
GPU & CPU
  PPD CPU          12381.74   12117.37   17398.48   17557.85
  PPD GPU          17236.80   20463.80   17236.80   20463.80
  PPD Total        29618.54   32581.17   34635.28   38021.65
  Power (W) Total  408        491        453        551
  Pt+ Power (W) #  373        443        414        497
  PPD/W Total      72.59      66.36      76.46      69.00
  Pt+ PPD/W @      79.41      73.55      83.66      76.50

+CPU Overclock +GPU Overclock
CPU Clock (MHz)         4900
GPU Shader Clock (MHz)  1856
GPU Memory Clock (MHz)  2106
CPU Vcore               1.416
GPU Vcore               1.138
CPU Only
  PPD              19295.50
  Power (W)        260
  Pt+ Power (W) #  239
  PPD/W            74.21
  Pt+ PPD/W @      80.73
GPU Only
  PPD              20096.70
  Power (W)        430
  Pt+ Power (W) #  393
  PPD/W            46.74
  Pt+ PPD/W @      51.14
GPU & CPU
  PPD CPU          18745.10
  PPD GPU          20463.80
  PPD Total        39208.90
  Power (W) Total  590
  Pt+ Power (W) #  538
  PPD/W Total      66.46
  Pt+ PPD/W @      72.88

CPU Client is Core A3 (with big download size in config)

CPU WU is Project (-advmethods project, ClientType=3) (note that this is not "-bigadv" or "-hugeadv" as some refer to them. This is the SMP advanced with a big download size.)

GPU Client is Core 15 (with big download size in config)

GPU WU is Project 7622 (-advmethods project, ClientType=3)

                   CPU Ref    CPU Ref    CPU OC     CPU OC
                   GPU Ref    GPU OC     GPU Ref    GPU OC
CPU Clock (MHz)    3700       3700       4700       4700
GPU Shader (MHz)   1544       1856       1544       1856
GPU Memory (MHz)   2005       2106       2005       2106
CPU Vcore          1.248      1.248      1.34       1.34
GPU Vcore          1.05       1.138      1.05       1.138
CPU Only
  PPD              14093.65   14093.65   19860.53   19860.53
  Power (W)        183        183        225        225
  Pt+ Power (W) #  171 (Lowest Power)  171  206     206
  PPD/W            77.01      77.01      88.27      88.27
  Pt+ PPD/W @      82.42      82.42      96.41 (Best PPD/W)  96.41
GPU Only
  PPD              16975.60   20096.70   16975.60   20096.70
  Power (W)        328        423        331        430
  Pt+ Power (W) #  297        386        300        393
  PPD/W            51.75      47.51      51.29      46.74
  Pt+ PPD/W @      57.16      52.06      56.59      51.14
GPU & CPU
  PPD CPU          13436.34   13029.89   18510.24   18959.70
  PPD GPU          17303.40   20557.70   17303.40   20557.70
  PPD Total        30739.74   33587.59   35813.64   39517.40
  Power (W) Total  405        502        448        550
  Pt+ Power (W) #  370        453        409        496
  PPD/W Total      75.90      66.91      79.94      71.85
  Pt+ PPD/W @      83.08      74.14      87.56      79.67

+CPU Overclock +GPU Overclock
CPU Clock (MHz)         4900
GPU Shader Clock (MHz)  1856
GPU Memory Clock (MHz)  2106
CPU Vcore               1.416
GPU Vcore               1.138
CPU Only
  PPD              21634.92
  Power (W)        245
  Pt+ Power (W) #  225
  PPD/W            88.31
  Pt+ PPD/W @      96.16 (Best PPD/W) @*
GPU Only
  PPD              20187.20
  Power (W)        430
  Pt+ Power (W) #  393
  PPD/W            46.95
  Pt+ PPD/W @      51.37
GPU & CPU
  PPD CPU          19860.53
  PPD GPU          20557.70
  PPD Total        40418.23
  Power (W) Total  574
  Pt+ Power (W) #  518
  PPD/W Total      70.42
  Pt+ PPD/W @      78.03

@* As stated above, this time the 4.7 GHz SMP -advmethods big client edged out the 4.9 GHz SMP -advmethods big client for best PPD/W efficiency. Given that the values are nearly identical, and that unlike last time these results are NOT measured values but rather values obtained through an iterative process with a significant margin of error, I suspect that real-world data for the Pt+ PSU I simulated would correlate better with the PSU I currently own, and that the 4.9 GHz client would still edge out the 4.7 GHz client by a small margin.

I believe that I'm finally finished getting this thread under control and presented in a manner that, although rather verbose, is at least useful. I'm curious what other folders' thoughts and experiences are, now that they have a significant amount of data to review. I would love it if other folders could post their system specs and their measured power use for a few SMP and GPU WUs. The posts don't have to be nearly as exhaustive or all-inclusive as mine, but more data are always a good thing.

Your comments, opinions, discussions, raves, and rants are welcome. If you believe that I've overlooked anything (other than the UPS analysis, which I am not dealing with), by all means, post it!
 

·
Premium Member
Joined
·
8,368 Posts
Quote:
Originally Posted by shad0wfax View Post

I'm curious what other folder's thoughts and experiences are, now that they have a significant amount of data to review. I would love it if other folders could post their system specs and their measured power use for a few SMP and GPU WUs.
Very good post. I just wanted to add one thing; if you live in a location where your power is averaged monthly, you may want to call your service provider and ask them to re-baseline you before you start folding 24/7. I learned this the hard way when I found out my company likes to add a surcharge per kWh when you go over your baseline by a certain percentage. In the end, a bill that was ~500 kWh over my norm cost me nearly $600 with their tiered "overage" costs. The solution to this is asking them nicely to reset your baseline, saying you got some new appliances or something of that nature. Your bill in general will be higher but you will not suffer the overages. (Note: they may not allow this, but it's worth a try, and it's better to know you may have to pay through the nose upfront rather than be surprised.)

Here is a copy of the chart my power company currently uses, for reference (the actual charges/rates are in the state tariff handbook):

POWAH.jpg
 

·
Registered
Joined
·
966 Posts
Discussion Starter · #10 ·
Quote:
Originally Posted by Scorpion49 View Post

Very good post. I just wanted to add one thing; if you live in a location where your power is averaged monthly, you may want to call your service provider and ask them to re-baseline you before you start folding 24/7. I learned this the hard way when I found out my company likes to add a surcharge per kWh when you go over your baseline by a certain percentage. In the end, a bill that was ~500 kWh over my norm cost me nearly $600 with their tiered "overage" costs. The solution to this is asking them nicely to reset your baseline, saying you got some new appliances or something of that nature. Your bill in general will be higher but you will not suffer the overages. (Note: they may not allow this, but it's worth a try, and it's better to know you may have to pay through the nose upfront rather than be surprised.)

Here is a copy of the chart my power company currently uses, for reference (the actual charges/rates are in the state tariff handbook):

POWAH.jpg
Thank you for that input. That is a serious concern for anyone who lives in an area where this policy is in place.

Fortunately, I'm not facing anywhere near a $600 overage cost, nor do I have onerous penalties for exceeding an individualized baseline. The rates where I live do increase as kWh usage increases, but there are only three tiers, with fractions of a cent between each tier and less than two cents between the highest and lowest tiers. The thresholds for each tier are the same for all customers in my area, and even our highest tier is lower than the national average. Power is less expensive here than in most places in the US, but burning an extra 409 kWh/month is still costing me much more than I'm prepared to pay.
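For anyone unsure how the tiered baseline billing Scorpion49 described can snowball, here is a sketch; the tier widths and rates below are invented for illustration and do not reflect any real tariff:

```python
# Hypothetical tiered "baseline" tariff: each tier's upper bound is a
# multiple of the customer's baseline, and overage tiers cost far more.
def monthly_cost(kwh, baseline):
    tiers = [(1.0, 0.13), (1.3, 0.15), (2.0, 0.27), (float("inf"), 0.40)]
    cost, lower = 0.0, 0.0
    for mult, rate in tiers:   # (upper bound as multiple of baseline, $/kWh)
        upper = min(kwh, baseline * mult)
        if upper > lower:
            cost += (upper - lower) * rate
            lower = upper
    return cost

# 500 kWh over a 400 kWh baseline costs far more than 500 kWh at the base rate:
extra = monthly_cost(900, 400) - monthly_cost(400, 400)
print(round(extra, 2))  # 133.6, vs. 65.00 for the same energy at the base rate
```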

The SMP alone might be feasible for me on an infrequent basis. Folding on my GPU is not something that I can continue to do.
 

·
Registered
Joined
·
966 Posts
Discussion Starter · #11 ·
With the release of the new 8031, 8032, and 8033 GPU projects (on -advmethods), which draw slightly less power, use the GPU more efficiently (with less CPU use), and award higher PPD, there are some additions to make to this thread.

CPU 4.7 GHz (same as in above tests.)

GPU 928 MHz (same as in above tests.)

GPU Project 8031 (-advmethods fermi)

CPU Project 6098 (-advmethods SMP)

(Unfortunately this is not the same CPU project 7200 as above, so this test is not an "apples to apples" comparison to the above as the CPU WU is different. The odds of me getting that specific CPU WU and this GPU 8031 WU at the same time are not good.)

GPU alone:

27,218.83 PPD (Frame times are 2:02)

426 W

63.89 PPD/W

Note that the power consumption is 1% lower than in previous Fermi WUs but the PPD is 35% higher, which results in a significant PPD/W gain. (However, the power cost is still nearly double that of running the SMP client, meaning that Fermi clients are still nowhere near as efficient as SMP clients.)
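Those percentages can be verified against the earlier overclocked-GPU numbers for project 7622 (20,096.70 PPD at 430 W):

```python
# Verifying the comparison above against the earlier Fermi numbers.
old_ppd, old_w = 20096.70, 430   # project 7622, GPU overclocked
new_ppd, new_w = 27218.83, 426   # project 8031, same clocks

print(round(new_ppd / new_w, 2))              # 63.89 PPD/W
print(round((new_ppd / old_ppd - 1) * 100))   # 35 (% more PPD)
print(round((1 - new_w / old_w) * 100, 1))    # 0.9 (% less power)
```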

CPU+GPU combined:

27,762.48 PPD GPU (Frame times are 2:00 this way instead of the 2:02 without the SMP client on all four cores)

17,100.62 PPD CPU (Frame times are 9:59)

44,773.10 PPD combined

548 W

81.70 PPD/W

Although the overall PPD/W value has increased here, the PPD/W value of the SMP client alone in previous tests is still significantly higher, especially with a high overclock on the CPU.

I'm only going to report a CPU power consumption value for this WU and not a PPD value, as I do not have time to wait 30 or more minutes to get an average frame time for CPU alone and calculate PPD. In lieu of posting the actual PPD, I will assume a worst-case scenario and list the PPD for the CPU alone as the same value for CPU PPD in the combined tests above.

CPU alone:

17,100.62 PPD (This is a reduced estimate using the CPU/GPU combined values. Note that the real PPD would be considerably higher if running alone.)

225 W

76.00 PPD/W (Again, this is not actual performance. This is using a reduced performance estimate using the above CPU/GPU combined values.)

Even assuming the worst case of 17,100.62 PPD on the CPU with no GPU load (actual performance should be significantly better), this new and more PPD/W-efficient GPU WU does not show any significant leap in actual power efficiency, as we saw only a 1% reduction in power consumption.
 

·
Registered
Joined
·
3,611 Posts
i think i need to have you come over and setup my 2700k dedicated folding rig.......
 

·
Registered
Joined
·
1,524 Posts
Wow. I need to read into this more when I'm not so drained....
Excellent job though.
:eek:
+
:thumb:
 

·
Registered
Joined
·
966 Posts
Discussion Starter · #14 ·
Quote:
Originally Posted by Samurai Batgirl View Post

Wow. I need to read into this more when I'm not so drained....
Excellent job though.
:eek:
+
:thumb:
Thanks. It's a rather long thread and I wasn't quite sure how to present the information in a more condensed format. I do hope that it helps people optimize for their own budgets, whether building a dedicated rig or folding as a hobby or during down-time on a work machine.
 
1 - 14 of 14 Posts
Top