
· Registered · 211 Posts · Discussion Starter · #1 ·
(This is a copy-paste from a text document wherein I started out looking for a TEC system, trying to figure everything out before talking to anyone, but it's getting frustrating, so I'll just dump it here).

The goal of this thread is to figure out an optimal cooling solution emphasizing TECs for the entire system, whether that means a chiller or putting TECs in direct contact with the heat-producing components. One of my cooling theories is that the best system cooling would use TECs on every component possible; components that can't have a TEC attached would be insulated from the others and cooled by submersion in a liquid cooling solution. This thread does not take budget into consideration, but it is written without relying on experimental scientific technology that is unavailable or can't be reverse engineered at a professional or consumer level. Where a lesser form of a technology exists, I'll bring it up as a side note with each topic, but what I really want to know is the best theoretically sustainable cooling solution outside of LN2 and other temporary, exhaustible solutions. This thread has no budget--semi-educated conjecture and speculation have no price tag.

Starting out, I'd like to figure out the best thermoelectric pad for each application. The main parts that typically have some kind of heatsink applied to them are CPUs, GPUs, RAM modules (DIMMs, VRAM, etc.), voltage regulators (VRs), and onboard controllers and bridges. The CPU is the main subject of discussion, occasionally with kits made for GPUs. http://www.overclock.net/t/59153/nols-tec-guide-guide-1-basic-principles-of-tecs NoL mentions that the largest thermoelectric pad he has seen is 62mm with a 720 watt rating. I'm assuming that the watt rating reflects its actual conductivity and current rating, and acknowledge that it runs most efficiently at around 50% of its maximum watt rating.
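As a rough back-of-the-envelope sketch of what that implies for the hot side (my own numbers--the ~0.7 COP is a guess, not from NoL's guide):

Code:
# Rough sketch: estimate the hot-side heat a single 62 mm / 720 W-rated pad
# dumps when run at the commonly cited ~50% "sweet spot". Figures are
# illustrative assumptions only.

RATED_W = 720.0          # maximum electrical rating of the pad
RUN_FRACTION = 0.5       # run at half the rating for efficiency
ASSUMED_COP = 0.7        # assumed coefficient of performance at this point

p_in = RATED_W * RUN_FRACTION      # electrical power fed to the pad
q_cold = p_in * ASSUMED_COP        # heat pulled off the component (assumed COP)
q_hot = q_cold + p_in              # heat the hot-side cooler must remove

print(f"Electrical input : {p_in:.0f} W")
print(f"Heat pumped      : {q_cold:.0f} W")
print(f"Hot-side load    : {q_hot:.0f} W")

The point is that whatever sits on the hot side has to shed the pumped heat plus the pad's own electrical input.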

I haven't seen anyone mention whether stacking thermoelectric pads helps performance, whether multi-stage pads are available at 62mm and 720w, or where performance starts to see diminishing returns (i.e. nine thousand and one thermoelectric pads with nine thousand and one stages each would perform no better than three thermoelectric pads with two stages each). dizzy4 proposed in one of the sticky threads using several inexpensive TE pads in a cascade; a toy model of why that cascading snowballs is sketched below. Based on the forum responses, I haven't seen a single user mention using a two-stage pad, or more than one pad on a single component. TE pads were running about $40 on eBay, but the custom solutions NoL mentioned had no links attached, so their prices are unknown.
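Here's the toy model I mean (COP and per-stage lift are made-up placeholders): each stage has to pump the stage below's heat plus that stage's own electrical input, so the load grows roughly geometrically while each extra degree of lift gets more expensive.

Code:
# Toy model of diminishing returns when cascading/stacking TECs.
# COP and delta-T per stage are invented placeholders, not measured values.

def cascade(stages, cpu_heat_w, cop=0.7, dt_per_stage=20.0):
    load = cpu_heat_w
    total_input = 0.0
    for _ in range(stages):
        p_in = load / cop        # electrical power this stage needs
        total_input += p_in
        load = load + p_in       # hot-side output becomes the next stage's load
    return load, total_input, stages * dt_per_stage

for n in (1, 2, 3, 4):
    q_hot, p_total, dt = cascade(n, cpu_heat_w=150.0)
    print(f"{n} stage(s): final hot-side load {q_hot:6.0f} W, "
          f"total TEC input {p_total:6.0f} W, ~{dt:.0f} C of lift")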

Now for the best cold plate. http://www.peltier-info.com/tims.html The link mentions that the material of the cold plate is not the only consideration with a thermal interface material (which this isn't, but the link is there to provide a list of claimed thermal conductivities of materials). One consideration that I haven't seen in the forums yet is the dimensions of the cold plate--a pad can only cool so much per kg of material, so there is an upper limit to the size of the plate before the pad loses to the ambient temperature of the material. So how big should the cold plate be, and what is the best material? Pure silver runs about $750 USD/kg; copper runs about $7.37/kg. When a CPU costs over $2k, spending $300 on a 41 kg chunk of raw copper doesn't seem unreasonable.
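For scale, a quick comparison of the two materials for an assumed plate size (the 100 x 100 x 10 mm dimensions are just a placeholder; prices are the ones quoted above, densities and conductivities are textbook values):

Code:
# Compare coldplate materials for an assumed plate volume.

PLATE_VOLUME_M3 = 0.100 * 0.100 * 0.010   # assumed 100 x 100 x 10 mm plate

materials = {
    #          density kg/m^3, conductivity W/(m*K), price USD/kg
    "copper": (8960.0, 401.0, 7.37),
    "silver": (10490.0, 429.0, 750.0),
}

for name, (density, k, price) in materials.items():
    mass = density * PLATE_VOLUME_M3
    print(f"{name:6s}: {mass:5.2f} kg, k = {k:.0f} W/m-K, "
          f"~${mass * price:,.0f} in raw material")

Silver buys only a few percent more conductivity for roughly a hundred times the material cost, which is why the real question is plate size rather than exotic metals.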

The power supply for the TE pad would depend on the pad's wattage, but apparently it doesn't need to hold up to the same stability standards motherboard PSUs must meet (i.e. jonnyGURU-style metrics). The link in the TEC forum sticky says a 320w PSU was modded for use on a 320w TE pad (probably run at 160w, if omaryunus was consistent with the typical "sweet spot" tuning). http://www.overclock.net/t/448415/guide-how-to-install-a-aux-psu-for-a-tec#post5402803
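A quick sanity check on that auxiliary PSU (my own numbers, not from the linked guide; I'm assuming the pad is fed straight off the 12 V rail):

Code:
# A pad run at 160 W from a 12 V rail needs roughly 13 A continuous,
# so the donor PSU's 12 V rail rating matters more than its total wattage.

PAD_POWER_W = 160.0      # 320 W pad run at the ~50% sweet spot
RAIL_VOLTAGE = 12.0      # assuming the pad is wired to the 12 V rail

current_a = PAD_POWER_W / RAIL_VOLTAGE
print(f"12 V rail draw: {current_a:.1f} A continuous")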

The hot-side cooling solution is the one I've seen others in the forum focus on. The oldest threads mention water-based cooling with in-line chillers (with the TEC used as the liquid chiller instead of sitting directly on the CPU or component being cooled). There is an entire forum branch devoted to phase-change coolers, so I'd like to know if anyone is combining phase change with a TEC.

The third cooling method used in conjunction with the others is submersion: submerge everything that can be submerged without raising the viscosity of the fluid to a point that requires hourly cleaning with a spatula and a pool net, and without submerging a component that will fail when submerged. I'd like to know what can and cannot be submerged aside from devices that have moving parts (like an optical media drive, which doesn't need cooling anyway, and probably any HDD).

The fourth is coldbox and whole-board cooling solutions. I don't know much about either, or whether there's even a difference. I think with a coldbox, the board is placed inside a sealed chamber, standing off from the chamber walls. A whole-board setup, from what I've seen, sits exposed outside of any case on top of a flat surface cooled from beneath.

I'd like to find out the best cooling solution for every situation, but there are too many variables in the specialized needs of systems for that to be practical. The first scenario would be the best-case-scenario system, in which the system is stationary. The previous thread on the topic focused on the maximum theoretical IO. http://hardforum.com/showpost.php?p=1040209462&postcount=39. There are still unanswered questions in that thread regarding different systems for different applications. Each step further from the CPU brings a lower maximum possible performance; the closer the IO gets to the CPU, the faster it is. Several potential systems would be ones designed for network IO, network with graphics, graphics IO only, IO from boot to testing, IO during SATA IO testing, IO during PCIE IO testing, IO during RAM IO testing, and IO as a CPU-weighted operation. CPU with RAM and without a GPU is the configuration overclockers focus on with exotic cooling techniques.

Overclockers using helium set a lower record temperature than they did with nitrogen. The method for that record was a simple vat dump. Systems using a multi-stage phase-change cascade could push that further. Brooks makes three-stage helium recirculation systems. Intel CPUs have a higher cold bug than AMD's, so unless there is a feature required by the system (e.g. PCIE 3.0 for a system with GPUs), it would use an unlocked CPU with the ability to reach lower temperatures--the Phenom. If it did require an Intel CPU, the system would still use helium multi-stage phase-change cooling to reach the minimum temperature, but would also use indirect cooling to avoid the cold bug. Brooks does not manufacture the parts required to cool a CPU, let alone parts that would indirectly cool it, and the same custom requirement goes for the other parts that would be cooled by the recirculation system. The parts would be unable to heat the helium to a point that would require more than one compressor per component--one would be more than enough to cool each component to its lowest point. In addition to the helium compressor, the board would be submerged in liquid coolant--I don't know enough about that topic to say which refrigerant would be best for a system whose hottest components are cooled by helium.

Some practical considerations: the compressor would be located in a different room from the computer system. The air conditioning would also need to be of a grade that could cool the compressor, so most residential and corporate air conditioners would be out of consideration. A Brooks system with a cryopump, compressor, and refrigerator would be about $52,000 USD. This doesn't include the cost of the helium, water, and water cooling system; the water cooler and container are not included with Brooks' compressor. Both rooms would use several hundred kilograms of copper shaped into acoustic-absorbing cones to minimize the temperature and the sound generated by the helium compressor, water cooler, and the computer system--the computer system wouldn't necessarily have to be in a room, but could sit in a small chamber that would be closed during testing.

Since L2 cache isn't really generalized across the current generations from Intel and AMD, there isn't much point in a speed test with it. Register IO is the fastest component of a CPU; the reasons I've seen against greater register counts and sizes have been attributed to the expense of registers, but the raw materials are no more expensive for a CPU than they are for a heatsink, which has thousands of times more copper than a CPU does. There's not much practical use for a register-only utility, but it's the fastest in terms of IO--I mention this because there are faster ways of sending data if data isn't considered something strictly practical: quantum mechanics research, namely at the LHC, scores the fastest in sending a signal from point A to point B, aside from photon speed.

https://docs.google.com/spreadsheet/ccc?key=0ApzIz1YSh_e7dEZYb3RORFZxU21jUDh0MWRRSkhhcUE#gid=0
 

· Registered · 399 Posts
One of the concerns with TECs is that the heat created will flow back once the power is cut off. I've seen some people actually melt a TEC after stacking three, running the stack for half an hour, and then turning it off. Of course, the TECs in question were purchased from some shady Chinese dealer that didn't really give any specs, but the three were able to reduce the temp enough that a thin layer of ice was forming. My friend made a bad decision in not putting a true CPU heatsink on top; he just used copper blocks intended for MOSFETs.
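A rough way to picture that soak-back (all masses and conductances below are invented, just to illustrate): once power is cut, the stack is nothing but a slab of thermal resistance, so the hot side dumps its stored heat straight back into the cold plate.

Code:
# Very rough lumped two-body model of heat soak-back through an unpowered
# TEC stack. Heat capacities and conductance are made-up illustration values;
# ambient losses and the CPU's own heat are ignored.

hot_t, cold_t = 60.0, -5.0     # temperatures when power is cut (C)
hot_c, cold_c = 200.0, 150.0   # lumped heat capacities, J/K (assumed)
k_stack = 1.5                  # passive conductance of the off stack, W/K
STEP_S = 1.0                   # simulation time step, seconds

for t in range(0, 301):
    if t % 60 == 0:
        print(f"t={t:3d}s  hot={hot_t:5.1f} C  cold={cold_t:5.1f} C")
    q = k_stack * (hot_t - cold_t)     # heat flowing hot -> cold, W
    hot_t -= q * STEP_S / hot_c
    cold_t += q * STEP_S / cold_c

Even in this toy version the cold side climbs well above ambient within a few minutes, which is roughly what cooked those undersized copper blocks.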
 

· Registered · 211 Posts · Discussion Starter · #4 ·
Quote:
Are you Howard Hughes or something?
No, I am not Howard Hughes.
I am one of the personalities that haunted him. Howard's imagination was so great that I survived beyond his very death.
I'm trying to skip all of the nerd-slapping and one-upping and figure out the lowest temperature sustainable with technology available today, including commissioning someone to put together equipment not intended for system cooling.

Please don't get sidetracked by the funding issue. This is a theoretical discussion for the most part, although I am interested in the different tiers of cooling (i.e. radiator water cooling; TEC chilling in the loop; on-die TECs; on-die TEC stacking; single-stage phase-change refrigerant cooling; multi-stage phase-change refrigerant cooling; hydrogen multi-stage phase-change cooling; and helium phase-change cooling).
Even if I don't have the resources personally to put a system together, there are others interested in experimental cooling, with the financial resources and technical knowledge, who might have some input. The knowledge is harder to get than the funding: it's easy to get even silly things crowdfunded, and there are also scientific grants available from research or corporate organizations. This could also be a productive venture (not for me, though; I don't have the engineering or scientific knowledge to pull it off): I'm sure there are scientific, military, or simply rich people interested in nerd-slapping and one-upping their peers with the coldest sustainable solution. There's a technology lag in consumer cooling products--eventually the technology shows up as a hand-me-down that doesn't require custom one-off building to make it happen.

Maybe I'll join Xtreme Systems' forum (they have an entrance fee, though); they seem to build experimental systems at least as much as they discuss them. The whole idea of helium recirculation in a multi-stage phase-change system is in this thread because one of their users works for a cryogenics company. I'd like to ask that user why he's using refrigerant instead of helium.
 