
[ExpertReviews] HSA + AMD APUs means 500% increases in application performance - Page 2

post #11 of 30
Thread Starter 

The thing about full HSA is that it integrates a new technology called hUMA - heterogeneous unified memory architecture. It lets the CPU and GPU access the very same memory pool, which makes HSA-enabled applications as easy to code as any other app.

 

That's the problem with current CPU + GPU-accelerated solutions. Theoretically, HSA-class performance already exists. It's not widespread in programs, however, because it requires very specialized coding that is costly to train for and implement. CPUs and GPUs can already work in tandem to some extent, but they're still separate entities in a way that takes more effort to optimize for. HSA and hUMA solve that problem: they make these optimizations accessible and create new types of optimizations that can aid a large number of programs and make HSA-enabled hardware pull ahead in the computing market.

 

HSA takes an existing AMD APU and allows you to get more performance from it - without the higher cost and higher development time on the side of the program developers. It's the ultimate mass-boost to efficiency that's good for everyone, because it's good for developers (they can get more performance with the same effort and same computing power), good for AMD (increases in things like IPC do not matter as much anymore and this lowers development costs and offers a solution for the slow-down of Moore's Law), and good for consumers (higher performance everywhere will cost less for us, because it costs less for the developer and the chip-maker).

 

Intel has not been interested in HSA - I think the reason is that it sort of goes against what drives Intel, which is pure x86 coding and x86 performance - and that, in turn, stems from Intel's inability to develop a really good integrated GPU.


Edited by xd_1771 - 5/25/13 at 12:58pm
post #12 of 30
Quote:
Originally Posted by xd_1771 View Post

HSA takes an existing AMD APU and allows you to get more performance from it - without the higher cost and higher development time on the side of the program developers. It's the ultimate mass-boost to efficiency that's good for everyone, because it's good for developers (they can get more performance with the same effort and same computing power), good for AMD (increases in things like IPC do not matter as much anymore and this lowers development costs and offers a solution for the slow-down of Moore's Law), and good for consumers (higher performance everywhere will cost less for us, because it costs less for the developer and the chip-maker).

So will this allow the APU to process anything that a normal CPU could, but with the parallelism of a GPU, or only certain tasks?

I'm a bit confused on the importance of this, and why they couldn't just use a GPU design as the CPU to accomplish the same thing.
Is it because a CPU is still best for some things, where a GPU would be best for others? Or would it be the CPU's job to break down the information and feed it across the GPU?
post #13 of 30
I'm looking forward to this. I'm hoping that some of the more CPU bound RTS games (Starcraft, WoW, and the like) are able to take advantage of this for some performance increases at higher resolutions... Also excited for the flash speed increase....
post #14 of 30
Quote:
Originally Posted by ComputerRestore View Post



So will this allow the APU to process anything that a normal CPU could, but with the parallelism of a GPU, or only certain tasks?

I'm a bit confused on the importance of this, and why they couldn't just use a GPU design as the CPU to accomplish the same thing.
Is it because a CPU is still best for some things, where a GPU would be best for others? Or would it be the CPU's job to break down the information and feed it across the GPU?

The code needs to be parallel for that to work.

The lowest functional unit in a CPU or GPU is the ALU; a CPU bundles them into a few complex "cores". GPUs have far more ALUs, but they can't do as many different tasks or switch between tasks quickly. I don't remember which site it was, but they compared a CPU core to a smart worker and a GPU lane to a simple, repetitive worker (with no out-of-order execution or branch prediction) - except you can have many times the workers.

Also you have to keep in mind memory access: 6-7 GHz effective is attainable with GDDR5, but the top stock speed for regular DRAM is only DDR3-2133.

x86 cores are more powerful than GPU cores, but GPU cores are built for parallel workloads. I think that's why the floating point unit in Vishera is only 1 per module - they want to offload that work to GPUs. At the end of the day I suspect parallel workloads - rendering, fluid or thermal simulation, Monte Carlo simulations, or anything with "for loop" iterations that don't depend on prior results - would be done by the "many core" solution.
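As a concrete (hypothetical) example of the kind of "for loop" that maps well onto many simple workers - every iteration is independent, so they can all run at once:

```cpp
#include <cstddef>
#include <vector>

// SAXPY-style loop: out[i] depends only on a[i] and b[i], never on
// out[i-1], so the iterations can be spread across thousands of GPU
// lanes (or CPU SIMD lanes) with no coordination between them.
std::vector<float> saxpy(float scale,
                         const std::vector<float>& a,
                         const std::vector<float>& b) {
    std::vector<float> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = scale * a[i] + b[i];
    return out;
}
```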

The HSA implementation skips the step of the CPU sending the data from RAM to the GPU.
Instead of RAM -> CPU -> GPU -> VRAM, you have CPU -> (shared) RAM and GPU -> (shared) RAM.
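A rough way to picture the difference in data paths, with plain C++ standing in for the real APIs (a conceptual sketch only, not actual HSA code):

```cpp
#include <vector>

// Pre-HSA model: data is staged into a separate device buffer,
// processed, then copied back - two extra copies per dispatch.
std::vector<int> run_with_copies(const std::vector<int>& host_buf) {
    std::vector<int> device_buf = host_buf;   // RAM -> VRAM copy
    for (int& v : device_buf) v *= 2;         // the "GPU kernel"
    return device_buf;                        // VRAM -> RAM copy
}

// hUMA-style model: CPU and GPU address the same allocation, so the
// "kernel" works on the shared buffer in place - zero copies.
void run_shared(std::vector<int>& shared_buf) {
    for (int& v : shared_buf) v *= 2;
}
```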

The only problem I see with this is that viruses/malware accessing the GPU would have access to RAM itself - right now the path of infection is only through the CPU, since the CPU controls GPU function. I bet Intel is going to spread fear over this. What's next? GPU antivirus?

AMD has already thought of something like that:
"HSA isn't just for CPUs with integrated GPUs. In principle, the other processors that share access to system memory could be anything, such as cryptographic accelerators, or programmable hardware such as FPGAs. They might also be other CPUs, with a combined x86/ARM chip often conjectured. Kaveri will in fact embed a small ARM core for creation of secure execution environments on the CPU. Discrete GPUs could similarly use HSA to access system memory."
hUMA:
hUMA's key features are as follows:

Bi-Directional Coherent Memory - This means that any updates made by one processing element will be seen by all other processing elements such as the CPU or GPU.

Pageable Memory - This allows the GPU to handle page faults the same way that the CPU does. This also removes restricted page locked memory.

Entire Memory Space - Both the CPU and GPU processes can dynamically allocate memory from the entire memory space. This means they have access to not only the physical memory, but also the entire virtual memory address space.

AMD's solution excels with parallel processing (for loop). Intel dominates in single threaded apps that branch (i.e. if then elseif).
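For contrast, a hedged sketch of the branch-heavy, serially dependent code that favours a big out-of-order CPU core - each step both branches and depends on the previous result, so it can't be spread across many simple workers:

```cpp
// Collatz-style chain (illustrative example): the if/else branch on
// every step, plus the dependency on the previous value, make this a
// worst case for a GPU and a best case for a CPU with good branch
// prediction.
int iterate(int x, int steps) {
    for (int i = 0; i < steps; ++i) {
        if (x % 2 == 0) x = x / 2;
        else            x = 3 * x + 1;
    }
    return x;
}
```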
Long explanation at arstechnica:
The CPU and GPU have their own pools of memory. Physically, these might use the same chips on the motherboard (as most integrated GPUs carve off a portion of system memory for their own purposes). From a software perspective, however, these are completely separate.

This means that whenever a CPU program wants to do some computation on the GPU, it has to copy all the data from the CPU's memory into the GPU's memory. When the GPU computation is finished, all the data has to be copied back. This need to copy back and forth wastes time and makes it difficult to mix and match code that runs on the CPU and code that runs on the GPU.

The need to copy data also means that the GPU can't use the same data structures that the CPU is using. While the exact terminology varies from programming language to programming language, CPU data structures make extensive use of pointers: essentially, memory addresses that refer (or, indeed, point) to other pieces of data. These structures can't simply be copied into GPU memory, because CPU pointers refer to locations in CPU memory. Since GPU memory is separate, these locations would be all wrong when copied.

hUMA is the way AMD proposes to solve this problem. With hUMA, the CPU and GPU share a single memory space. The GPU can directly access CPU memory addresses, allowing it to both read and write data that the CPU is also reading and writing.

http://arstechnica.com/information-technology/2013/04/amds-heterogeneous-uniform-memory-access-coming-this-year-in-kaveri/
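The pointer problem the article describes can be sketched in a few lines of C++ (a simplified illustration, not GPU code): a node copied byte-for-byte into a separate "device" buffer still carries its old addresses.

```cpp
#include <cstring>

struct Node {
    int value;
    Node* next;   // holds a CPU virtual address
};

// Simulate a byte-for-byte upload into a separate device buffer: the
// pointer value is copied unchanged, so it still refers to CPU memory,
// which would be meaningless in a disjoint GPU address space. With
// hUMA the address spaces are unified, so the pointer stays valid.
Node shallow_copy(const Node& n) {
    Node copy;
    std::memcpy(&copy, &n, sizeof(Node));
    return copy;
}
```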

http://www.eecs.berkeley.edu/~sangjin/2013/02/12/CPU-GPU-comparison.html
Quote:
I am a huge fan of the ISCA 2010 paper, “Debunking the 100X GPU vs. CPU Myth”, and it was indeed a kind of guideline for our work to not repeat common mistakes. Some quick takeaways from the paper are:

100-1000x speedups are illusions. The authors found that the gap between a single GPU and a single multi-core CPU narrows down to 2.5x on average, after applying extensive optimization for both CPU and GPU implementations.

The expected speedup is highly variable depending on workloads.

For optimal performance, an implementation must fully exploit opportunities provided by the underlying hardware. Many research papers tend to do this for their GPU implementations, but not much for the CPU implementations.

In summary, for a fair comparison between GPU and CPU performance for a specific application, you must ensure to optimize your CPU implementation to the reasonably acceptable level. You should parallelize your algorithm to run across multiple CPU cores. The memory access should be cache-friendly as much as possible. Your code should not confuse the branch predictor. SIMD operations, such as SSE, are crucial to exploit the instruction-level parallelism.

Edited by AlphaC - 5/25/13 at 9:21pm
post #15 of 30
Quote:
Originally Posted by xd_1771 View Post

The thing about full HSA is that it integrates a new technology called hUMA - heterogeneous unified memory architecture. It lets the CPU and GPU access the very same memory pool, which makes HSA-enabled applications as easy to code as any other app.

Yeah, when I first heard of HSA I wasn't impressed, because I figured programmers would have to develop two different code paths to support HSA and non-HSA systems. But the use of virtual pointers - so that the GPU and CPU memory spaces appear separate but actually point to the same space - is pretty clever.
post #16 of 30
Quote:
Originally Posted by AlphaC View Post

The code needs to be parallel for that to work. [...]

Thanks for the explanation. When you put it in that order (compared to reading bits from different articles) it makes a lot more sense. Even though I have no clue about programming I think I get it and although I was excited/interested in it before, now I'm even more so.
Quote:
Originally Posted by BinaryDemon View Post

Yeah, when I first heard of HSA I wasn't impressed, because I figured programmers would have to develop two different code paths to support HSA and non-HSA systems. But the use of virtual pointers - so that the GPU and CPU memory spaces appear separate but actually point to the same space - is pretty clever.

So "virtual pointers" being one of those [if then else] that HSA would be able to use and non-HSA would be further slowed down by (more code to process)?
Edited by ComputerRestore - 5/26/13 at 1:45pm
post #17 of 30
Quote:
Originally Posted by ComputerRestore View Post

So "virtual pointers" being one of those [if then else] that HSA would be able to use and non-HSA would be further slowed down by (more code to process)?

I'm no programmer either but I got the impression that the virtual pointer was supported at a higher level of the OS, so that it would be invisible to programs. Thus programmers can still program the old way with separate GPU/CPU memory spaces and something like an HSA driver will intercept the calls to memory?
post #18 of 30
If I read correctly in other places, HSA/hUMA will be usable from C++ coding.
    
post #19 of 30
I do have a question about how this will work... With me holding off on an upgrade, say I do get the upcoming APU of whatever type, and there is a game that takes advantage of HSA. Now I have a dedicated GPU as well (say an AMD Radeon 7970 for the sake of simplicity). Will HSA extend to that GPU as well for a possible performance increase, or will only the APU be helping the main CPU cores in whatever tasks are coded to take advantage of HSA? I would assume in the case of an Nvidia GPU that only the APU graphics core is going to accelerate, as Nvidia isn't part of the HSA Foundation, but for a Radeon GPU I wonder what the limits will be. (I'm sorry if I'm not being very clear... I have a question, but it's kinda difficult to put into words since we've no real-world tests of this upcoming technology.)
post #20 of 30
Quote:
Originally Posted by iamwardicus View Post

I do have a question about how this will work... [...] Will HSA extend to that GPU as well for a possible performance increase, or will only the APU be helping the main CPU cores in whatever tasks are coded to take advantage of HSA?

My guess is that HSA would be limited to the APU since the CPU/GPU are literally sharing the same memory. If you add a discrete GPU then you have the same overhead of sending data over the PCI-E bus to the dedicated vram.