
How effective will GDDR5 be for the CPU? - Page 3

post #21 of 34
Quote:
Originally Posted by Phenomanator53 View Post

The CPU can handle big bandwidth, but there is no point in doing so.

Think of it like this: GDDR5 is a commercial jet engine and DDR3 is a car engine. The jet engine doesn't need to adjust its throttle all the time, and it has a HUGE delay in response time (mainly because of the turbine spinning up or down); in fact, while in the air, the throttle stays at the same level 99% of the time. The car engine's RPM, on the other hand, needs to change constantly, and a huge delay in response isn't going to be good for it.

Interesting comparison. While I believe there's truth to it, I don't think it's quite to that extent. GDDR5's latency may be higher, but the much higher bandwidth will make up for a lot of that.

Think about it this way: 3000MHz DDR3 has much looser timings than 1600MHz DDR3, and yet it performs much better. The huge boost in frequency, and therefore bandwidth, makes up for it. In fact, since latency in nanoseconds is cycles divided by clock speed, the absolute latency can end up lower despite the looser timings.
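To put rough numbers on that (a quick sketch in Python; the kits and timings are just illustrative examples, not specific products):

Code:
# First-word latency in nanoseconds: CAS cycles divided by the memory clock.
# DDR transfers twice per clock, so the clock in MHz is half the MT/s rating.
def cas_latency_ns(cas_cycles, data_rate_mts):
    clock_mhz = data_rate_mts / 2         # DDR: two transfers per clock cycle
    return cas_cycles / clock_mhz * 1000  # cycles / MHz = microseconds; x1000 -> ns

print(cas_latency_ns(9, 1600))   # e.g. DDR3-1600 CL9  -> 11.25 ns
print(cas_latency_ns(12, 3000))  # e.g. DDR3-3000 CL12 -> 8.0 ns

So the "looser" CL12 kit would actually answer sooner in wall-clock time.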
Quote:
Originally Posted by jsc1973 View Post

Yes, in that case, you would want to opt for DDR3 for the system RAM.

Well, with DDR4 right around the corner, we may not have to worry about it; it'll provide more bandwidth with mostly the same tight timings as DDR3.
 
post #22 of 34
Thread Starter 
Quote:
Originally Posted by jsc1973 View Post

Yes, in that case, you would want to opt for DDR3 for the system RAM.

That's also what I think. Here's a link showing how much of a performance boost the different RAM speeds give: http://www.anandtech.com/show/6372/memory-performance-16gb-ddr31333-to-ddr32400-on-ivy-bridge-igp-with-gskill/9 It seems to really depend on the game, but isn't the latency lower when you increase those speeds?

Don't worry about the question; it's been answered by the post above.
Edited by SteZHD - 6/23/13 at 12:47am
post #23 of 34
Actually, it's well known that faster memory scales performance much better with APUs.
Also, depending on how the system software is written, it might be pretty damn fast if the whole thing fully exploits parallel processing.

Food for thought.
post #24 of 34
Thread Starter 
Quote:
Originally Posted by MuzicFreq View Post

Actually, it's well known that faster memory scales performance much better with APUs.
Also, depending on how the system software is written, it might be pretty damn fast if the whole thing fully exploits parallel processing.

Food for thought.

Interesting. I don't have much knowledge of APUs, but I suppose the setup has to have advantages. It's difficult to compare the RAM configurations of the two consoles, but while the XB1's DDR3 will run at 2133MHz, I suspect the eSRAM will run faster, so I rearranged a calculation; correct me if I'm wrong:

(192 GB/s x 8 GB) / 256-bit = 6 GHz x 1000 = 6000 MHz

I don't know if that's right, but it might be; I'd think the embedded SRAM would run at a higher effective frequency than DDR3. I appreciate the added information, thank you.

By the way, the PS4's GDDR5 is at 5500MHz, just for comparison, but I'm sure you know that.
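If it helps, the usual relation between data rate, bus width and bandwidth checks out against both consoles' headline figures (a quick sketch; 256-bit is the commonly reported bus width for both):

Code:
# Bandwidth (GB/s) = data rate (MT/s) x bus width (bits) / 8 bits-per-byte / 1000
def bandwidth_gbs(data_rate_mts, bus_bits):
    return data_rate_mts * bus_bits / 8 / 1000

print(bandwidth_gbs(5500, 256))  # PS4 GDDR5 -> 176.0 GB/s
print(bandwidth_gbs(2133, 256))  # XB1 DDR3  -> ~68.3 GB/s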
post #25 of 34
Quote:
Originally Posted by SteZHD View Post

(192 GB/s x 8 GB) / 256-bit = 6 GHz x 1000 = 6000 MHz

I'm not going to pretend that I understand how this calculation works, but why did you multiply the speed of the eSRAM by 8GB? Isn't the eSRAM only 32MB?
post #26 of 34
The PS4 is a variant of the current x86 PC architecture we know. It is the first iteration of HSA/DirectCompute-enabled hardware, so it's the GPU that is going to be exploited to its fullest.
If it were Intel+Nvidia hardware underneath the PS4 banner, it would make sense to treat DDR3 as the more feasible option, with a fast CPU driving the memory as hard as possible. However, AMD is not relying on CPU speed to leverage the hardware's capabilities; it is going to demonstrate HSA and UMA on the PS4 as best it can. The hardware is a slower, highly multithreaded CPU working in unison with the GPU. The point is that, since they are on the same chip, the latency between them is effectively zero, so if they can share data over one datapath, the GPU can lend the CPU a hand, eliminating the GDDR5 memory latency bottleneck altogether. According to the PS4 presentations (which I somehow cannot find anymore), the GPU will be able to run 64-128 different streams of code at once. So, with the issues at hand being DDR3/GDDR5 memory latency (the biggest inhibitor of the current gen) and CPU/GPU bottlenecks, I think the PS4 is going to point toward the future of computing as its software side matures.
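To make the zero-copy point concrete, here's a rough, hypothetical cost model (the PCIe numbers are assumptions for a typical discrete-GPU PC, not measurements): on a discrete card every CPU-to-GPU hand-off pays for a copy across PCIe, while on a unified-memory APU the two sides can simply exchange a pointer.

Code:
# Hypothetical hand-off cost: discrete GPU copies over PCIe, UMA shares a pointer.
PCIE_BW_GBS = 16.0    # assumed PCIe 3.0 x16 bandwidth
PCIE_SETUP_US = 1.0   # assumed per-transfer setup latency

def discrete_handoff_us(buffer_mb):
    copy_us = buffer_mb / 1024 / PCIE_BW_GBS * 1e6  # MB -> GB, seconds -> us
    return PCIE_SETUP_US + copy_us

def uma_handoff_us(buffer_mb):
    return 0.0  # same physical memory: pass a pointer, no copy

print(discrete_handoff_us(64))  # ~3907 us to ship a 64 MB buffer
print(uma_handoff_us(64))       # 0.0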
post #27 of 34
Thread Starter 
Quote:
Originally Posted by Tempest2000 View Post

I'm not going to pretend that I understand how this calculation works, but why did you multiply the speed of the eSRAM by 8GB? Isn't the eSRAM only 32MB?

I actually considered that, but it can't be right, which is why I'm wondering if someone who knows how embedded SRAM works can check it over.

(192 GB/s x 0.032 GB) / 256-bit = 0.024 GHz x 1000 = 24 MHz

So yeah, that would be incredibly slow, and I think eSRAM is capable of a higher frequency than GDDR5; it would make sense given all the transistors required.
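Thinking about it more, the capacity probably shouldn't appear in the formula at all; the factor of 8 that makes the first version come out right would be bits per byte, not gigabytes. Rearranging the same relation as above (my guess, so check me on this):

Code:
# Effective per-pin data rate from bandwidth and bus width.
# The 8 is bits per byte; capacity (32 MB vs 8 GB) never enters into it.
def effective_rate_mts(bandwidth_gbs, bus_bits):
    return bandwidth_gbs * 8 * 1000 / bus_bits

print(effective_rate_mts(192, 256))  # XB1 eSRAM figure -> 6000 MT/s
print(effective_rate_mts(176, 256))  # PS4 GDDR5        -> 5500 MT/s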
post #28 of 34
Thread Starter 
Quote:
Originally Posted by mtcn77 View Post

The PS4 is a variant of the current x86 PC architecture we know. It is the first iteration of HSA/DirectCompute-enabled hardware, so it's the GPU that is going to be exploited to its fullest.
If it were Intel+Nvidia hardware underneath the PS4 banner, it would make sense to treat DDR3 as the more feasible option, with a fast CPU driving the memory as hard as possible. However, AMD is not relying on CPU speed to leverage the hardware's capabilities; it is going to demonstrate HSA and UMA on the PS4 as best it can. The hardware is a slower, highly multithreaded CPU working in unison with the GPU. The point is that, since they are on the same chip, the latency between them is effectively zero, so if they can share data over one datapath, the GPU can lend the CPU a hand, eliminating the GDDR5 memory latency bottleneck altogether. According to the PS4 presentations (which I somehow cannot find anymore), the GPU will be able to run 64-128 different streams of code at once. So, with the issues at hand being DDR3/GDDR5 memory latency (the biggest inhibitor of the current gen) and CPU/GPU bottlenecks, I think the PS4 is going to point toward the future of computing as its software side matures.

I see, so the latency between the CPU and GPU essentially disappears since they're on the same chip. That's perfect for gaming, but what about CPU-related tasks? If the PS4 were to multitask like the XB1, say gaming while watching TV on a split screen, wouldn't latency then become a factor? Not to mention background tasks like recording footage, etc. (I know the PS4 has a secondary processor for this.)

I hope my question wasn't confusing. I can see it working on the gaming side of things; it's the background tasks that don't involve the GPU pushing out pixels that have me wondering. Hopefully I haven't turned my questions into an enigma.
Edited by SteZHD - 7/10/13 at 6:27am
post #29 of 34
Quote:
Originally Posted by SteZHD View Post

I see, so the latency between the CPU and GPU essentially disappears since they're on the same chip. That's perfect for gaming, but what about CPU-related tasks? If the PS4 were to multitask like the XB1, say gaming while watching TV on a split screen, wouldn't latency then become a factor? Not to mention background tasks like recording footage, etc. (I know the PS4 has a secondary processor for this.)

I hope my question wasn't confusing. I can see it working on the gaming side of things; it's the background tasks that don't involve the GPU pushing out pixels that have me wondering. Hopefully I haven't turned my questions into an enigma.
They rely on the same hardware, so in theory both are equally capable. I suppose HSA/UMA will primarily help the CPU's tasks: games will run as they do now, yet memory hand-offs between the CPU and GPU will be effectively instantaneous, so you will switch between screens without any lag whatsoever. When speaking of latency, you are still comparing the consoles to PCs; the consoles should have roughly a 10-nanosecond latency advantage per transfer over PCs, if I'm guessing right.
post #30 of 34
Thread Starter 
Quote:
Originally Posted by mtcn77 View Post

They rely on the same hardware, so in theory both are equally capable. I suppose HSA/UMA will primarily help the CPU's tasks: games will run as they do now, yet memory hand-offs between the CPU and GPU will be effectively instantaneous, so you will switch between screens without any lag whatsoever. When speaking of latency, you are still comparing the consoles to PCs; the consoles should have roughly a 10-nanosecond latency advantage per transfer over PCs, if I'm guessing right.

Yes, I should stop making direct comparisons with PCs, though it's still interesting to know where the consoles stand against such a diverse range of hardware. The advantages are becoming much clearer to me; it's the little things, like the CPU not having to copy data to GPU memory and copy it back after the GPU is done computing.

Despite the lack of raw power, I still like the PS4's design; it's very well done for the budget that was allocated, and Mark Cerny did a great job. Thanks for the input, I appreciate the added information on the topic.

You probably knew, but Mark Cerny had the choice to use embedded DRAM instead. It would have had a bandwidth of 1088GB/s (you'd need four 7970s, or two 7990s in QuadFire, to get that much), but the main memory bus would have been 128-bit, compared to 256-bit with GDDR5 alone. That would also have been an interesting design, but I think going with GDDR5 is much easier overall; they would have had to spend more money and time figuring out how best to utilize the eDRAM, and I don't remember him saying anything about its size, but I'd guess 32MB if he had implemented it.
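Putting numbers on that trade-off (a sketch; I'm assuming the same 5500MHz GDDR5 on the narrower bus, purely for illustration):

Code:
# Main-memory bandwidth under each design, same relation as earlier:
print(5500 * 256 / 8 / 1000)  # shipped PS4: 176.0 GB/s unified pool
print(5500 * 128 / 8 / 1000)  # eDRAM option: 88.0 GB/s main RAM + 1088 GB/s small eDRAM pool

So the eDRAM route would have halved main-memory bandwidth and leaned on a small, super-fast pool, much like the XB1's eSRAM approach.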