
[Guru3D] SK Hynix Launches Fastest 8Gb Graphics DRAM (GDDR6) - Adopted by 2018

#1 ·
http://www.guru3d.com/news-story/sk-hynix-launches-fastest-8gb-graphics-dram-(gddr6).html
Quote:
SK Hynix introduced the world's fastest 2Znm 8Gb (gigabit) GDDR6 (Graphics DDR6) DRAM. The product operates with an I/O data rate of 16Gbps (gigabits per second) per pin, which is the industry's fastest. On a forthcoming high-end graphics card with 384-bit I/Os, this DRAM processes up to 768GB (gigabytes) of graphics data per second.

SK Hynix has been planning to mass-produce the product for a client to release a high-end graphics card equipped with high-performance GDDR6 DRAM by early 2018.

Yummy.
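For anyone checking the math: peak GDDR bandwidth is just the per-pin rate times the bus width, divided by 8 bits per byte. A minimal Python sketch (the function name is my own):

Code:

def gddr_bandwidth_gbs(pin_rate_gbps, bus_width_bits):
    # Peak theoretical bandwidth in GB/s: Gbps per pin * number of pins / 8 bits per byte
    return pin_rate_gbps * bus_width_bits / 8

print(gddr_bandwidth_gbs(16, 384))  # 768.0 GB/s, the figure quoted in the press release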
 
#6 ·
Funny how all these new things appear when HBM was the tech of the future. Yet again, AMD has no power here, and NVIDIA is pushing for tech where they do not need to pay royalties to AMD.
 
#8 ·
Quote:
Originally Posted by Wishmaker View Post

Funny how all these new things appear when HBM was the tech of the future. Yet again, AMD has no power here, and NVIDIA is pushing for tech where they do not need to pay royalties to AMD.
There is no comparison between HBM and GDDR5X or GDDR6. If we used GDDR6 on, say, a GTX 1080, we would get 512 GB/s of bandwidth, as much as the Radeon R9 Fury X.
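The 512 GB/s figure checks out with the same per-pin arithmetic, assuming the GTX 1080's 256-bit bus:

Code:

print(16 * 256 / 8)  # 512.0 GB/s at 16 Gbps on a 256-bit bus, matching the Fury X's HBM1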

HBM has no royalties
Quote:
"AMD is not involved in collecting any royalties for HBM," said Iain Bristow, a spokesman for AMD. "We are actively encouraging widespread adoption of all HBM associated technology on [Radeon R9] Fury products and there is no IP licensing associated."
That's why Nvidia is using it too.

HBM3 (2019) will have double the bandwidth of HBM2.
 
#9 ·
Quote:
Originally Posted by Arturo.Zise View Post

So all those people who bought a Titan Xp or 1080 Ti had better savor every minute until these release and wipe the floor with them for the same price.
You do realise this happens with every generation? The only way to avoid it is not buying graphics cards, or any PC hardware at all.
Besides, most of the people who buy a 1080 Ti or Titan want the fastest card available and will just upgrade again. I know I will.
 
#10 ·
Quote:
Originally Posted by RobotDevil666 View Post

You do realise this happens with every generation? The only way to avoid it is not buying graphics cards, or any PC hardware at all.
Besides, most of the people who buy a 1080 Ti or Titan want the fastest card available and will just upgrade again. I know I will.
This^^^ We all know there are going to be faster cards in the pipeline. To circle back to the main point: while GDDR memory continues to make gains in speed, and therefore bandwidth, power usage has been getting out of control. I would be very interested in knowing the power draw of the new chips.
 
#11 ·
Quote:
Originally Posted by RobotDevil666 View Post

You do realise this happens with every generation? The only way to avoid it is not buying graphics cards, or any PC hardware at all.
Besides, most of the people who buy a 1080 Ti or Titan want the fastest card available and will just upgrade again. I know I will.
Pretty sure that's the point. Keep those consumers chasing the end of the rainbow = $$$.
 
#12 ·
Quote:
Originally Posted by Arturo.Zise View Post

Pretty sure that's the point. Keep those consumers chasing the end of the rainbow = $$$.
What rainbow? As far as I can see, I have a very good card for the money I paid. What is your solution here? Spend millions on R&D to produce video cards and give them away for free?
Is that your business plan?
In a few months there will be a new card that is faster than mine for the same money. This is how everything works: if you can't afford the performance now, you can wait and get it cheaper later.
It's the same with cars. For example, I couldn't afford a new BMW M4 when it came out, but now I can, so the guy who bought it 3 years ago will upgrade to the new generation coming next year and I'll buy this one.
Exactly the same thing.
I really fail to see your point here.
 
#14 ·
Quote:
Originally Posted by Just a nickname View Post

2018 seems so far away until I realise we're in 2017 already.
Quite the observation!
 
#15 ·
Quote:
Originally Posted by jarble View Post

This^^^ We all know there are going to be faster cards in the pipeline. To circle back to the main point: while GDDR memory continues to make gains in speed, and therefore bandwidth, power usage has been getting out of control. I would be very interested in knowing the power draw of the new chips.
Yup, new GPU generations still seem to outpace CPUs.

I still don't get all the crap NV gets when AMD released a whole new series with a BIOS update. To me that's so much worse.
 
#16 ·
Quote:
Originally Posted by mmonnin View Post

Yup, new GPU generations still seem to outpace CPUs.

I still don't get all the crap NV gets when AMD released a whole new series with a BIOS update. To me that's so much worse.
Yeah, that is so true. If AMD is such a good guy, this should be available free of charge; it's just a new BIOS, really.
 
#17 ·
Quote:
Originally Posted by czin125 View Post

They could release a Titan Xp with these to max out the bandwidth. GP100 can be 3-20% faster than GP102 despite lower clocks and a lower core count.
The P100T doesn't have a frame buffer, and no matter what nVidia has said, it's the CPUs that make the images from the component bits in the DGX-1. Additionally, the P100T contains further intercommunication channels and other features that are uselessly expensive and wouldn't be functional in any way in a graphics card. There is not and won't ever be a "GP100" chip, no matter how many times nVidia or other webdiots type it that way. There may be a GP101, though.

That's the problem with these kinds of forums: they're filled with well-meaning but misinformed posts full of inaccurate jargon.

The RAM speed on the Titan Xp is limited to the GDDR5X clock range, and GDDR6 would require a different memory handler in the chip - thus it would not be a GP102 any more. GDDR6 simply isn't ready for market yet and won't be for more than half a year at the shortest; we'll be well into Pascal 2 before we see this RAM implemented, if it can even survive against the ramped-up HBM2 production and capabilities, since array math can be done directly inside HBM2 stacks and they have full bi-directional active communication.

The thing to remember here is that GDDRn memory can either be read or written, and both operations must occur at the full 256- or 384-bit width. HBM2, by contrast, operates with two bi-directional 64-bit channels per stack that run at absurdly high speeds; each stack can be read or written at 128 bits wide, OR read AND written at 64 bits wide concurrently. This means that, unlike all of the GDDRn memory cards, the HBM2 cards (and in part the HBM1 Fury, which was limited to the 128-bit send-or-receive mode) can process large amounts of information in a streaming manner instead of having to pull then push data in oscillation. There are little cache chunks on the bottoms of the HBM2 stacks, with the actual HBM2 memory controllers connected to them, that can send/receive from the GPU and the PCIe bus at the same time.

Additionally, at this time we're only utilizing about 25% of the bandwidth that HBM2 can achieve (it is technically possible to place 8 stacks of HBM2 onto a single GPU substrate/interposer). HBM2 is rated for 256GB/s per stack... bytes, not bits... per stack. The Vega 10 coming out has 512GB/s (4096Gb/s) native, and the theoretical on GDDR6 is 448GB/s on the same 256-bit bus as two stacks of HBM2 (2x64 times 2), but that bandwidth cannot operate in bi-directional mode, nor can it operate as four or eight separate banks depending on the task. This is important since graphics is actually done in 32 bits. Even at a 384-bit bus, GDDR6 at 672GB/s theoretical is still not going to be any more effective than two stacks of HBM2.

Memory bandwidth isn't the only measure of how good a memory solution can be. Look at LGA2011 v1 vs v3, with the HUGE difference in theoretical bandwidth you can get with DDR4-3200 over DDR3-2166: under load in a multi-task environment the difference all but evaporates, but quad-channel DDR3 will spank dual-channel DDR4 all day long in a high-thread, multi-task workload. My i7-3960X with 32GB of DDR3-1866 even keeps right up with my wife's i7-5930 with 16GB of DDR4-3200 on these kinds of workloads.
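As a quick sanity check, the figures quoted in this post are internally consistent if GDDR6 is assumed to run at 14 Gbps per pin rather than the 16 Gbps in the news item (these are the poster's numbers, not confirmed specs):

Code:

hbm2_per_stack_gbs = 256        # GB/s per stack, as claimed above
print(2 * hbm2_per_stack_gbs)   # 512 GB/s for two stacks, the Vega 10 figure cited

print(14 * 256 / 8)  # 448.0 GB/s, the quoted 256-bit GDDR6 theoretical
print(14 * 384 / 8)  # 672.0 GB/s, the quoted 384-bit GDDR6 theoretical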

So GDDR6 isn't that great a solution to "increase" the power of graphics cards. nV and AMD will likely both use it to the hilt because, in the end, it's less expensive than HBM2 -

... but simply adding it to an existing GPU type isn't going to do more than lower the TDP of the card a little bit *****IF***** it uses the same communication voltages as GDDR5X.

I've no doubt that nV has plans for making it bark like a raving hellhound, but it's not gonna be on the current generation of cards.
 
#18 ·
Quote:
Originally Posted by prjindigo View Post

Volta is HBM-structured, which is more than just where the wires go, and requires a large change in the architecture of the GPU's pattern.
An architecture is only the rough layout of the components. There are different patterns, like you already described. Compare current Pascal GPUs: the workstation GP100 is HBM2 with a different memory controller, while the GP102 of the Titan Xp / 1080 Ti is GDDR5X.

To this point it is not clear exactly what memory configuration Volta will use. If the scheme of Pascal repeats, then the biggest Volta will have HBM2 and the rest either GDDR5X again or the new GDDR6 from the news. The config should depend on the class of the card.

All in all, this change will bring a nice jump. Coming from 9 -> 11 Gbps in the smaller performance classes, and from 11 -> 16 Gbps at the top, is godly.
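For scale, the relative gains those steps imply, from simple arithmetic on the quoted pin rates:

Code:

print((11 - 9) / 9 * 100)    # ~22% more bandwidth for the 9 -> 11 Gbps classes
print((16 - 11) / 11 * 100)  # ~45% more for the 11 -> 16 Gbps step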
 
#19 ·
Quote:
Originally Posted by Wishmaker View Post

Funny how all these new things appear when HBM was the tech of the future. Yet again, AMD has no power here, and NVIDIA is pushing for tech where they do not need to pay royalties to AMD.
Nah, HBM is a JEDEC standard. AMD helped create it, and I believe SK Hynix as well, but it's a standard. They got priority shipments on HBM1 for a bit, is all.

Quote:
Originally Posted by czin125 View Post

They could release a Titan Xp with these to max out the bandwidth. GP100 can be 3-20% faster than GP102 despite lower clocks and a lower core count.
Probably not. It's different enough that the IMC would need a redesign.
 
#20 ·
Quote:
Originally Posted by Wishmaker View Post

Funny how all these new things appear when HBM was the tech of the future. Yet again, AMD has no power here, and NVIDIA is pushing for tech where they do not need to pay royalties to AMD.
It very much still is. If there weren't manufacturing issues in the way, you'd be seeing HBM2 used on high-end graphics products right now. HBM1 isn't very attractive right now due to its low density. It's a first-generation technology in most senses.

It's very hard for traditional memory to compete when the HBM family provides:
-> Much higher bandwidth if you want it
-> Much lower power consumption
-> Much smaller PCB footprints

Its one primary downside is increased cost, and cost is something that is eroded by time and volume.

It's just another step in the long evolution of integrating more and more stuff together. That it follows the evolutionary pattern that has been the cornerstone of computer technology since the invention of the integrated circuit should lead one to predict that it has lasting potential, being widely deployed in the future on high-performance computing devices in the form of memory arrays and high-capacity caches.
 
#21 ·
Quote:
Originally Posted by mmonnin View Post

Quote:
Originally Posted by jarble View Post

This^^^ We all know there are going to be faster cards in the pipeline. To circle back to the main point: while GDDR memory continues to make gains in speed, and therefore bandwidth, power usage has been getting out of control. I would be very interested in knowing the power draw of the new chips.
Yup, new GPU generations still seem to outpace CPUs.

I still don't get all the crap NV gets when AMD released a whole new series with a BIOS update. To me that's so much worse.
You must be new here.

The 3200 posts and join date seem to contradict that, but there is simply no other explanation for how you think the R9-500 series is the first to do that *cough390Xcough770cough* or that it's somehow an AMD exclusive.

There is also no other explanation for how you think a rebrand/refresh is worse than the things nVidia has been doing lately, besides potentially just not reading any news articles on OCN. 970, anyone?
Quote:
Originally Posted by Wishmaker View Post

Funny how all these new things appear when HBM was the tech of the future
biggrin.gif
. Yet again, AMD has no power here and NVIDIA is pushing for tech where they do not need to pay royalties to AMD.
AMD doesn't get royalties; it's a JEDEC standard. And nVidia does use HBM.

Either way, Samsung announced GDDR6 for 2018 back in August and Micron in February, soooooooooo this isn't actually news aside from Hynix joining the game, which we all knew they would.

GDDR6 is not impressive, btw; the only thing it has going for it over HBM2 is >16GB VRAM. It ties or loses in every other metric.
 
#22 ·
Quote:
Originally Posted by KyadCK View Post

You must be new here.

The 3200 posts and join date seem to contradict that, but there is simply no other explanation for how you think the R9-500 series is the first to do that *cough390Xcough770cough* or that it's somehow an AMD exclusive.

There is also no other explanation for how you think a rebrand/refresh is worse than the things nVidia has been doing lately, besides potentially just not reading any news articles on OCN. 970, anyone?
AMD doesn't get royalties; it's a JEDEC standard. And nVidia does use HBM.

Either way, Samsung announced GDDR6 for 2018 back in August and Micron in February, soooooooooo this isn't actually news aside from Hynix joining the game, which we all knew they would.

GDDR6 is not impressive, btw; the only thing it has going for it over HBM2 is >16GB VRAM. It ties or loses in every other metric.
Besides cost, in a segment devoid of massive bandwidth constraints.
 
#23 ·
Quote:
Originally Posted by Silent Scone View Post

Quote:
Originally Posted by KyadCK View Post

You must be new here.

The 3200 posts and join date seem to contradict that, but there is simply no other explanation for how you think the R9-500 series is the first to do that *cough390Xcough770cough* or that it's somehow an AMD exclusive.

There is also no other explanation for how you think a rebrand/refresh is worse than the things nVidia has been doing lately, besides potentially just not reading any news articles on OCN. 970, anyone?
AMD doesn't get royalties; it's a JEDEC standard. And nVidia does use HBM.

Either way, Samsung announced GDDR6 for 2018 back in August and Micron in February, soooooooooo this isn't actually news aside from Hynix joining the game, which we all knew they would.

GDDR6 is not impressive, btw; the only thing it has going for it over HBM2 is >16GB VRAM. It ties or loses in every other metric.
Besides cost, in a segment devoid of massive bandwidth constraints.
The same argument could be made for GDDR5(X) vs GDDR6 as well. They're charging you $700+; whether it has HBM or GDDR6 does not make a difference in your costs, only theirs.
 
#24 ·
Quote:
Originally Posted by Hardware Hoshi View Post

An architecture is only the rough layout of the components. There are different patterns, like you already described. Compare current Pascal GPUs: the workstation GP100 is HBM2 with a different memory controller, while the GP102 of the Titan Xp / 1080 Ti is GDDR5X.

To this point it is not clear exactly what memory configuration Volta will use. If the scheme of Pascal repeats, then the biggest Volta will have HBM2 and the rest either GDDR5X again or the new GDDR6 from the news. The config should depend on the class of the card.

All in all, this change will bring a nice jump. Coming from 9 -> 11 Gbps in the smaller performance classes, and from 11 -> 16 Gbps at the top, is godly.
Please do not expect such a godly jump within a short time span. The 12Gbps from Micron has yet to be utilised.

TBH we will see 12Gbps first before we start seeing 14- and 16-rated speeds. They will slowly step up while they roll out more and more powerful GPUs. HBM will most likely be Titan-exclusive, which could be based on GV100 or GV110.

A 16GB HBM2 Titan Volta and a 12GB G5X (12Gbps) GTX 2080 will be my best bet at this point.
 
#25 ·
Quote:
Originally Posted by Particle View Post

It very much still is. If there weren't manufacturing issues in the way, you'd be seeing HBM2 used on high-end graphics products right now. HBM1 isn't very attractive right now due to its low density. It's a first-generation technology in most senses.

It's very hard for traditional memory to compete when the HBM family provides:
-> Much higher bandwidth if you want it
-> Much lower power consumption
-> Much smaller PCB footprints

Its one primary downside is increased cost, and cost is something that is eroded by time and volume.
Ironically, costs are the main downfall of the tech. SK Hynix tried to trim costs by offering only 2 stacks in a downgraded-bandwidth configuration. The outcome is still too expensive because of the manufacturing limitations: yields go down because a failure somewhere in the package corrupts the whole product entirely. All the benefits are in vain if the final product is not ready for reliable mass production.

Thing is, HBM2 has the same limitations as HBM1. With only 2 stacks you cannot go higher than 4/8GB, or the costs increase drastically. Maybe it is just me, but I have the feeling HBM and similar stacked memory types are not ready for consumer products. The industry needs at least 2 years to fix all these drawbacks.

Quote:
Originally Posted by KyadCK View Post

Either way, Samsung announced GDDR6 for 2018 back in August and Micron in February, soooooooooo this isn't actually news aside from Hynix joining the game, which we all knew they would.
Samsung and SK Hynix are actually late, or at best within their old plans. Micron, on the other hand, is said to be ahead of schedule and can, according to rumours, deliver mass production by the end of 2017. Micron developed GDDR6, so they don't have to wait for JEDEC verification to start manufacturing. What puzzles me is why everyone is saying Nvidia would choose SK Hynix for their chips. It doesn't make much sense. The only reason I can think of is that Nvidia is ordering from all available sources to ensure availability. If the predictions for memory are correct, the newer GeForces will have 12-16GB of VRAM. Multiplied by millions of potential cards, that is quite the amount they need.

Quote:
Originally Posted by KyadCK View Post

GDDR6 is not impressive, btw; the only thing it has going for it over HBM2 is >16GB VRAM. It ties or loses in every other metric.
Don't think in such limiting perspectives. Everyone knows HBM is only for the upper echelon of cards; the majority of gamers will not see this tech in the next few years. GDDR, on the other hand, is the standard for all graphics devices. If the standard moves up, everyone benefits; in the short term it will trickle down even to the low end. Everyone will profit from this new memory, so it is not just the capacity, but also the availability. Cost and ease of assembly are other benefits. GDDR6 has 10% lower energy costs than GDDR5. That brings the memory types closer together.

The footprint and memory controller may be a factor, but with cards that consume 200W+ TDP, a smaller card is bad for cooling. If you ignore that, cards like the Nano will throttle like there's no tomorrow or need an AIO, which negates all the benefits with a big watercooling system again.

HBM2 is not ready for the big show yet; maybe in a few years. But Nvidia already said the energy costs are rising too dramatically with future stacking plans. It's better to have more than one horse in the race; the same applies to memory technologies.

Quote:
Originally Posted by Wishmaker View Post

Funny how all these new things appear when HBM was the tech of the future. Yet again, AMD has no power here, and NVIDIA is pushing for tech where they do not need to pay royalties to AMD.
HBM was not the only stacking technology. Nvidia and Micron/Samsung had contracts for Hybrid Memory Cube (HMC) before; the roadmaps have shown this since 2014, and it would mainly have benefited the professional line-up, which could have been improved massively by the network-stacking specialization. AMD and SK Hynix then developed HBM, and Nvidia switched so as not to be too exotic. Samsung would produce the HBM2, and the Micron contracts may have been switched to GDDR5X, and in future to GDDR6.

AMD never got any royalties and never will; it is a JEDEC standard now. Same with the production-priority hoax that is still floating in the heads of so many people. It was never true, and even if it were, it would have only influenced the production at SK Hynix, not everyone else. AMD is obsessed with HBM because it is the only way to help their bandwidth-starved architectures shine.
 
#26 ·
Quote:
Originally Posted by KyadCK View Post

You must be new here.

The 3200 posts and join date seem to contradict that, but there is simply no other explanation for how you think the R9-500 series is the first to do that *cough390Xcough770cough* or that it's somehow an AMD exclusive.

There is also no other explanation for how you think a rebrand/refresh is worse than the things nVidia has been doing lately, besides potentially just not reading any news articles on OCN. 970, anyone?
AMD doesn't get royalties; it's a JEDEC standard. And nVidia does use HBM.

Either way, Samsung announced GDDR6 for 2018 back in August and Micron in February, soooooooooo this isn't actually news aside from Hynix joining the game, which we all knew they would.

GDDR6 is not impressive, btw; the only thing it has going for it over HBM2 is >16GB VRAM. It ties or loses in every other metric.
Never did I say this was the first time. You must not know what year it is, or you must group every instance together as if it happened yesterday. Which company gets complained about changes from year to year. Right now it's constantly NV, but I'm not seeing the same reaction from fanboys in the other camp.
 