SK Hynix introduced the world's fastest 2Znm 8Gb (Gigabit) GDDR6 (Graphics DDR6) DRAM. The product operates with an I/O data rate of 16Gbps (Gigabits per second) per pin, the industry's fastest. Paired with a forthcoming high-end graphics card with a 384-bit I/O bus, this DRAM processes up to 768GB (Gigabytes) of graphics data per second.
SK Hynix has been planning to mass produce the product for a client releasing a high-end graphics card, equipped with high-performance GDDR6 DRAM, by early 2018.
Yummy.
Yep, hopefully no delays. Sounds great. The GTX 2080 is said to come out before the end of the year, so hopefully the 2080 Ti is early 2018 with GDDR6.
There is no comparison between HBM and GDDR5X or GDDR6. If we used GDDR6 on, let's say, a GTX 1080, we would get 512 GB/s of bandwidth, just as much as the Radeon R9 Fury X.
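For reference, peak bandwidth is just the per-pin data rate times the bus width; here is a quick Python sanity check of the numbers in this thread, assuming the GTX 1080's well-known 256-bit bus and the 384-bit bus from the news post:

```python
def peak_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate (Gbps) * bus width (bits) / 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

# 16 Gbps GDDR6 on a 256-bit bus (GTX 1080 class) matches Fury X's HBM1 figure
print(peak_bandwidth_gbs(16, 256))  # 512.0
# ...and on a 384-bit bus gives the 768 GB/s quoted in the news post
print(peak_bandwidth_gbs(16, 384))  # 768.0
```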
That's why Nvidia is using it too. "AMD is not involved in collecting any royalties for HBM," said Iain Bristow, a spokesman for AMD. "We are actively encouraging widespread adoption of all HBM associated technology on [Radeon R9] Fury products and there is no IP licensing associated."
You do realise this happens with every generation? The only way to avoid it is not buying graphics cards, or any PC hardware at all.
This^^^ We all know there are going to be faster cards in the pipeline. To circle back to the main point: while GDDR memory continues to make gains in speed, and therefore bandwidth, power usage has been getting out of control. I would be very interested in knowing the power draw of the new chips.
Pretty sure that's the point. Keep those consumers chasing the end of the rainbow = $$$.
What rainbow? As far as I can see, I have a very good card for the money I paid. What is your solution here? Spend millions on R&D, produce video cards, and give them away for free?
Yup, new GPU generations still seem to outpace CPUs.
Originally Posted by jarble
Yeah, that is so true. If AMD is such a good guy, this should be available free of charge; it's just a new BIOS, really.
P100T doesn't have a frame buffer, and no matter what nVidia has said, it's the CPUs that make the images from the component bits in the DGX-1. Additionally, the P100T contains further intercommunication channels and other features that are uselessly expensive and wouldn't be functional in any way in a graphics card. There is not, and won't ever be, a "GP100" chip no matter how many times nVidia or other webdiots type it that way. There may be a GP101 though.
An architecture is only the rough layout of the components. There are different configurations, like you already described. Compare current Pascal GPUs: the workstation GP100 uses HBM2 with a different memory controller, while the GP102 of the Titan Xp / 1080 Ti uses GDDR5X.
Nah, HBM is a JEDEC standard. AMD helped create it, and I believe SK Hynix as well, but it's a standard. They got priority shipments on HBM1 for a bit was all.
Probably not. It's different enough that the IMC would need a redesign.
It very much still is. If there weren't manufacturing issues in the way, you'd be seeing HBM2 being used on high end graphics products right now. HBM1 isn't very attractive right now due to its low density. It's a first generation technology in most senses.
You must be new here.
Originally Posted by mmonnin
I still don't get all the crap NV gets when AMD released a whole new series with a BIOS update. To me that's so much worse.
AMD doesn't get royalties, it's a JEDEC standard. And nVidia does use HBM.
Besides cost, in a segment devoid of massive bandwidth constraints.
Originally Posted by KyadCK
You must be new here.
The 3200 posts and join date seem to contradict that, but there is simply no other explanation for how you think the R9 500 series is the first to do that *cough390Xcough770cough* or that it's somehow an AMD exclusive.
There is also no other explanation for how you think a rebrand/refresh is worse than the things nVidia has been doing lately, besides potentially just not reading any news articles on OCN. 970, anyone?
AMD doesn't get royalties, it's a JEDEC standard. And nVidia does use HBM.
Either way, Samsung announced GDDR6 for 2018 back in August and Micron in February, soooooooooo this isn't actually news aside from Hynix joining the game, which we all knew they would.
GDDR6 is not impressive, btw; the only thing it has going for it over HBM2 is >16GB VRAM. It ties or loses on every other metric.
Your same argument could be made for GDDR5(X) vs GDDR6 as well. They're charging you $700+; whether it has HBM or GDDR6 makes no difference to your costs, only theirs.
Originally Posted by Silent Scone
Please do not expect such a godly jump within a short time span. The 12Gbps chips from Micron have yet to be utilised.
Originally Posted by Hardware Hoshi
To this point it is not clear what memory configuration Volta will use exactly. If the scheme of Pascal repeats, then the biggest Volta will have HBM2 and the rest either GDDR5X again or the new GDDR6 from this news. The config should depend on the class of the card.
All in all this change will bring a nice jump. Coming from 9 -> 11 Gbps in the smaller performance classes is nice, and going from 11 -> 16 Gbps is godly.
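To put those pin-rate jumps in perspective, here is a small Python sketch; the 256-bit and 384-bit bus widths are my own assumed examples for mid-range and high-end cards, not figures from the post:

```python
# Peak bandwidth = per-pin rate (Gbps) * bus width (bits) / 8 bits per byte.
def peak_gbs(pin_rate_gbps, bus_bits):
    return pin_rate_gbps * bus_bits / 8

# (old rate, new rate, assumed bus width): smaller class and high end
for old, new, bus in [(9, 11, 256), (11, 16, 384)]:
    gain = (new - old) / old * 100
    print(f"{old} -> {new} Gbps on {bus}-bit: "
          f"{peak_gbs(old, bus):.0f} -> {peak_gbs(new, bus):.0f} GB/s (+{gain:.0f}%)")
```

So the 9 -> 11 Gbps step is roughly a 22% bandwidth gain, while 11 -> 16 Gbps is about 45%.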
Ironically, cost is the main downfall of the tech. SK Hynix tried to trim costs by offering only 2 stacks in a downgraded bandwidth configuration. The outcome is still too expensive because of the manufacturing limitations. Yields go down because a failure anywhere in the package corrupts the whole product entirely. All the benefits are in vain if the final product is not ready for reliable mass production.
Originally Posted by Particle
It's very hard for traditional memory to compete when the HBM family provides:
-> Much higher bandwidth if you want it
-> Much lower power consumption
-> Much smaller PCB footprints
Its one primary downside is increased cost, and cost is something that is eroded by time and volume.
Samsung and SK Hynix are actually late, or at best within their old plans. Micron, on the other hand, is said to be ahead of schedule and, according to rumours, can deliver mass production by the end of 2017. Micron developed GDDR6 itself, so they don't have to wait for JEDEC verification to start manufacturing.
What puzzles me is why everyone is saying Nvidia would choose SK Hynix for their chips. It doesn't make much sense. The only reason I can think of is that Nvidia is ordering from all available sources to ensure availability. If the predictions for memory are correct, the newer GeForces will have 12-16GB of VRAM. Multiplied by millions of potential cards, that is quite the amount they need.
Don't think in such limiting perspectives. Everyone knows HBM is only for the upper echelon of cards; the majority of gamers will never see this tech in the next few years. GDDR, on the other hand, is the standard for all graphics devices. If the standard moves up, everyone benefits: in the short term it will trickle down even to the low end. Everyone will profit from this new memory, so it is not just the capacity, but also the availability. Cost and ease of assembly are other benefits. GDDR6 has 10% lower energy cost than GDDR5. That brings the memory types closer together.
HBM was not the only stacking technology. Nvidia and Micron / Samsung had contracts for the Hybrid Memory Cube (HMC) before. The roadmap had shown this since 2014, and it would mainly benefit the professional line-up, which could be improved massively by the network-stacking specialization. AMD and SK Hynix then developed HBM, and Nvidia switched so as not to be too exotic. Samsung would produce the HBM2, and the Micron contracts may have been switched to GDDR5X and, in future, GDDR6.
Never did I say this was the first time. You must not know what year it is, or you must group every instance together as if it happened yesterday. Which company gets complained about changes from year to year. Right now it's constantly NV, but I'm not seeing the same reaction from fanboys in the other camp.
Originally Posted by KyadCK