[OC3D/TF] Pascal GP100 Titan coming as early as April and the GP104 GTX 1080/1070 to launch in June - Page 7  

post #61 of 293
Quote:
Originally Posted by BiG StroOnZ View Post

Tell me, what looks better on the box to a consumer: 6GB of memory or 4GB of memory? Regardless of whether it says "HBM".

Yes, there was a 2GB GTX 960, but they quickly attempted to pull those units from stock and replace them with the 4GB models to compete with AMD. Or do you not remember that news article:

http://www.kitguru.net/components/graphic-cards/matthew-wilson/nvidia-might-phase-out-2gb-gtx-960-in-favour-of-4gb-version/

the 2GB variant price points are still quite attractive, and most of those who use 1080p monitors can still go with the 2GB variant without too much of an issue.

while games are starting to use more than 4GB, that doesn't mean 4GB is already insufficient; even the GTX970 doesn't have its full 4GB, yet it's still quite a popular card.
Edited by epic1337 - 1/26/16 at 1:27am
post #62 of 293
Quote:
Originally Posted by BiG StroOnZ View Post

May I ask, how did the 900 series go? What makes you think they would suddenly go back to an old formula? Why would the 1080 be the big die, when that wasn't the case with the 980 on the 900 series? They figured out a new formula to maximize profits, they aren't suddenly going to change that.

It's very illogical if you look at the recent history of how their series of cards have been released.

It's most likely going to be:

GTX 1050 - GP106 (or GP107) - GDDR5 4GB
GTX 1060 - GP106 - GDDR5 4GB
GTX 1070 - GP104 - GDDR5X 6GB
GTX 1080 - GP104 - GDDR5X 6GB
GTX 1080 Ti - GP100 - HBM2 8GB
GTX Titan Y - GP100 - HBM2 16GB

Their formula worked because AMD never breached 400mm2 in die size before Hawaii. With Fiji at ~600mm2, nvidia can't afford the same luxury as before because AMD are not reluctant to go big die this time around.

It would require collusion from both sides to repeat the formula. Maximizing profits would be a good reason though.
    
post #63 of 293
Thread Starter 
Quote:
Originally Posted by epic1337 View Post

the 2GB variant price points are still quite attractive, and most of those who use 1080p monitors can still go with the 2GB variant without too much of an issue.

while games are starting to require more than 4GB, that doesn't mean 4GB is already insufficient; even the GTX970 doesn't have its full 4GB, yet it's still quite a popular card.

2GB will be useless for 1080p in 2016-2017. 4GB is quickly becoming the standard. 3GB is about the bare minimum you can have @ 1080p today, and even then it is quickly becoming a bottleneck.

If games are starting to require up to 4GB and sometimes even more, then obviously that is the bare minimum a company like NVIDIA (or even AMD) wants to aim for. I'm not saying 4GB is insufficient, just that for the next generation it will be the lower-end number they aim for with lower-end cards like the 1050 and 1060 ("optimized for 1080p"). I believe the 1070 and 1080 will need 6GB ("optimized for 1440p"), and anything above that level will need more from a marketing standpoint, like the 1080 Ti and Titan Y ("optimized for 4K and VR").

Quote:
Originally Posted by gamervivek View Post

Their formula worked because AMD never breached 400mm2 in die size before Hawaii. With Fiji at ~600mm2, nvidia can't afford the same luxury as before because AMD are not reluctant to go big die this time around.

It would require collusion from both sides to repeat the formula. Maximizing profits would be a good reason though.


I'm not sure both need to be colluding to repeat the formula; I do see both companies, AMD and NVIDIA, wanting to maximize profits again, so they'll do whatever is necessary. People sometimes forget these are companies we are dealing with. Not saying they don't put out great products, but we cannot forget about the minds that fuel these companies and what their intentions are.
post #64 of 293
Quote:
Originally Posted by BiG StroOnZ View Post

2GB will be useless for 1080p in 2016-2017. 4GB is quickly becoming the standard. 3GB is about the bare minimum you can have @ 1080p today, and even then it is quickly becoming a bottleneck.

If games are starting to require up to 4GB and sometimes even more, then obviously that is the bare minimum a company like NVIDIA (or even AMD) wants to aim for. I'm not saying 4GB is insufficient, just that for the next generation it will be the lower-end number they aim for with lower-end cards like the 1050 and 1060 ("optimized for 1080p"). I believe the 1070 and 1080 will need 6GB ("optimized for 1440p"), and anything above that level will need more from a marketing standpoint, like the 1080 Ti and Titan Y ("optimized for 4K and VR").

i'm not arguing with you about whether more VRAM is better, rather i'm saying that 4GB of VRAM isn't too anemic even in newer games that can utilize upwards of 8GB.
and not everyone plans on upgrading their old cards; some people will stick with their GTX700/GTX900/HD7000/R9-200 series cards for a few more years.
rendering these "low VRAM" cards unusable in new games will only make game developers lose buyers.


plus, 3GB of VRAM for a mainstream card, where 1080p is still the norm, lets them keep the price point tighter.
e.g. if going with 3GB of VRAM lets them shave off an additional $20 without affecting their margin, then that would be more profitable than insisting on going with 4GB.

also, i did mention that the bus width didn't match the VRAM. if 6GB were to be the norm, then what would the bus width be? 192bit? 384bit? i doubt they'd go for the latter.
Edited by epic1337 - 1/26/16 at 1:30am
post #65 of 293
Neither 2GB nor 4GB on the 960 makes sense, since the 960 makes no sense. Back in late 2013 I remember 7950s selling for what 960s cost now.
post #66 of 293
Thread Starter 
Quote:
Originally Posted by epic1337 View Post

i'm not arguing with you about whether more VRAM is better, rather i'm saying that 4GB of VRAM isn't too anemic even in newer games that can utilize upwards of 8GB.
and not everyone plans on upgrading their old cards; some people will stick with their GTX700/GTX900/HD7000/R9-200 series cards for a few more years.
rendering these "low VRAM" cards unusable in new games will only make game developers lose buyers.

also, i did mention that the bus width didn't match the VRAM. if 6GB were to be the norm, then what would the bus width be? 192bit? 384bit? i doubt they'd go for the latter.

It's not about the older users, it's about the newer users, who make the company MONEY. Who does the company care about: the guy who upgrades his graphics card every year, or the guy who holds onto it for four years? Obviously they care about intriguing new buyers and even enticing people who weren't planning to upgrade to actually upgrade.

The way I see it, if the higher-tier cards are using HBM2 they are going to have some ridiculous bus width like we saw on Fiji (4096-bit); therefore, if the Titan Y and 1080 Ti have HBM2 they are going to look to be on a whole other level compared to the cards below them. Meaning even if the 1070 and 1080 have a 384-bit bus width with 6GB of GDDR5X, it still will be much smaller in comparison to the 1080 Ti and Titan Y with HBM2, while the lower-end cards with GDDR5 can still have a normal 256-bit bus width.
Edited by BiG StroOnZ - 1/26/16 at 1:37am
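
To put rough numbers on the "whole other level" point: peak memory bandwidth is just bus width times per-pin data rate divided by 8. The back-of-the-envelope sketch below uses Fiji's and the 980/980 Ti's shipping figures; the HBM2 and GDDR5X rows are assumed configurations for illustration, not known Pascal specs.

```python
# Back-of-the-envelope peak bandwidth: bus width (bits) * per-pin rate (Gbps) / 8.
# Fiji / GTX 980 / GTX 980 Ti numbers match shipping cards; the HBM2 and GDDR5X
# entries are assumptions about possible Pascal configurations.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

configs = [
    ("Fiji HBM1, 4096-bit @ 1.0 Gbps",     4096,  1.0),  # 512 GB/s (Fury X)
    ("Assumed HBM2, 4096-bit @ 2.0 Gbps",  4096,  2.0),  # ~1 TB/s
    ("Assumed GDDR5X, 384-bit @ 10 Gbps",   384, 10.0),  # 480 GB/s
    ("GTX 980 Ti GDDR5, 384-bit @ 7 Gbps",  384,  7.0),  # 336 GB/s
    ("GTX 980 GDDR5, 256-bit @ 7 Gbps",     256,  7.0),  # 224 GB/s
]

for name, bus, rate in configs:
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
```

Even a 384-bit GDDR5X bus would sit at roughly half of a four-stack HBM2 setup, which is the gap being described between a 1070/1080 and a 1080 Ti/Titan.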
post #67 of 293
do you know why these people keep their cards for so long?

two issues are at hand:
1) newer cards aren't fast enough
2) newer cards are too expensive

the number 2 reason is one of the most widespread reasons among the lower-tier card users.
so keeping the price points as low as possible, with maximum margins in mind, will let them pull in the old card users to actually upgrade.

but we're talking about GP104 here; the 104 dies have always had a narrower bus width than the 100 dies.
GP100 will without a doubt have HBM, so that excludes it from the bus width issues at hand.
whereas GP104 having 384bit, when Nvidia has been shrinking the bus width ever since fermi, is highly unlikely.

GM204 had 256bit
GK104 had 256bit
GF104 had 256bit
GF114 had 256bit

there's no point in increasing the bus width when GDDR5X already provides a massive increase in bandwidth.
rather, it would be much more logical to "shrink" the bus width to save power and die space, but 192bit may well be "too narrow" for a card that can rival the GTX980Ti.
Edited by epic1337 - 1/26/16 at 1:47am
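
The same arithmetic illustrates the "no point increasing the bus width" argument: GDDR5X's higher per-pin rate on a 256-bit bus already lands around a 384-bit GDDR5 bus, while 192-bit falls well short. In the sketch below, the 10-12 Gbps figures are taken from Micron's announced GDDR5X range; which rate a GP104 card would actually use is an assumption.

```python
# Same peak-bandwidth arithmetic applied to the "shrink the bus" scenario.
# The GDDR5X per-pin rates (10-12 Gbps) are assumed from Micron's announced range.

def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(384,  7))   # 336.0 GB/s - 384-bit GDDR5 (GTX 980 Ti class)
print(bandwidth_gbs(256, 10))   # 320.0 GB/s - 256-bit GDDR5X, low end of the range
print(bandwidth_gbs(256, 12))   # 384.0 GB/s - 256-bit GDDR5X, higher bin
print(bandwidth_gbs(192, 10))   # 240.0 GB/s - 192-bit GDDR5X, the "too narrow" case
```

So a 256-bit GDDR5X card could roughly match or exceed 980 Ti bandwidth without widening the bus, while 192-bit likely could not.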
post #68 of 293
Thread Starter 
Quote:
Originally Posted by epic1337 View Post

do you know why these people keep their cards for so long?

two issues are at hand:
1) newer cards aren't fast enough
2) newer cards are too expensive

the number 2 reason is one of the biggest reasons among the lower-tier card users.
so keeping the price points as low as possible, with maximum margins in mind, will let them pull in the old card users to actually upgrade.

but we're talking about GP104 here; the 104 dies have always had a narrower bus width than the 100 dies.
GP100 will without a doubt have HBM, so that excludes it from the bus width issues at hand.
whereas GP104 having 384bit, when Nvidia has been shrinking the bus width ever since fermi, is highly unlikely.

GM204 had 256bit
GK104 had 256bit
GF104 had 256bit
GF114 had 256bit

It doesn't really matter if people want to upgrade or not; there comes a point in modern AAA gaming when you have to actually upgrade. Price points will be similar to what we have seen on the 900 series; they aren't going to be cheaper, that is a given, and if anything they will be more expensive.

GM204 was the 980
GK104 was the 680

GF104 was a GTX 460, not a GTX 480 (which was GF100)
GF114 was again a GTX 460 (and 560).

Nevertheless, none of this really matters; they are going to try to get more than 4GB of memory on the 1070 and 1080. It will be a requirement for marketing, whether it comes directly from the bus width or some other means - it will happen. Just take your time and look at how video memory sizes have increased on NVIDIA's side for the x70 and x80 GPUs as time progressed. After you do that, you can come to the realization that the 1070 and 1080 will indeed have more than 4GB of memory.
post #69 of 293
Quote:
Originally Posted by BiG StroOnZ View Post

Nevertheless, none of this really matters; they are going to try to get more than 4GB of memory on the 1070 and 1080. It will be a requirement for marketing, whether it comes directly from the bus width or some other means - it will happen. Just take your time and look at how video memory sizes have increased on NVIDIA's side for the x70 and x80 GPUs as time progressed. After you do that, you can come to the realization that the 1070 and 1080 will indeed have more than 4GB of memory.

GTX670 = 4GB
GTX770 = 4GB
GTX970 = 4GB (3.5+0.5)

GTX680 = 4GB
GTX780 = 3GB GK110* (6GB in later revisions)
GTX980 = 4GB

aside from the fermi cards and the 780 being a full die, most of them are 104 variants with 4GB of VRAM; i don't see what you're seeing.
Edited by epic1337 - 1/26/16 at 2:03am
post #70 of 293
This is fantastic news, and it makes sense that nvidia beats AMD to market by 2-3 months since they have had time to work on Pascal for a long time.

No more waiting for high end until a year after the GP104 chips.
Probably buying this GPU the month it hits the market.