Originally Posted by 2010rig
GK104 = Mid-Range Chip.
GK110 = High End Chip.
Whatever the reasons NVIDIA didn't release a 680 based on GK110, that doesn't negate the fact that the GK104 is a mid-range part in NVIDIA's lineup. Is that really so hard to accept?
I doubt NVIDIA wanted a Fermi 2.0 on their hands. Why rush GK110 when GK104 was good enough?
GK104 was pushed to near its absolute limits to deliver high-end performance. It didn't overtake the 7970, but it was "good enough" to compete, and it turned out to be a much more profitable strategy: NVIDIA sold a part they would normally sell for $300 for $500.
GK110 is already being sold as Teslas for $3000+, and soon as Titan. Not that long ago, "some people" were claiming that GK110 was a myth, that it didn't exist, and that GK104 was all NVIDIA had in their arsenal.
All I've been saying this whole time is that it's damn obvious from the 680's specs (GK104, 256-bit bus, crippled compute, etc.) that this was NOT NVIDIA's high-end chip. YES, it was sold as "high end" for "high-end prices" thanks to its performance, but that doesn't make it NVIDIA's high-end CHIP.
I knew GK110 would one day come out with its 384-bit bus and a BIG ~550 mm² die, and be a compute beast in pure NVIDIA high-end tradition.
Originally Posted by DzillaXx
The 680's die really wasn't much smaller; add in a bigger memory bus and all the extra compute hardware and you could easily see the 680 get near, or even larger than, the 7970's die. So are you gonna call the 7970 mid-range as well?
Plain and simple, GK100 was not ready; they had to turn it into GK110 for it to be ready anyway. Therefore GK104 was the highest-end chip of the GK10x series, AKA the highest-end chip of that generation.
Just because a company states they have been making a 550 mm² chip doesn't mean anything if it was never brought to a state where you could manufacture it. Really, there is nothing stopping AMD from releasing a 550 mm² chip to counter, other than AMD not having enough money to waste on a niche card.
The difference is that nVidia has a bigger, higher-end GPU than GK104 in GK110. AMD doesn't have a bigger, higher-end GPU than Tahiti, apart from engineering samples of the HD 8k and possibly even very early HD 9k samples; Tahiti is the fastest GPU AMD has out of prototyping, while nVidia has GK110 out of prototyping but is only using it now. For the most part GK110 is just that: GK104 with more shaders, a larger memory bus, and compute performance that isn't gimped. GK100 was probably ready, but if you remember the 28nm delays... well, GK100 wouldn't have been pretty with those; it would probably have had sub-20% yields at first.
AMD's architecture is generally more efficient at scaling up than nVidia's, at least with the VLIW series, which is why you had much smaller AMD cards going up against massive nVidia GPUs and not doing too badly.
Originally Posted by GingerJohn
Really, it was a stroke of genius from nVidia; they certainly made the best of a potentially bad situation.
nVidia is one of the best companies at handling bad times; they handled the HD 4k series extremely well, and that was the Conroe of GPUs (much faster than the previous, just-meh cards, while being ultra efficient for its time).
Originally Posted by zGunBLADEz
32 inch? At that size?? 4K at 32 inches?
Yeah, sure, I'll be there lol
Really, people, what are we smoking around here???
I don't use monitors/HDTVs that small anymore, thank you very much; 55" as a minimum...
I'm not a follower of the 120Hz bullcrap either, btw.
Uh, ever heard of PPI? It's why my 21.5" monitor that cost me $200 a few years back looks a hell of a lot better than my much more expensive 55" 1080p TV when I'm not sitting 5m away from it.
As for 120Hz, it's undeniably smoother... you know, like TV shows shot at 48 or 60 fps instead of 24?
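If you want the actual numbers behind both points, here's a quick back-of-the-envelope sketch (plain Python, nothing assumed beyond the screen sizes in my post): PPI is just the diagonal pixel count divided by the diagonal size in inches, and the refresh-rate difference is clearest as frame-to-frame time.

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Same 1080p panel resolution, very different pixel density:
print(f'21.5" 1080p monitor: {ppi(1920, 1080, 21.5):.0f} PPI')  # ~102 PPI
print(f'55" 1080p TV: {ppi(1920, 1080, 55):.0f} PPI')           # ~40 PPI

# The 120Hz side of it: time between frames, in milliseconds.
for hz in (24, 60, 120):
    print(f"{hz:>3} Hz -> {1000 / hz:.1f} ms per frame")
```

Two and a half times the pixel density at the same resolution, and half the frame time at 120Hz versus 60Hz.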
Originally Posted by Rubers
Originally Posted by raghu78
GK104 was Nvidia's high-end GeForce chip for 2012. For 2013 the high-end chip is GK110. Simple as that; nothing else matters. Nvidia did not and could not release a GK100. GK110 taped out in Jan 2012 and, as is the usual case with semiconductor chips, took 9-10 months from tapeout to volume production.
It was launched in late Oct to mid Nov for the Tesla market, because that's where Nvidia has its highest margins. Now, after 4 months, they are going to release the GeForce version in late Feb. When AMD releases the HD 8970 we can compare perf, perf/watt and perf/mm² between these two products.
You can't use SemiAccurate to back up your posts. It's like quoting the National Enquirer.
SemiAccurate is generally correct with its rumours; there have been a few fudge-ups, but... well, who told us first that Fermi wasn't launching in November 2009 but in March 2010? They're generally correct about nVidia.
Originally Posted by EfemaN
I tried searching, but since the current OCN post search tool is awful, maybe someone could give me an answer. Will the 384-bit bus be a limiting factor if the card does indeed have 6GB of VRAM? I'm not familiar with the relationship there.
Nope, a 384-bit bus is quite fast with GDDR5. Bus width and capacity are mostly separate concerns: the bus width (together with the memory clock) sets bandwidth, while capacity just has to be a multiple that fits the bus. 6GB is probably overkill now, but I can see 3GB being a limitation in ~2-3 years.
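To put a rough number on "quite fast", here's a sketch of the standard peak-bandwidth calculation, assuming ~6 Gbps effective GDDR5 (a typical figure for this generation, not a confirmed Titan spec):

```python
def peak_bandwidth_gb_s(bus_width_bits, effective_gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus width in bits times the
    effective per-pin data rate in Gbps, divided by 8 bits per byte."""
    return bus_width_bits * effective_gbps_per_pin / 8

# Assuming ~6 Gbps effective GDDR5:
print(peak_bandwidth_gb_s(384, 6.0))  # 288.0 GB/s -- 384-bit bus
print(peak_bandwidth_gb_s(256, 6.0))  # 192.0 GB/s -- 256-bit bus, GTX 680-style
```

Note that 6GB and 3GB on the same 384-bit bus have identical peak bandwidth; the extra capacity only matters once games actually use that much VRAM.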
Originally Posted by rcfc89
For me, reliability and stability are two major factors in anything I buy. Those two are lost when overclocking most GPUs. All for an extra 5-10 fps.
That's why you don't stay at the max OC your card can reach. For example, I just OCed my HD 7950 until I stopped dipping below 30 fps anywhere in Skyrim (1010MHz core / 1337MHz memory), and I did a similar thing on my GTX 470: while I could hit ~850MHz stable, I mainly kept it at 830MHz or so, because it meant that if the chip does degrade a bit, 830MHz is still stable and reliable.
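Put numbers on it and the trade-off is obvious. A minimal sketch, assuming performance scales roughly linearly with core clock (the GTX 470's stock core clock is ~607MHz):

```python
def oc_tradeoff(max_stable_mhz, daily_mhz, stock_mhz):
    """Percent performance given up vs. the max OC, and percent gained
    over stock, assuming performance scales roughly linearly with clock."""
    given_up = (max_stable_mhz - daily_mhz) / max_stable_mhz * 100
    gained = (daily_mhz - stock_mhz) / stock_mhz * 100
    return given_up, gained

# GTX 470 numbers from my post; 607MHz is the stock core clock.
given_up, gained = oc_tradeoff(850, 830, 607)
print(f"Backing off to 830MHz costs ~{given_up:.1f}% vs. the max OC")  # ~2.4%
print(f"...while still gaining ~{gained:.1f}% over stock")             # ~36.7%
```

Giving up ~2% of the max overclock buys you a stability margin, and you still keep almost all of the gain over stock.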