Originally Posted by 12Cores
It's going to need around a ~30K graphics score to run most modern games at 4K@60fps; regardless, the performance will be otherworldly for the few people who can afford it.
The Bugatti Veyron of GPUs, it will be. (Yoda voice)
That's not gonna be a problem. It's at 26,660 at stock GPU Boost clocks, which is likely around ~1,550MHz or so (maybe 1,575-1,590MHz at most, and that's pushing it honestly; assuming no manual OC of course, which looks likely given the paper specs compared to the 1080).
Factor in, too, that every big-chip TITAN/x80 Ti card has always overclocked to within 100MHz of the x80 card, if not closer (i.e. a reference GTX 980 on average hit maybe ~1,550MHz max; most reference 980 Tis could hit 1,500MHz no problem; and even the Maxwell TITAN X, with 256 more cores, could easily do 1,450MHz, and 1,475-1,500MHz wasn't that rare either, especially with some voltage and/or BIOS tweaking).
I've also spoken to some people in touch with Nvidia, and they were told 1,900MHz will basically be the "easy peasy OC" that everyone can hit. So we're most likely looking at ~1,950-2,000MHz for a max OC, which makes perfect sense when you compare to the 1080 and its common max OC of about 2,100MHz (100MHz higher, just like when comparing the 980 to the 980 Ti/Maxwell TITAN X).
If you run the numbers: 2,000MHz is 30.63% more than the 1,531MHz GPU Boost speed. So we're likely looking at something like a ~24% performance gain from overclocking, at least around 22% I'd say. (This is based on the fact that the 2,100MHz OC on the 1080 is a ~21.2% increase over its 1,733MHz GPU Boost speed, and we typically saw around ~15% gains on the 1080 from that kind of OC; scaling that up is a reasonable estimate for a ~30% clock-speed gain.) So 26,660 points in Firestrike multiplied by 1.24 = 33,058.4 points, which is significantly more than 30,000.
Just to ensure nobody scoffs at my math: even if a 30.63% gain in clock speed only gives a paltry 20% real-world gain (very unlikely, as big-die chips almost always get a significantly better raw performance gain from overclocking, since the lower starting frequency makes each MHz a bigger percentage of the whole), we get 26,660 points × 1.20 = 31,992, which is still ~2K more than 30,000.
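For anyone who wants to sanity-check the arithmetic, here's a quick back-of-the-envelope script using the numbers above. The clocks and scaling factors are the thread's estimates, not measured data:

```python
# Back-of-the-envelope check of the overclocking estimates above.
# All clocks and scores are the forum's figures, not benchmarks I ran.

def pct_gain(base_mhz, oc_mhz):
    """Clock-speed gain of an overclock over the stock boost clock, in percent."""
    return (oc_mhz / base_mhz - 1) * 100

stock_score = 26_660  # stock Firestrike graphics score quoted above

titan_clock_gain = pct_gain(1_531, 2_000)    # ~30.6% clock headroom
gtx1080_clock_gain = pct_gain(1_733, 2_100)  # ~21.2% clock headroom

# Optimistic (~24%) and conservative (20%) real-world scaling assumptions:
optimistic = stock_score * 1.24
conservative = stock_score * 1.20

print(f"Titan X clock gain:  {titan_clock_gain:.2f}%")   # ~30.63%
print(f"GTX 1080 clock gain: {gtx1080_clock_gain:.2f}%") # ~21.18%
print(f"Optimistic score:    {optimistic:.1f}")          # 33058.4
print(f"Conservative score:  {conservative:.1f}")        # 31992.0
```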
We can also use comparisons such as the fact that a single Founders Edition GTX 1080 tends to average roughly ~42fps in maxed-out The Witcher 3 with HairWorks etc. all on. If we factor in a 30% gain over the stock 1080 (which makes sense given we're seeing a 29.7% gain in 4K Firestrike Ultra) and then another 20% from the overclock (which, again, is being VERY conservative, as it's likely much closer to a 25% gain from the OC), we get 42fps × 1.3 × 1.2 ≈ 65fps (even just adding the percentages, 42fps × 1.5 = 63fps). And that's at full native 4K resolution on basically THE most demanding game in existence in terms of pure GPU load! This means basically EVERY other game will be getting even higher fps than this. So I have no doubt this card can play any game at 4K 60fps Ultra settings.
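The same kind of sanity check works for the fps estimate. This is a sketch assuming the thread's numbers (42fps stock-1080 baseline, ~30% architectural gain, 20% OC gain); note that compounding the two gains multiplicatively gives a bit more than simply adding the percentages:

```python
# Sketch of the Witcher 3 4K estimate: a stock-1080 baseline scaled by the
# assumed architectural gain and OC gain. All numbers are the thread's guesses.

baseline_1080_fps = 42  # maxed-out Witcher 3 at 4K on a Founders Edition 1080
arch_gain = 1.30        # ~30% faster than a stock 1080, at stock clocks
oc_gain = 1.20          # conservative real-world gain from overclocking

compounded = baseline_1080_fps * arch_gain * oc_gain  # gains multiply
additive = baseline_1080_fps * (1 + 0.30 + 0.20)      # simple addition

print(f"{compounded:.1f} fps")  # 65.5 fps if the gains compound
print(f"{additive:.1f} fps")    # 63.0 fps just adding the percentages
```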
Originally Posted by GenoOCAU
Looks like I'm holding onto my 980 Ti. It does 24K graphics in Firestrike no problem.
I'm skeptical of what the TX can do under water, given everything is so locked down with GPU Boost 3.0.
No offense, but I don't really believe that. A 980 Ti at stock is ~4-5% slower than a stock 1070, and even fully overclocked to ~1,550MHz a 980 Ti is still ~10-12% slower than a 1080 at its ~1,800MHz stock GPU Boost speed. Perhaps you just got one of those not-so-uncommon oddball scores. (I once had my old i5 3570K + GTX 660 system score just as high as a system with an i7 4820K and a Gigabyte G1 GTX 980 on the same Firestrike version and update number, etc.; it was crazy. I've also seen people with an R9 280X end up with 20K scores for no apparent reason, presumably due to wonky driver detection and whatnot.)
Originally Posted by zealord
My own point of view regarding the new Titan X and Nvidia's pricing structure since March 2012. I try to be objective and not call people stupid for what they might buy. If you are not interested in the opinion of a guy who usually likes to buy a $500 card and be settled, then stop reading here, thanks.
Back before the 7970, GTX 680 and vanilla Titan, we used to get really good cards and big chips for around $500. Yes, there were a few exceptions, but mostly you could get a bangin' GPU for around $500. Since then, money hasn't changed much in value. People don't even need to start with "inflation".
R&D costs, on the other hand, have probably increased. Generally, prices always increase; we are aware of that. No one would expect to get a Titan for $350-500.
The reality is the Titan cards are not really new products. They are basically a new naming structure to make people associate a higher price with a more premium product, which no card so far has truly deserved. I could understand it if they had come out in May 2016 with a full-fat 3,840-CUDA-core Pascal Titan with 16+GB of HBM2 memory that blows everything out of the water at like $1,000: completely ahead of its time, bringing the absolute best without any doubt. A card where you know "this is the best for 1-2 years", and not a card where you say "oh, this cut-down thing is the best for a couple of months until the full fat boy comes out to play".
With Nvidia now you pay a premium on all products. Even on your monitor you pay a premium. Same for your HB SLI bridge. A GTX 1080 is not what X80 cards prior to 2012 used to be. It's basically a $700 GTX 460.
Objectively speaking, and going by the specifications, the new Titan X Pascal ($1,200) sits in basically the same spot in its generation's GPU hierarchy as the GTX 570 ($329) did back in 2011. Both are cut chips. The new Titan X Pascal will likely be bettered soon.
Yes, I am fully aware that the competition is different and prices increase. Nvidia would be stupid to sell a Titan X Pascal at $329; they might even make a loss, though I doubt that. The profit at that price would be too small for them.
What it means for me personally: I am sticking with my 2500K + R9 290X for another year. I am not giving my money to the PC market. I feel alienated from PC gaming. The games are mostly badly optimized, and 90% of the games I play are too old or not graphically intense enough to even benefit from a $1,200 GPU. Of course I play Witcher 3, ROTR and other demanding games as well, but they are few and far between. I want to buy a new rig (I've said this numerous times before; some people are probably tired of hearing it). In 2014 I said "yeah, 2015 is gonna be the year: 20nm GPUs with HBM, let's rock", and in 2015 I said "2016 is gonna be the year, with 14nm and performance increases north of 50% over the previous gen". Now we are in 2016, and we have $1,200 GPUs that are not even as good as I hoped $600 GPUs would be. So for me it is now "wait for 2017, or screw PC gaming and grab a PS4 NEO and Nintendo NX".
I know there are other options besides a Titan X, but are they any better? Is a €500 GTX 1070 really a good upgrade from a 3-year-old 290X? Is a €700 mid-range GTX 1080 really worth the money? You can get a PS4 + Xbox One + 3DS and games for the kind of money you need for one GPU.
To be fair, it is largely AMD's fault. They basically have nothing to compete with, and when they do, they manage to screw it up somehow. In hindsight, all of AMD's launches in the last 4 years were screw-ups: the 7970 was underclocked, and bad drivers made it look bad against the GTX 680; the reference 290X we don't even need to talk about; the nice Fury and Nano cards came too little, too late against the 980 Ti. I know that things are much more complicated, but I don't want to go too deep into the details. It would lead nowhere.
Nvidia is doing things right from their perspective. They are basically celebrating a new record profit every quarter now.
And on the other hand, what good games are left? What is coming? The huge, good-looking blockbuster games are all on PS4: Uncharted 4, Horizon Zero Dawn, Final Fantasy XV, Gran Turismo Sport, God of War IV, Detroit: Become Human, Final Fantasy VII Remake, etc.
What do we have coming for PC? I can't even name 3 PC games that need a monster rig. Star Citizen?
One thing has to be said, which I've also said before: if I had money, and I mean like 6 digits in my bank account, then I would buy a Titan X as well. But why do people who aren't well off want to purchase GPUs that expensive?
(This is not a post to make you feel bad. Buy what you want. I only know that people like me are alienated from PC gaming by the ever-increasing prices of good GPUs. And those increasing GPU prices also lead to me not buying a new case, RAM, CPU etc. What good would a Titan X do with a 2500K and 1,333MHz DDR3 RAM at 1080p?)
I'm sorry, but that's just not true. You should really go back and check out past flagships. The ONLY past Nvidia flagship that even came close to the size of the GK110 or GM200 dies (the original TITAN, TITAN BLACK, 780, 780 Ti, 980 Ti, TITAN X, etc.) is the GTX 480 with its 529mm² die, plus the subsequent GTX 580, which was just a 480 with bug fixes, shrunk a tad down to 520mm². And 520mm² sits roughly between, say, the GTX 980's ~398mm² die and the ~600mm² of the TITAN BLACK or Maxwell TITAN X.
Other than that, EVERY other flagship GPU has been WAY smaller than ANY of the TITAN cards. The GTX 285 was a 470mm² die; the Nvidia 9800 GTX was only a measly 324mm²! Even the unprecedented 8800 Ultra was only 480mm², and it was just as expensive as the TITAN at the time and nearly impossible to get a hold of. The 7900 GTX and co. were only 196mm² tiny little things!
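To put the die sizes from the last two paragraphs side by side (using the commonly cited figures; the GTX 980's GM204 die is usually listed as 398mm²):

```python
# Die sizes (mm²) for the flagships discussed above, commonly cited figures.
die_mm2 = {
    "7900 GTX (G71)":    196,
    "9800 GTX (G92)":    324,
    "GTX 285 (GT200b)":  470,
    "8800 Ultra (G80)":  480,
    "GTX 580 (GF110)":   520,
    "GTX 480 (GF100)":   529,
    "GTX 980 (GM204)":   398,
    "TITAN X (GM200)":   600,
}

# How the big Fermi die compares to each camp:
fermi = die_mm2["GTX 580 (GF110)"]
print(fermi - die_mm2["GTX 980 (GM204)"])  # 122mm² bigger than the GTX 980 die
print(die_mm2["TITAN X (GM200)"] - fermi)  # 80mm² smaller than the big Maxwell die
```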
It's just not true that the TITANs are some new phenomenon and that we are now getting x60-series chips labeled as x80 cards. The fact of the matter is it literally wasn't POSSIBLE to make a die that big (~600mm²) until Nvidia moved to the 28nm node and got skilled enough at dealing with bad yields. The GTX 680 was originally intended to be the literal max full-fat Kepler flagship at the time; the engineering team members have specifically stated that they never thought it was possible to make a card like the OG TITAN when Kepler first launched. Unfortunately, things like dark silicon and even worse yields do limit the size of this 16nm TITAN somewhat, but it's still a pretty decent bump, especially on a brand-new node only a few months into production.