
[Twitter] Jen-Hsun introduces the new NVIDIA TITAN X

post #1171 of 3587
Quote:
Originally Posted by magnek View Post

Umm what? I'm simply saying in the dreamland where we had a choice on the buyout of AMD's graphics division, I'd much rather see Intel buy them out than anybody else.

My joke post failed............ :(
post #1172 of 3587
Quote:
Originally Posted by Seyumi View Post

Geez, people, can you take this somewhere else? The last 20 pages were the same 6-12 people bickering back and forth about pricing. I just officially wasted 30 minutes of my life browsing through nothing but ranting. Yes, pricing sucks, but unless you have billions to fab your own GPUs, there's nothing you can do except lower your expectations by buying something more affordable or quit the hobby altogether. End of story.
Quote:
Originally Posted by skypine27 View Post

Yup, no **** bro.

I once had to say to a guy (I think it was about the upcoming Battlefield 1):
"You don't have to consume things you don't want to."

It seems people have lost the basic building blocks of common sense. Don't buy something you don't want.

In all fairness, what is there left to talk about, given the scarce details nVidia has revealed thus far?

We have discussed specs, performance estimates, GDDR5X vs HBM2, and other technical stuff. I don't see why outcry at the price is a problem. If you want a sanitized "praise the lord JHH for such an impressive product" thread you won't find it here. If the thread has run its course then perhaps we should ask the mods to close it.

And no offense l88bastar, but the thread only took a turn for the worse after a certain post you made.

With that out of the way, please carry on with your non-price related discussions now.
Edited by magnek - 7/24/16 at 12:18am
post #1173 of 3587
Can we talk about JHH's jacket now! I like his jeans too.

Enough jokes ;)

What's the die size, what do you think?
post #1174 of 3587
aaaaaaand we talked about that too! ;)
Quote:
Originally Posted by ChevChelios View Post

So how big will this die be? ~440-450mm2? Or 470-480mm2?

Stock vs stock, can we expect this to be ~30% faster than a 1080 in games? Maybe 35%?

Quote:
Originally Posted by EightDee8D View Post

312 x 1.5 = 468mm^2

so around 460-480, yes.

This isn't a fully enabled GP102 though, so expect another card with the full die later.

Quote:
Originally Posted by tajoh111 View Post

Probably 490-510mm2.

314mm2 = 7.2 billion transistors.

12/7.2 = 1.666

1.666* 314 = 523mm2.

Larger dies have higher transistor density, so somewhat less than the linear estimate: 490 to 510mm2.
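
If anyone wants to poke at those numbers themselves, here's a quick back-of-envelope version of the two estimates above (just a sketch; the ~7.2B/12B transistor counts are the ones from the posts, and the 3-6% "density bonus" is my own assumption to land in the 490-510mm2 range):
Code:
# Die-size guesstimate for GP102 following the quoted logic.
# Assumed inputs: GP104 at ~314 mm^2 / ~7.2B transistors, GP102 rumored at ~12B.
gp104_area_mm2 = 314.0
gp104_transistors_b = 7.2
gp102_transistors_b = 12.0

# Naive linear scaling (same transistor density as GP104) -> ~523 mm^2
naive_area = gp104_area_mm2 * (gp102_transistors_b / gp104_transistors_b)
print(f"naive linear estimate: ~{naive_area:.0f} mm^2")

# Larger dies tend to pack transistors a bit more densely, so a hypothetical
# 3-6% density bonus pulls the estimate down into the 490-510 mm^2 range.
for density_bonus in (1.03, 1.06):
    print(f"with a {density_bonus - 1:.0%} density bonus: ~{naive_area / density_bonus:.0f} mm^2")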
post #1175 of 3587
Quote:
Originally Posted by Kpjoslee View Post

That is a really low expectation lol. Fury X is already around 1070 level.

I expect Vega with HBM2 to be halfway between the GTX 1080 and Titan XP.
Edited by renejr902 - 7/24/16 at 12:39am
post #1176 of 3587
Historically, neither company has ever succeeded in getting 1:1 scaling when doubling up on a chip. 290 (non-X) was basically a doubled up 270X and had 1.6x the performance. Fury X was a doubled up 280X (itself a rebranded 7970 GE) and had 1.7-1.9x the performance depending on resolution. So, if we literally double up the P10 chip used in RX 480 (double everything, from shaders to ROPs to TMUs etc), a low estimate would put it around 1070 performance, and a high estimate would put it pretty much equal to 1080 performance. And this is assuming the big chips can maintain the same clocks as the smaller chips.

So a hypothetical Vega with 4608 shaders would be around 1080 performance. Suppose we can get another 20% performance out of it through OCing. Assuming the most optimistic 1:1 scaling, that means we need it running around 1520 MHz. So this Vega chip with 4608 shaders overclocked to 1520 MHz would be 20% ahead of a stock 1080. But we know that Polaris struggles to even reach 1400+ MHz game-stable clocks, and scaling is never 1:1.

Also, current rumors say Vega will only have 4096 shaders, which means there's a chance it may not even reach stock 1080 performance. Obviously there are still a lot of unknowns, and HBM2 remains a big wild card. But working with what data we have at the moment, I'm gonna say within 10% of 1080; + if you're optimistic, - if you're pessimistic.
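
Putting rough numbers on that scaling argument (a sketch only; the relative-performance figures for the 1070 and 1080 vs the RX 480, and the ~1266 MHz RX 480 boost clock, are assumptions I plugged in for illustration, not benchmark data):
Code:
# "Doubled-up P10" estimate, using the historical 1.6x-1.9x scaling range above.
# All relative-performance numbers are illustrative assumptions, not measurements.
rx480_perf = 1.0      # baseline
gtx1070_perf = 1.6    # assumed, relative to an RX 480
gtx1080_perf = 1.9    # assumed, relative to an RX 480

for scaling in (1.6, 1.9):  # low / high ends of the historical doubling efficiency
    doubled = rx480_perf * scaling
    print(f"{scaling:.1f}x scaling -> {doubled:.2f}x RX 480 "
          f"(vs ~{gtx1070_perf:.1f}x for a 1070, ~{gtx1080_perf:.1f}x for a 1080)")

# The OC example from the post: +20% over the RX 480's ~1266 MHz boost (assumed)
print(f"clock needed for +20%: ~{1266 * 1.2:.0f} MHz")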
post #1177 of 3587
I expect 500mm2-520mm2 too, but I'm really not sure.

About performance, I expect 30-35% faster than the GTX 1080 too, plus another 10-15% after overclocking: just enough for an average of 65-70 fps at 4K with no HairWorks in The Witcher 3, and a 65-70 fps average in Rise of the Tomb Raider. Crysis 3 and Far Cry Primal will have a hard time getting 60 fps at 4K, maybe 55 fps average, but I really hope for 60+.
post #1178 of 3587
;)
Quote:
Originally Posted by magnek View Post

Titan X = 1.5x 980 in terms of chip spec, but in actual games only ended up being 25-35% faster. And this with only a 6.7% boost deficit relative to the 980 on average (1119 MHz vs 1194 MHz across 9 games). If you look through that Anandtech review, in the two games with the smallest boost clock difference (Crysis 3 and GRID Autosport), Titan X ends up being only ~35% faster even at 4K.

Titan XP = 1.4x 1080 in terms of chip spec (well shader count really since we know nothing about number of TMUs and ROPs), but the official listed boost clock at 1513 MHz is already 12.7% slower than 1080's official boost of 1733 MHz. So there's just no way stock vs stock the Titan XP will end up being 40% faster. As I've said previously, stock vs stock I expect Titan XP to be ~25% faster.

lol maybe we're arguing over price because we truly have run out of things to discuss :D :/
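
For anyone who wants the quoted spec-ratio vs clock argument as a quick calculation, here's a naive multiplicative bound (it ignores memory bandwidth and real-game scaling; the unit ratios and boost clocks are the ones quoted above):
Code:
# Naive upper bound: unit-count ratio x boost-clock ratio, per the quoted post.
cases = {
    "Titan X  vs 980":  {"spec_ratio": 1.5, "clock_mhz": 1119, "ref_clock_mhz": 1194},
    "Titan XP vs 1080": {"spec_ratio": 1.4, "clock_mhz": 1513, "ref_clock_mhz": 1733},
}

for name, c in cases.items():
    clock_ratio = c["clock_mhz"] / c["ref_clock_mhz"]
    bound = c["spec_ratio"] * clock_ratio
    print(f"{name}: units x{c['spec_ratio']:.1f}, clocks x{clock_ratio:.2f} "
          f"-> at most ~{bound - 1:.0%} faster")

Even this optimistic bound comes out well under 40% for the Titan XP, which is the point of the quote.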
post #1179 of 3587
Welp, I guess we're done then. See you guys in a week when the reviews come out so I can read people bickering about a card I have no intention of buying. :P
post #1180 of 3587
Quote:
Originally Posted by magnek View Post

Historically, neither company has ever succeeded in getting 1:1 scaling when doubling up on a chip. 290 (non-X) was basically a doubled up 270X and had 1.6x the performance. Fury X was a doubled up 280X (itself a rebranded 7970 GE) and had 1.7-1.9x the performance depending on resolution. So, if we literally double up the P10 chip used in RX 480 (double everything, from shaders to ROPs to TMUs etc), a low estimate would put it around 1070 performance, and a high estimate would put it pretty much equal to 1080 performance. And this is assuming the big chips can maintain the same clocks as the smaller chips.

So a hypothetical Vega with 4608 shaders would be around 1080 performance. Suppose we can get another 20% performance out of it through OCing. Assuming the most optimistic 1:1 scaling, that means we need it running around 1520 MHz. So this Vega chip with 4608 shaders overclocked to 1520 MHz would be 20% ahead of a stock 1080. But we know that Polaris struggles to even reach 1400+ MHz game-stable clocks, and scaling is never 1:1.

Also, current rumors say Vega will only have 4096 shaders, which means there's a chance it may not even reach stock 1080 performance. Obviously there are still a lot of unknowns, and HBM2 remains a big wild card. But working with what data we have at the moment, I'm gonna say within 10% of 1080; + if you're optimistic, - if you're pessimistic.

I never understood why Radeon cards have more core units and TFLOPS than NVIDIA cards yet never perform like them. Can someone here explain that to me briefly? Thanks.

(About memory bandwidth, I know they don't compress memory the way NVIDIA can, but regarding core units and TFLOPS, I don't understand why they can't perform better than NVIDIA.)
Edited by renejr902 - 7/24/16 at 12:46am