I was thinking about AdoredTV's "rumors" in light of the latest information on GTX 1660 and GTX 1650 GPUs.
Basically AMD has to compete with the RTX 2060, GTX 1660 Ti and below, as that's the stronghold of the market.
If we look at Vega 64 versus Radeon VII, the 7nm shrink was about 33% less die area.
Just how plausible is it, based on the usual benchmarks? It is highly probable the primary change from the shrink to 7nm will be leveraging the additional efficiency for higher clocks (1800-2100MHz), in addition to the extra memory bandwidth from GDDR6, which lets them keep the same memory bus width.
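As a sanity check on that shrink figure, here's the arithmetic using the published die sizes (495mm2 for Vega 10, 331mm2 for Vega 20, per TechPowerUp's database; treat those as assumptions):

```python
# Sanity check on the Vega 64 -> Radeon VII die shrink.
# Die sizes from TechPowerUp's GPU database (assumed here, not measured).
vega10_area = 495  # mm^2, Vega 64 (14nm)
vega20_area = 331  # mm^2, Radeon VII (7nm)

shrink = 1 - vega20_area / vega10_area
print(f"Vega 10 -> Vega 20 die shrink: {shrink:.0%}")  # ~33%
```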
RX 580 performance at 75W
: 33% less die area than the Polaris 10 chip (232mm2) would be about 155mm2, which is in line with Polaris 11 (RX 460). So that one checks out more or less for the board itself (not the GPU cost to the consumer, but maybe a reference design with a cheap blower). Performance-wise an RX 560 is about 40% behind an RX 570, so the claim is plausible: we've seen Radeon VII go up to 2000-2100MHz, while the RX 560 (1024 shaders) only runs at 1200-1300MHz or so. It's a similar case with Vega 16 (the 75W MacBook Pro mobile GPU): https://www.techpowerup.com/gpu-spec...-vega-16.c3331
Just bumping clocks to 1800MHz would yield a ~40% improvement, and moving to GDDR6 would also improve memory bandwidth to near RX 580 levels. AMD has traditionally used Firestrike as the performance reference, so the "RX 3060" would easily be able to overtake the GTX 1060 on that benchmark. On HWBOT a 1700MHz RX 560 (1024 shaders) scores a 9K Firestrike GPU score (https://hwbot.org/benchmark/3dmark_-...=0#interval=20
). It doesn't fare as well in Timespy: at 1700MHz it was only able to obtain a ~2.6K Timespy GPU score, so it is likely memory bandwidth or ROP bound (an RX 480 performs on par with a GTX 1060 in Timespy).
If you go back a generation to Pitcairn, the HD 7850 (also 1024 shaders, but on 28nm) had a 130W TDP at 850MHz; scaling it all the way up to 1800MHz should result in approximately RX 480 performance. https://www.techpowerup.com/gpu-spec...-hd-7850.c1055
Likewise, proportionally, the RX 560's 1024 shaders are exactly half the RX 570's 2048.
The price is also plausible given the recent leak of the GTX 1650 at a $180 base price: $130 for a 4GB GDDR6 reference board is possible if the profit margins are slimmer (7nm is more expensive than 12nm), and because AMD is making 7nm CPUs at the same time to amortize the node.
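The back-of-envelope math for this section can be sketched in a few lines of Python; the die size and clocks are the ballpark figures quoted above, not measurements:

```python
# Ballpark math behind "RX 580 performance at 75W".
polaris10_area = 232                     # mm^2, Polaris 10 on 14nm
shrunk_area = polaris10_area * (1 - 0.33)
print(f"Hypothetical 7nm die: ~{shrunk_area:.0f} mm^2")   # ~155 mm^2

# RX 560 (1024 shaders) at ~1250MHz pushed to 1800MHz, assuming
# performance scales roughly linearly with clock (optimistic).
rx560_clock, target_clock = 1250, 1800
uplift = target_clock / rx560_clock - 1
print(f"Clock uplift: ~{uplift:.0%}")    # ~44%, near the ~40% gap to the RX 570
```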
Vega 56 at 120W
: this is harder since Vega 56 uses HBM2, but if we scale an RX 470 (120W, Polaris 10, 1200MHz boost) by 50% to 1800MHz we would get only roughly GTX 1660 Ti / GTX 1070 performance. The RX 470 BIOS typically sets an 85W GPU power limit, whereas the RX 480 typically has 110W. At 1500MHz with 2100MHz memory, an RX 570 reaches a ~16-18K Firestrike GPU score, likely limited by memory bandwidth (https://hwbot.org/benchmark/3dmark_-...=0#interval=20
). The move to GDDR6 will largely alleviate this, although if the 32 ROPs are kept without refinement the render backend will likely become the weak point. This falls in line with the GTX 1070 FE (16-18K) and GTX 1660 Ti (~17K Firestrike). Vega 56 has a 150W silent BIOS / 165W default BIOS at <1500MHz; once you add fans and such, it ends up at 210W.
Likewise, the GTX 1070 (and Vega 56) is about 50-60% faster than the RX 470 in Timespy (6.5K vs ~4K GPU score).
If you look at the R9 390 (a 275W 28nm card) that the RX 470 replaced, its performance at 1200MHz is around 17K Firestrike, so the same thing at 1800MHz would be another couple thousand points. (https://hwbot.org/benchmark/3dmark_-...=0#interval=20)
The R9 390 is actually ahead of the RX 570 & RX 580 in Unigine Superposition.
Pricing is less plausible: the GTX 1660 is supposed to launch at $230 (base price), so launching this at $200 would be odd pricing. This would more likely be a >$250 card, as the GTX 1660 Ti runs $290-330 and also uses GDDR6. There's value in having more VRAM and more memory bandwidth, as it is likely a 256-bit bus, which would land it around 384GB/s or more (close to Vega 56), plus the 7nm process. Used GTX 1070s already run around $200, so this would do decently at $250.
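The RX 470 scaling argument above reduces to simple proportions; this sketch assumes performance scales linearly with clock, which ignores the bandwidth and ROP limits already mentioned:

```python
# The RX 470 -> 1800MHz scaling argument, assuming linear clock scaling
# (optimistic: it ignores the memory bandwidth and ROP limits noted above).
rx470_clock, target_clock = 1200, 1800
scale = target_clock / rx470_clock
print(f"Clock scale factor: {scale:.2f}x")            # 1.50x

rx470_timespy = 4000  # ~4K Timespy GPU score, figure from the post
print(f"Projected Timespy GPU score: ~{rx470_timespy * scale:.0f}")
# ~6000: roughly GTX 1070 / Vega 56 territory (~6.5K)
```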
Vega 64+15% at 150W
: unlikely at $250, since it's rumored to compete with the RTX 2070 and GTX 1080. It would also hurt Radeon VII sales (Radeon VII is ~30% faster than Vega 64). I think at a minimum it will be around $350-400; they can advertise RTX 2070 performance ($500-600, though the RTX 2070 has been seen below $480) at RTX 2060 pricing. It's a business, not a charity. Even used GTX 1080s run near $300. You'd basically need over double the performance of the RX 480, so unless it's 40 CUs (as opposed to 36) at 1800+MHz, I doubt it's achievable at 150W TDP. Vega 64 is memory bandwidth bound, so maybe a 256-bit GDDR6 bus clocked all the way up to around 448GB/s or more would allow this to land slightly ahead of Vega 64. If the claim were Vega 64 performance at half the power it would be more plausible, but an extra performance bump on top of that I'd be quite wary of.
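To see why I'm wary, here's a naive shader-throughput comparison (CU count × clock) against the RX 480; the 40 CU / 1800MHz configuration is the rumor's assumption, and per-CU IPC is assumed unchanged from GCN:

```python
# Naive shader-throughput comparison vs the RX 480 (36 CU @ ~1266MHz boost).
# The 40 CU / 1800MHz part is hypothetical; per-CU IPC assumed unchanged.
rx480_cus, rx480_clock = 36, 1266
navi_cus, navi_clock = 40, 1800

scale = (navi_cus * navi_clock) / (rx480_cus * rx480_clock)
print(f"Naive throughput scale vs RX 480: {scale:.2f}x")
# ~1.58x, short of the >2x needed to land 15% above Vega 64
```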
Superposition chart sources: