Originally Posted by
Offler
It is quite simple to estimate the theoretical power of any GPU: count its ROPs, compute cores, and similar core units, and multiply those counts by the clock frequency.
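To make that arithmetic concrete, here is a minimal Python sketch of the estimate. The function names are mine, and the factor of 2 assumes each core can issue one FP32 fused multiply-add (counted as two operations) per cycle:

    # Theoretical single-precision throughput:
    # shader cores x core clock x 2 ops/cycle (assuming one FMA per core per cycle).
    def sp_gflops(shader_cores, core_clock_mhz):
        return shader_cores * core_clock_mhz * 2 / 1000.0

    # Theoretical pixel fill rate: ROPs x core clock,
    # assuming each ROP writes one pixel per clock.
    def gpixels_per_s(rops, core_clock_mhz):
        return rops * core_clock_mhz / 1000.0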
I have been telling people for a few years now that AMD's GCN is ahead of its time, but people kept looking at partial data instead of the whole picture. Comparing the R9 290X with the GTX 980 Ti has to take into account that Nvidia's VRAM runs at 7000 MHz at stock (not sure about the headroom for OC) and that the card has 96 ROPs (render output units), while its compute core delivers about 5600 GFLOPS of single-precision performance.
The R9 290X has just 64 ROPs, 5000 MHz VRAM, and the same roughly 5600 GFLOPS of single precision. And both cards get compared head to head, with Nvidia considered slightly better?
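Running the same back-of-the-envelope numbers for both cards, using roughly 1000 MHz as the reference core clock for each (actual boost clocks vary by board, so treat these as ballpark figures):

    # Publicly listed unit counts; 1000 MHz is an assumed reference clock.
    cards = {
        "GTX 980 Ti": dict(cores=2816, clock_mhz=1000, rops=96),
        "R9 290X":    dict(cores=2816, clock_mhz=1000, rops=64),
    }
    for name, c in cards.items():
        gflops = c["cores"] * c["clock_mhz"] * 2 / 1000.0  # FMA = 2 ops/cycle
        gpix = c["rops"] * c["clock_mhz"] / 1000.0         # 1 pixel/clock per ROP
        print(f"{name}: {gflops:.0f} GFLOPS, {gpix:.0f} Gpixel/s fill rate")

Both come out around 5632 GFLOPS of compute, while the 980 Ti has roughly 50% more theoretical fill rate from its extra ROPs.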
No way, guys, not when you need 32 more ROPs and a lot more VRAM bandwidth to achieve a similar score.