Originally Posted by ZealotKi11er
Yes 290X and 290 were good cards for used market but that does not make AMD money. This was the biggest problem why Nvidia got market share.
I didn't buy any of my Hawaii parts used, though I did sell a few. Just before the 300 series dropped, and shortly after, new 290Xes were well under 300 dollars.
AMD's problem with the R9 290/290X was lack of supply early on and inadequate cooling on the reference designs.
Originally Posted by Superplush
I personally see Fiji as trying to get in on HBM v1 before all the HBM v2 cards came out, even with 4gb limit it isn't as limiting as 4GB GDDR.
Memory capacity is memory capacity, and 4GiB of HBM on Fiji is just as much or as little of a limitation as 4GiB of any other memory standard would be.
Originally Posted by Majin SSJ Eric
I still think that the speed of HBM has blunted the 4GB limitation somehow.
This has never even come close to making anything vaguely resembling sense. You can't alleviate a bottleneck by making the part suffering from it (rather than causing it) faster. The larger the gap between local memory speed and the interface that attaches the part to where the rest of the assets are stored, the more acute the effect of the capacity limitations of the former should be.
Originally Posted by Majin SSJ Eric
It just seems to me that anecdotally I keep hearing from Fiji owners who are having no VRAM issues at resolutions that kill 4GB GDDR5 cards. Could just be a myth, but a prevalent one if it is...
If Fiji does handle memory-capacity-limited scenarios better than contemporary 4GiB GDDR5 cards, it's not because of HBM. It's possible that Fiji caches resources better than Hawaii, and being a newer incarnation of GCN, Fiji has better delta color compression than any other AMD GPU except Tonga, but that would be the case no matter what kind of memory was attached to it.
There are tests out there that do show Fiji performance falling off faster with increasing VRAM use than cards with more memory, in some cases even Hawaii parts. Shadow of Mordor has always been a prime example of this: http://techreport.com/blog/28800/how-much-video-memory-is-enough
Of course, most current games don't really need more than 4GiB, except at the most extreme of settings, so Fury's 4GiB usually does just fine...for now.
I'd really like to see how the card reacts, compared to 6 or 8GiB parts, in Elite: Dangerous, as with the custom texture settings I use, I frequently see VRAM allocation max out, which is usually followed by a short period of hitching as the game seemingly evicts assets.
Originally Posted by epic1337
on a side note, this is also the reason why a fast system ram can give you a slight edge in gaming performance.
PCI-E bandwidth and latency are generally quite a bit worse than system memory's, and that link is where the bottleneck sits when retrieving assets that cannot be stored locally, or that have been evicted.
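Some back-of-the-envelope arithmetic shows why eviction hurts so much. The bandwidth figures below are ballpark assumptions (Fiji-class HBM around 512 GiB/s, PCI-E 3.0 x16 around 15.75 GiB/s theoretical), not measurements:

```python
# Rough sketch: time to move an asset from local VRAM vs. re-fetching it
# over PCI-E. Bandwidth numbers are assumed round figures for illustration.

def fetch_time_ms(size_mib: float, bandwidth_gib_s: float) -> float:
    """Milliseconds to move size_mib of data at the given bandwidth."""
    return size_mib / 1024 / bandwidth_gib_s * 1000

texture_mib = 64.0                                # hypothetical texture asset
local_hbm = fetch_time_ms(texture_mib, 512.0)     # local HBM read, ~512 GiB/s
pcie3_x16 = fetch_time_ms(texture_mib, 15.75)     # PCI-E 3.0 x16, ~15.75 GiB/s

# The PCI-E fetch is tens of times slower than the local read, which is
# exactly what shows up as hitching when VRAM is over-committed.
print(f"local: {local_hbm:.3f} ms, over PCI-E: {pcie3_x16:.3f} ms")
```

The faster the local pool, the worse that ratio gets, which is why faster VRAM can't paper over a capacity shortfall.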
Originally Posted by Rayce185
Is mining still that profitable? With the new cards coming out and nvidia boosting their DP performance it may be again...
GPU mining hasn't been profitable for some time, as the most popular proof-of-work algorithms are all minable on ASICs that are orders of magnitude more efficient. There are some smaller coins using different algorithms that still work best on GPUs, and some money can be made there, but these coins are inherently more of a gamble.
DP performance is irrelevant here, as hardly any coin has ever used an algorithm that benefits from double-precision floating point.
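For instance, Bitcoin's proof of work is just double SHA-256 over an 80-byte block header, which is pure integer/bitwise work; floating-point throughput of any precision never enters into it. A minimal sketch (the all-zero header is a placeholder, not a real block):

```python
import hashlib

def block_hash(header: bytes) -> bytes:
    """Bitcoin-style proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

header = bytes(80)  # dummy all-zero 80-byte header for illustration
digest = block_hash(header)

# Miners interpret the digest as a number and compare it against a target;
# an ASIC just runs this one fixed function astronomically fast.
print(digest.hex())
```

Since the whole job is one fixed hash function, it maps perfectly onto dedicated silicon, which is why GPUs were left behind.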
Originally Posted by ronnin426850
Maybe Litecoin or Dogecoin. Surely not Bitcoin, there are huuuuge mines that make it terribly hard for a GPU to get any return.
Litecoin and Dogecoin use scrypt and there are ASICs for this as well. They don't outclass GPUs to the same degree as Bitcoin (SHA256) ASICs do, but they are good enough that most people mining Litecoins with GPUs now will lose money.
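The reason scrypt ASICs had a harder time is that scrypt is deliberately memory-hard: its N parameter sizes an internal lookup table, so raw logic alone doesn't win. A rough sketch using Python's standard library scrypt with Litecoin-style parameters (N=1024, r=1, p=1; the input bytes are a made-up placeholder, and Litecoin hashes the block header as both password and salt):

```python
import hashlib

# scrypt with small, Litecoin-like cost parameters. N=1024 and r=1 mean the
# internal table is only ~128 KiB, which is why scrypt ASICs eventually
# became practical despite the memory-hardness.
header = b"placeholder block header bytes"
digest = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)
print(digest.hex())
```

With such a small N, the memory wall is low, so the GPU advantage over ASICs was only ever temporary.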