Originally Posted by brettjv
Originally Posted by Woundingchaney
There is still the question of whether 4GB can even be readily accessed via a 256-bit bus. I personally see 4GB being throttled by a 256-bit bus, much like the notion of sucking a pizza through a straw. 2GB of usage, give or take, is most likely adequate, but simply stacking memory on a chip without widening the bus is little more than marketing when we are talking about this amount of hardware.
There are hardware differences between the 7900 series and the 600 series in terms of ROPs and memory controllers, though a 256-bit bus is still limiting when it comes to actually utilizing 3+ gigs of memory.
I shall once again reiterate my take on this subject, with a few additions for clarity:
I believe that the 4GB card may very well have utility in SLI, at very high resolutions (i.e. surround) in a number of gaming scenarios ... part of the reason for this being what Sonny brought up earlier, i.e. in SLI each card is only responsible for 1/2 of your FPS, which in a certain sense means that your slightly limited bandwidth becomes less critical ... I believe his overall notion of 'needing' more bandwidth as FPS goes up is fundamentally sound, although there's a chicken/egg conundrum going on there. In a bandwidth-constrained scenario what we'd likely see is that SLI scaling is a bit less than we'd hope.
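To make that 'bandwidth scales with FPS' idea concrete, here's a toy model. The per-frame traffic figure is purely an illustrative guess of mine, not a measured number; the point is only the proportionality, and that under AFR SLI each card renders half the displayed frames:

```python
# Toy model (illustrative numbers, not measurements): if each frame moves
# roughly `traffic_gb_per_frame` gigabytes between the core and memory,
# the bandwidth a card needs scales with the frames per second IT renders.
# Under AFR SLI, each card renders only half the displayed frames.

def required_bandwidth_gbs(fps_per_card, traffic_gb_per_frame):
    """Approximate memory bandwidth (GB/s) a single card must sustain."""
    return fps_per_card * traffic_gb_per_frame

display_fps = 60
traffic = 2.0  # GB moved per frame -- an assumed, illustrative figure

single = required_bandwidth_gbs(display_fps, traffic)      # one card renders all 60
sli = required_bandwidth_gbs(display_fps / 2, traffic)     # each card renders 30
print(f"single card: ~{single:.0f} GB/s, each SLI card: ~{sli:.0f} GB/s")
```

Which is the chicken/egg part in miniature: halve the per-card frame rate and the same bus suddenly looks half as constrained.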
OTOH, I do not believe the 4GB card has much utility in a single-card scenario, except when one is using huge textures, like in a heavily modded Skyrim, in which case it very well might. I think there's a pretty large difference in bandwidth requirements between accessing textures and accessing the frame buffer, esp. when you're doing something like MSAA + deferred rendering.
This is because, for the most part, the textures are already 'there' in memory, and all that needs to happen is the mapping of the texture onto the geometry. I don't think that raising texture resolution requires all that much additional core power or bandwidth. However, when you raise the screen resolution and start applying AA, you introduce a much larger 'communication' requirement between the core and the memory (building the contents of larger, and more heavily sampled, frame buffer(s)). Hence core power and bandwidth become more critical.
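A quick back-of-envelope sketch of why resolution + MSAA is the expensive axis. The byte counts here are simplified assumptions (32-bit color plus 32-bit depth per sample, no compression); real drivers tile and compress render targets, but the scaling trend is the same:

```python
# Back-of-envelope: a render target grows linearly with pixel count AND
# with MSAA sample count -- and that whole buffer is traffic the core
# must read/write every frame.  Assumes 8 bytes per sample
# (32-bit color + 32-bit depth), ignoring driver-side compression.

def framebuffer_mb(width, height, msaa_samples, bytes_per_sample=8):
    """Approximate size of one color+depth render target, in MB."""
    return width * height * msaa_samples * bytes_per_sample / (1024 ** 2)

for (w, h), msaa in [((1920, 1080), 1), ((1920, 1080), 4),
                     ((2560, 1600), 4), ((2560, 1600), 8)]:
    print(f"{w}x{h} @ {msaa}xMSAA: ~{framebuffer_mb(w, h, msaa):.0f} MB per target")
```

And with deferred rendering you have several such targets (the G-buffer), each rebuilt every frame, which is exactly where a 256-bit bus starts to feel the squeeze.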
Aside from the one benchmark from the German site linked to upthread that shows that Skyrim at 8xMSAA + 2xSGSSAA at 2560x1600 is *just* playable on a 4GB card, and definitely not playable on a 2GB card, there's really no tangible evidence in the form of benchmarks that a single 4GB card provides utility over a single 2GB card in terms of either raw FPS, or in maximum playable settings.
However, none of these review sites, TTBOMK, is testing games modded with very high-resolution textures. I *DO* believe that it's likely that the 4GB card may have utility in this scenario in terms of better fps/smoother gameplay. And again, that's because high-resolution textures really don't strain the core's power, nor the bandwidth. You really just want physical room to keep them handy so that they don't need fetching from system ram in real-time as you play.
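To put rough numbers on why texture mods stress capacity rather than bandwidth: a texture just sits resident in VRAM, and doubling its resolution quadruples its footprint while mipmapping keeps the per-frame fetch cost roughly constant. The sizes below assume uncompressed RGBA8 with a full mip chain (~1.33x overhead); that's a simplifying assumption, since real games typically use compressed formats:

```python
# Why high-res texture packs want CAPACITY: each texture is resident in
# VRAM whether or not it's being sampled this frame.  Assumes uncompressed
# RGBA8 (4 bytes/texel) with a full mip chain (~4/3 overhead).

def texture_mb(size, bytes_per_texel=4, mip_factor=4 / 3):
    """Approximate resident VRAM footprint of a square texture, in MB."""
    return size * size * bytes_per_texel * mip_factor / (1024 ** 2)

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: ~{texture_mb(size):.0f} MB resident")
```

A few hundred 4096x4096 replacements and you've blown well past 2GB, without asking anything extra of the core or the bus, which is precisely the scenario where the 4GB card earns its keep.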
So the above constitutes my answer to the OP's question as to WHY he sees people saying that the 670/680 'can't use' > 2GB. It's not a question of PHYSICAL usage. OF COURSE it's possible to create a gaming scenario where >2GB is physically 'used', and of course the 670/680 are perfectly capable of doing so. Nobody who's saying what the OP is asking about literally means that the memory physically cannot be addressed and utilized by these cards.
What remains in question, however, is how much UTILITY can be gained by slapping 4GB on one of these cards in terms of increasing one's maximum playable settings. The available evidence as of NOW suggests the answer is: it's extremely rare for it to have any.
So this is the 'source' of the rumor that the OP is asking about: the evidence for the benefit of 4GB on one of these cards ... is extremely scarce. And the evidence for the contrary ... is all over the place.
This all being said, it may be possible to find some 'testimonials' from real-life folks out there who've upgraded from the single 2GB card to the 4GB card and discovered that they now get higher playable settings in some game or another. I've not personally seen them, but they certainly may be out there. I'm open to changing my mind on this, in the face of actual, tangible evidence.
My best guess, though, based on the available evidence atm, is that any increase in playable settings on the 4GB card will almost entirely involve texture mods, and almost never involve resolution/AA levels. And that's because of a lack of core power, and/or a lack of bandwidth coming into play BEFORE memory capacity becomes the limiting performance factor ... when memory usage increases through the typical manner of increased resolution/AA levels.
Basically I think that the 256-bit 2GB GDDR5 memory subsystem outfitted on the Kepler cards is very well-matched to the overall architecture when running a single-card setup, esp. running games in their stock state. If you're a big 'texture mod' fan, getting the 4GB card for a single-card setup probably makes reasonable economic sense (or if you're just rich) ... but otherwise ... I'm not convinced it does.