Originally Posted by Arizonian
Exactly my point to those who claim 1GB of VRAM isn't enough for a single monitor, but that 2GB across 3 monitors is somehow enough even when usage exceeds 2GB.
So people think that going over 2GB of VRAM across 3 monitors doesn't have any effect on FPS, but going over your VRAM on one monitor does?
Anyone else see the problem with this hypocrisy?
For a single monitor, the GTX 570 is going to be enough, as Pioneer said in a previous post, and many others concur.
A fair number of people rag on nVidia for not catering to surround/extreme-resolution gamers with their standard cards, and while AMD do do better on that front with 2GB as standard on the 6900 series, I'm growing increasingly of the opinion that Surround requires classically "insane" amounts of VRAM. When AMD ship a card with 4GB of VRAM per GPU, then I'll think they're actually taking it seriously.
Same with nVidia.
VRAM usage doesn't scale linearly with the number of screens used. The framebuffer scales linearly, but nothing else does. In any given scene, the wider it is, the more likely you are to need more textures, but in most cases a few hundred loaded textures will probably cover it. Look at Deus Ex: Human Revolution; perhaps not the best example, but its VRAM usage in Surround is only marginally higher than on a single monitor. So either it's got a fantastically efficient texture load/unload algorithm going on behind the scenes, or Surround VRAM utilisation is tremendously engine-dependent.
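To put some rough numbers on that claim, here's a toy model: the framebuffer triples with three monitors, but the texture pool (the bulk of VRAM use) stays roughly constant. The 4 bytes/pixel (8-bit RGBA) figure matches the discussion below; the 900 MB texture pool is an illustrative assumption, not a measurement from any particular game.

```python
# Toy model: framebuffer scales linearly with monitor count,
# but the texture pool does not.
BYTES_PER_PIXEL = 4    # 8-bit R, G, B plus alpha
TEXTURE_POOL_MB = 900  # assumed roughly constant scene texture set

def total_vram_mb(monitors, width=1920, height=1080):
    framebuffer_mb = monitors * width * height * BYTES_PER_PIXEL / 2**20
    return framebuffer_mb + TEXTURE_POOL_MB

single = total_vram_mb(1)
surround = total_vram_mb(3)
print(f"single: {single:.0f} MB, surround: {surround:.0f} MB, "
      f"increase: {100 * (surround - single) / single:.1f}%")
```

Under those assumptions, going from one 1080p screen to three raises total VRAM use by under 2%, which is about what the Deus Ex observation above looks like.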
Originally Posted by Jodiuh
AFAIK, no PhysX in BF3. At least, I didn't see that option while playing.
FWIW, I could feel the stutter as it approached 1000+ VRAM via Afterburner.
I believe I can offer an explanation, or at least one that accounts for some of the issues described...
The framebuffer reserves an amount of VRAM for each frame as it is being drawn and sent to the monitor, plus room for however many frames you have set to render ahead. Because it stores this as raw pixel information, it's not exactly small: you have 8-bit data for each of R, G and B, and on modern cards alpha as well. Once you include overhead such as lookup tables and other 'housekeeping' info for the card, each frame starts to get a bit on the large side. That VRAM is essentially unavailable for other purposes - textures, shader code, etc. - so it limits how much else you can fit.
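A rough sketch of that reservation math, ignoring the housekeeping overhead mentioned above. The render-ahead count of 3 is an assumption here (a common driver default), not a universal figure:

```python
# Raw framebuffer reservation: current frame plus render-ahead queue,
# stored as uncompressed 8-bit RGBA pixels.
def framebuffer_reservation_mb(width, height, frames_ahead=3,
                               bytes_per_pixel=4):
    per_frame = width * height * bytes_per_pixel  # raw pixel data
    total = per_frame * (1 + frames_ahead)        # current + queued frames
    return total / 2**20

# Single 1080p monitor vs. 3x1080p Surround (5760x1080):
print(f"1x1080p: {framebuffer_reservation_mb(1920, 1080):.1f} MB")
print(f"3x1080p: {framebuffer_reservation_mb(5760, 1080):.1f} MB")
```

Around 95 MB for Surround doesn't sound like much, but on a 1GB card that's nearly a tenth of the total gone before a single texture is loaded.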
This means that you'll start seeing what AMD/ATi termed "Hypermemory" and nVidia terms "TurboCache" swapping to system RAM when the GPU runs out of texture room. And compared to VRAM, which can transfer at rates easily topping 150GB/s on higher-end cards, system RAM is slow. Almost glacially so. You won't even get the raw speed of system RAM, as requests have to pass through a DirectX layer, a driver layer, and a kernel layer before reaching what the card needs. Add in the latency inherent in pulling data out of system RAM for the GPU (PCI-E bus latency, PCI-E controller, memory controller, the RAM itself, then all the way back again), and the card has to still need the data by the time it finally arrives - because it takes a long, long time.
Now, if your situation is anything like mine when I investigated this a while ago (over a year now!), monitoring system RAM usage would probably show it absolutely full to bursting when the huge framerate dives happen.
This is when the caching to system RAM fails, and it falls back to a pagefile
on your HDD/SSD. Which, regardless of whether you're running the latest Sandforce SSD or a 4200RPM HDD, is so epically slow, it's like watching Grand Prix Continental Drift occurring in real time. Whenever I saw huge fps drops when testing, it was always when I'd pushed my GPU too far in terms of how much I was asking it to load, and it was paging first to system RAM, then to a pagefile. I confirmed this by stuffing another 6GB of RAM into my system and seeing if it still did it at the same points. It did not. Don't get me wrong - framerates were still appalling, but because it wasn't hitting up the HDD for texture memory, they didn't drop into the low-single-digits, and instead remained in the teens.
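Some back-of-envelope timing makes the tiering above concrete. The 150GB/s VRAM figure comes from earlier in this post; the effective system-RAM-over-PCI-E rate (~6GB/s, roughly PCI-E 2.0 x16 in practice) and the HDD rate (~100MB/s sequential) are assumptions for illustration only:

```python
# Time to fetch a hypothetical 256 MB texture working set from each
# tier of the memory hierarchy, at assumed sustained bandwidths.
MB = 2**20
GB = 2**30

tiers = {
    "VRAM":                150 * GB,  # stated earlier in the post
    "system RAM via PCI-E":  6 * GB,  # assumed effective rate
    "HDD pagefile":        100 * MB,  # assumed sequential rate
}

texture_set = 256 * MB

for name, bandwidth in tiers.items():
    ms = texture_set / bandwidth * 1000
    print(f"{name:>20}: {ms:8.2f} ms")
```

Under those assumptions the same fetch goes from under 2ms in VRAM to tens of milliseconds over PCI-E (already more than a whole frame at 60fps) to over two and a half seconds from disk, which lines up with framerates in the teens versus low single digits.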
You'll possibly also see some odd behaviour at times: if you're asking the GPU to load too much texture data, it doesn't even try to fit it into VRAM and just goes straight to system RAM. In these scenarios, monitoring software can show what amounts to a random VRAM usage number alongside high system RAM usage, yet terrible framerates at the same time.
At least, this is the hypothesis I've formed after significant testing. I have no way of actually confirming whether, when system RAM is full, Windows is intelligent enough to push less access-speed-dependent data into the pagefile first, but it seems to tally with all the evidence I've gathered.