Using examples like "8xMSAA @ 1440p" or "3x3 SSAA @ 4K" as the logic for advising someone to spend a SUBSTANTIAL amount of money on GPUs, much less someone who has stated that 1080p is the highest resolution they are using and will be using for the foreseeable future, is like telling them they need a Ferrari 599 GTB for their daily commute on 65 mph highways instead of a BMW M3, because the Ferrari can top 200 mph while the M3 can "only" go 185 mph. It's the kind of advice I see given, both in the enthusiast PC world and in the semi-pro/pro auto racing world, by people who don't have experience with the equipment, whether that's gaming on 3GB 780 Tis @ 1440p or sitting behind the wheel of a full-race-dress Italian exotic at 170 mph on the track.
While you can learn a lot from reading, there comes a point where you cannot learn anything more without firsthand experience.
That said, I think the advent of the first affordable 4K displays, combined with the "new" consoles (specifically, their being far closer to a PC in components than previous console generations), has caused a tremendous amount of FUD.
I game primarily on either a 21.5" 1080p panel or a higher-end 27" 1440p panel, and I have a number of different GPU setups (670 FTW / 680 LTG / 780 Ti Kingpin, all 2-way SLI, plus 3x 580 3GB Classified Ultra; in the past year I have also had 2x 7970 LTG, 2x R9 290X LTG, and a few others).
There are fewer than half a dozen games that are truly limited by 2GB of VRAM @ 1440p, and only two (Wolfenstein and Watch Dogs) hit that wall regardless of NORMAL AA levels; frankly, that's just garbage optimization. Everything else I've played runs fine with 2xMSAA + FXAA/SMAA on 2 gigs, which at 1440p is visually superior to 4xMSAA alone.
Running 8xMSAA @ QHD or higher is a waste of resources: it has almost zero visual benefit over 4x, which itself is not even necessary in most cases. Intelligent allocation of resources to image quality goes a long way. For example, combining low (2x) MSAA with shader-based SMAA works so well because the two techniques attack aliasing in such different ways (and FXAA/SMAA are nearly free) that the end result is much more than the sum of the parts.
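To put rough numbers on that waste, here's a back-of-the-envelope sketch (my own illustration, not from any specific game: it ignores driver overhead, framebuffer compression, and the many extra render targets modern engines use) of how raw MSAA framebuffer memory scales with sample count at 2560x1440:

```python
def aa_framebuffer_mib(width, height, samples):
    """Rough MSAA framebuffer size: RGBA8 color (4 B) + D24S8
    depth/stencil (4 B) per sample, plus a single-sample resolve target."""
    msaa_bytes = width * height * samples * (4 + 4)
    resolve_bytes = width * height * 4
    return (msaa_bytes + resolve_bytes) / 2**20

for samples in (2, 4, 8):
    print(f"{samples}xMSAA @ 1440p: ~{aa_framebuffer_mib(2560, 1440, samples):.0f} MiB")
# 2x ~70 MiB, 4x ~127 MiB, 8x ~239 MiB
```

So even in this simplified model, stepping from 2x to 8x more than triples the base framebuffer cost (roughly 170 MiB extra) for a difference you can barely see, which is exactly why 2xMSAA + SMAA is the smarter allocation.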
Excluding Fail Dogs, I have yet to see anything come close to really pushing 2x 780 Ti Kingpins @ "just" 1398/8100, but again, I don't use more than 4xMSAA + FXAA/SMAA (rarely more than 2x), as I have not found a single benefit from doing so. It is a pure e-peen thing.
VRAM allocated DOES NOT EQUAL VRAM required/used.
My friend has 3x Titan Blacks, and at the same settings in the same games they indicate 3.8-5GB "in use" (1440p). Yet when he runs only two of the three cards (same "in use," i.e. allocated, VRAM), my "inadequate" 3-gig cards (2.3-2.5GB allocated) consistently perform 8-14 percent higher.
I saw the same thing before I ditched the 290X Lightnings: higher indicated "used" memory, but even at the highest clocks they could hit, the 290Xs were still 9-14 percent slower than the lowly 3-gig cards I have. And that was with me running "just" 1328/7800; at higher, but still not maxed, clocks of 1480/8200, the gap grows by a further 6-10 percent.
This is all firsthand experience, from equipment I spent my hard-earned money on. Why would I have sold the (one worthwhile model of) 290X and stuck with the Kingpins if they were worse in any way? I wouldn't have, because on the exact same system, the Kingpins are significantly faster.
But if you require 8xMSAA (or the manly AA: 3x3 SSAA) even at QHD+ resolutions, I can't stop you. Just realize that 4 gigs is exactly as future-proof as 3... which is ZERO!