Originally Posted by ThePath
Looks decent. Remember that Maxwell shaders are better than Kepler's: 3200 Maxwell cores are better than 3200 Kepler cores.
Shader power isn't everything.
40 ROPs is pretty low, but more than I would expect for the mid-range part. We'll probably see 48-64 ROPs on big Maxwell.
Originally Posted by Dangur
Whats the point of 4gigs with 256bit?
Bus width isn't the same as bandwidth, and nothing about a 4GiB buffer demands extreme bandwidth to be useful.
It's true that, with current and past titles, utilizing a 4GiB texture/frame buffer would require settings that most cards equipped with 256-bit memory buses could not handle well. However, the habit people have of linking the two, absent any reason, needs to stop. It just leads to foolishly incorrect assumptions.
If a game has a lot of high res textures, or if multi-GPU solutions are used, that large buffer will come in very handy, even on mid-range parts.
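To make the distinction concrete, here's a minimal sketch of how theoretical bandwidth falls out of bus width and effective memory clock. The specific clocks below are made-up examples, not leaked specs:

```python
def bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Theoretical peak bandwidth in GB/s: bytes per transfer * transfers per second."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# A 256-bit bus running 7000 MT/s effective GDDR5:
print(bandwidth_gbs(256, 7000))   # 224.0 GB/s
# A wider 384-bit bus on slower memory isn't far ahead:
print(bandwidth_gbs(384, 5000))   # 240.0 GB/s
```

Point being: bus width alone tells you little about bandwidth, and buffer size is a separate axis entirely.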
Originally Posted by Rookie1337
I thought the whole point of GDDR5 was you could have a smaller bus-width and still utilize a higher VRAM fully (to a degree) and still have as good or better bandwidth?
GDDR5 does provide more bandwidth at the same bus width as previous standards. Its effective clock speed is higher.
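A rough sketch of why, assuming the commonly cited per-pin transfer multipliers (GDDR3 moves 2 bits per pin per base clock, GDDR5 moves 4) and hypothetical clock speeds:

```python
def effective_mtps(base_clock_mhz, transfers_per_clock):
    """Effective per-pin data rate in MT/s."""
    return base_clock_mhz * transfers_per_clock

# Same 256-bit bus, made-up but plausible clocks:
gddr3 = effective_mtps(1000, 2) * 256 / 8 / 1000   # GB/s
gddr5 = effective_mtps(1500, 4) * 256 / 8 / 1000   # GB/s
print(gddr3, gddr5)  # 64.0 192.0
```

So the same 256-bit bus carries several times the bandwidth once the effective data rate goes up.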
Originally Posted by Exilon
If Nvidia is using a 256-bit bus, it's because they think the huge L2 cache in Maxwell will counteract the reduced memory bandwidth.
It's cheaper. A smaller memory controller means a smaller die, and fewer memory channels mean fewer PCB traces.
256-bit is perfectly sufficient for a mid-range to upper-mid range part.
Originally Posted by Popple
What are the main factors of a card that determine its antialiasing performance?
For conventional AA types (MSAA, SSAA, and their derivatives)? Fill rate (ROP count * clock speed) and memory bandwidth (memory bus width * effective clock speed) are the largest factors.
For post-processing AA (MLAA, FXAA, SMAA, etc)? Shader performance is the prime factor.
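Both of those figures are easy to estimate on the back of an envelope. The part below is hypothetical, just to show the arithmetic:

```python
def fill_rate_gpix(rops, core_clock_mhz):
    """Theoretical pixel fill rate in Gpixels/s."""
    return rops * core_clock_mhz / 1000.0

def mem_bandwidth_gbs(bus_width_bits, effective_clock_mhz):
    """Theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_clock_mhz / 1000.0

# Hypothetical mid-range card: 40 ROPs at 1100 MHz, 256-bit bus at 7000 MT/s
print(fill_rate_gpix(40, 1100))        # 44.0 Gpix/s
print(mem_bandwidth_gbs(256, 7000))    # 224.0 GB/s
```

MSAA/SSAA multiply the per-pixel fill and bandwidth cost roughly by the sample count, which is why these two specs dominate there, while shader-based filters like FXAA barely touch them.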
Originally Posted by Alatar
The specs do sound somewhat reasonable for a GM204 part. Cuda core count is maybe a bit high imo but we'll see.
That said there's two things that aren't being said here:
1) There seems to be a price listed in that table which is quite silly. Prices of GPUs are up in the air until the last couple of weeks before launch.
2) This comes from OBR. OBR might have a good record with Bulldozer, but he has an absolutely terrible record with anything related to Nvidia GPUs. He's the guy who was passing around photoshopped versions of random chips calling them GK114, or some random card with an IHS calling it GK100...
Grain of salt...
Agreed. That price, especially, does not seem reasonable for a non-flagship part.
Originally Posted by AnnoyinDemon
I thought the bus speed affects the memory. I still have a lot to learn...
How big is the cache on a GPU?
Bus width does affect memory bandwidth, but it's only one component of it, and even theoretical memory bandwidth figures say little about the memory sub-system as a whole.
GPUs typically have very small caches. I think the 750 Ti had 2MiB of L2; Kepler only had 256KiB.