Discussion starter · #1 ·
Quote:
The last time we did an article on PCI-Express scaling was when graphics cards were finally able to saturate the bandwidth of PCI-Express x16. Not only was it a time when PCI-Express 2.0 was prevalent, but also when the first DirectX 11 GPU hit the market, and that was over six years into the introduction of the PCI-Express bus interface. Since 2009, thanks to fierce competition between NVIDIA and AMD, GPU performance levels have risen at a faster rate than ever, and the latest generation of high-end GPUs launched by the two GPU rivals adds support for the new PCI-Express 3.0 interface. The new interface sparked new questions from users, like "Do I need a new motherboard to run a PCI-Express 3.0 card?", "Will my new PCI-Express 3.0 card be much slower on an older motherboard?" or "My motherboard supports only x8 for multiple cards, will performance suck?"
Source
 
Wish they would do x1 for 1.1 and 2.0. I have an eGPU, and until we get Thunderbolt 2.0, x1 is the highest we can get at the moment.
 
Quote:
Originally Posted by G3RG View Post

Interesting to see how little even PCIe 1.1 x4 actually bottlenecks. I also now get to rub this in the faces of the stupid people who keep telling me PCIe 3.0 is a must-have because 2.0 is such a bottleneck.
No one in their right mind has ever said PCI-E 2.0 was a bottleneck with the kind of single GPU this article tests. You only start to see bottlenecks when you are running 3-4 GPUs in high-resolution Surround monitor configs and the slots get kicked down to x8.

With a single GPU you are fine on PCI-E 2.0, and always have been. This article doesn't test anything I mention above that would actually stress the PCI-E bus.
 
Quote:
Originally Posted by CallsignVega View Post

No one in their right mind has ever said PCI-E 2.0 was a bottleneck with the kind of single GPU this article tests. You only start to see bottlenecks when you are running 3-4 GPUs in high-resolution Surround monitor configs and the slots get kicked down to x8.
With a single GPU you are fine on PCI-E 2.0, and always have been. This article doesn't test anything I mention above that would actually stress the PCI-E bus.
This. I wish they would have tested multi-GPU setups to see what the hit was across the board.
 
I had to put my brother's HD 6850 in a PCIe 2.0 x4 slot because the sound card would only fit into the x16 slot without blocking airflow to the graphics card. This puts my mind at ease; I was worried I'd severely bottlenecked his rig.
 
I'd like to see the same testing on an AMD system, where the PCIe data still has to take a ride through the system bus.
 
Of course a single card won't bottleneck... now add in a second and third card and watch the bottlenecks appear. This is like testing CPUs with some benchmark based around late-'90s code like SuperPi and trying to say it applies to anything but benchers.
 
GTX 680 at x4 3.0 is 96%. How isn't that a bottleneck? This is the main reason I'm going to go for a dual GPU; I don't have to worry about the PCIe lane bullcrap.
I would love it if they showed the differences between a GTX 690 at x16 vs. x8. If big Kepler is the card it's supposed to be, x8 could possibly be a serious limitation for the 700-series cards. Ultimately, CPU architectures need to be expanded to offer more PCIe lanes; PCIe 4.0+ availability or widespread implementation of PEX chips are the only solutions for the future. (I don't like the PEX idea.) People also forget that we don't have to just worry about GPUs. What about add-on devices: Wi-Fi, USB, FireWire, eSATA, AND SSDs? These all steal PCIe lanes! Anyway, /rant.
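For anyone who wants to sanity-check the lane math, here's a quick back-of-the-envelope sketch. The per-lane signalling rates and encoding overheads are my own figures, not something from the review:

```python
# Rough theoretical one-way PCIe bandwidth per generation and link width.
# Per-lane rates: 1.1 = 2.5 GT/s, 2.0 = 5 GT/s (both 8b/10b encoding),
# 3.0 = 8 GT/s with the leaner 128b/130b encoding.
GT_PER_S = {"1.1": 2.5, "2.0": 5.0, "3.0": 8.0}
ENCODING = {"1.1": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130}

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Theoretical peak one-way bandwidth of a PCIe link, in GB/s."""
    return GT_PER_S[gen] * ENCODING[gen] * lanes / 8  # GT/s -> GB/s after encoding

for gen in ("1.1", "2.0", "3.0"):
    for lanes in (4, 8, 16):
        print(f"PCIe {gen} x{lanes:<2}: {link_bandwidth_gbs(gen, lanes):5.2f} GB/s")
```

By those numbers, x4 3.0 (~3.9 GB/s) is basically the same pipe as x8 2.0 or x16 1.1, which lines up with the ~96% result the review shows for x4 3.0.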
 
Quote:
Originally Posted by Brutuz View Post

Of course a single card won't bottleneck... now add in a second and third card and watch the bottlenecks appear. This is like testing CPUs with some benchmark based around late-'90s code like SuperPi and trying to say it applies to anything but benchers.
Yeah, agreed--in fact, in reading the intro of their write-up, I kinda thought that was going to be their overall goal, especially in discussing Ivy Bridge properties specifically:
Quote:
Another impressive feature of Ivy Bridge Core processors, provided they're paired with Intel Z77 Express chipset, is that the second x8 link from the CPU root complex can be split as two x4 links, making x8/x4/x4 possible, giving some motherboards 3-way SLI and CrossFireX capabilities without clogging the DMI chipset bus (that 4 GB/s pipe between the CPU and chipset), which is better left untouched by graphics cards to help with today's bandwidth-hungry SSDs.
I thought it was a pretty mundane choice to showcase yet again that PCIe scaling (i.e. graphics card performance) with a single card shows relatively little effect even at PCIe 1.1 x8. That really doesn't have much to do with Ivy Bridge so much as with GPUs not needing the bandwidth that current PCIe can provide. Even SB CPUs didn't bottleneck single cards, so why would you think IB would be any different for a single card?


Unless I am missing something grossly obvious?
 
I am really surprised that there are in fact a few charts like this one:

[chart from the review showing scaling from x16 2.0 / x8 3.0 up to x16 3.0]

An actual change going from x16 2.0 / x8 3.0 up to x16 3.0. Usually the differences are within 1-2%, but there are a few cases where it actually makes a difference. That is a 13% increase in performance going all the way up to x16 3.0; now just imagine multiple GPUs and those slots getting knocked down to x8. So you can see PCI-E 3.0 is needed on high-end setups, not so much for single-GPU users.
 
Quote:
Originally Posted by CallsignVega View Post

I am really surprised that there are in fact a few charts like this one:

An actual change going from x16 2.0 / x8 3.0 up to x16 3.0. Usually the differences are within 1-2%, but there are a few cases where it actually makes a difference. That is a 13% increase in performance going all the way up to x16 3.0; now just imagine multiple GPUs and those slots getting knocked down to x8. So you can see PCI-E 3.0 is needed on high-end setups, not so much for single-GPU users.
Agreed--which is why THAT would have made for a much better (or rather, more interesting) investigation.
 
I don't think multi-GPU setups are as relevant to this as the fact that some less savvy users have had concerns about using a PCIe 3.0 card on a 2.0 or lower system. Most of us here may know better, but it's possible some people needed confirmation that PCIe 3.0 is not needed for a single GPU, which is probably what the majority of such people would be using.

To side with you guys though, I was hoping for multi-card testing as well.
 
Quote:
Originally Posted by Shmerrick View Post

GTX 680 at x4 3.0 is 96%. How isn't that a bottleneck? This is the main reason I'm going to go for a dual GPU; I don't have to worry about the PCIe lane bull****.
I would love it if they showed the differences between a GTX 690 at x16 vs. x8. If big Kepler is the card it's supposed to be, x8 could possibly be a serious limitation for the 700-series cards. Ultimately, CPU architectures need to be expanded to offer more PCIe lanes; PCIe 4.0+ availability or widespread implementation of PEX chips are the only solutions for the future. (I don't like the PEX idea.) People also forget that we don't have to just worry about GPUs. What about add-on devices: Wi-Fi, USB, FireWire, eSATA, AND SSDs? These all steal PCIe lanes! Anyway, /rant.
Whoa, whoa, calm down. PCIe 3.0 just came out, and no GPU on the planet comes anywhere near saturating it.
 
Quote:
Originally Posted by Shmerrick View Post

GTX 680 at x4 3.0 is 96%. How isn't that a bottleneck? This is the main reason I'm going to go for a dual GPU; I don't have to worry about the PCIe lane bull****.
I would love it if they showed the differences between a GTX 690 at x16 vs. x8. If big Kepler is the card it's supposed to be, x8 could possibly be a serious limitation for the 700-series cards. Ultimately, CPU architectures need to be expanded to offer more PCIe lanes; PCIe 4.0+ availability or widespread implementation of PEX chips are the only solutions for the future. (I don't like the PEX idea.) People also forget that we don't have to just worry about GPUs. What about add-on devices: Wi-Fi, USB, FireWire, eSATA, AND SSDs? These all steal PCIe lanes! Anyway, /rant.
If you ask me, your logic is flawed. Why would you even be concerned about PCIe x4? 96% on only 4 lanes actually sounds pretty good, and it's also an indication that an x8 link would be sufficient for one card. That being said, on a Z68 or Z77 platform you'll have 20 PCIe lanes, so, theoretically, you could have a pair of GTX 680s @ x8/x8 3.0 along with all of the other devices that borrow PCIe lanes and not be starving for bandwidth.
Please enlighten us on exactly where the problem with that is.
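To illustrate the lane-budget point, here's a rough sketch. The lane counts (16 PCIe 3.0 lanes from an Ivy Bridge CPU plus 8 PCIe 2.0 lanes from the Z77 PCH) and the list of add-on devices are my own assumptions for illustration, not figures from this thread:

```python
# Rough lane-budget sketch. Assumed figures, not from this thread: 16 PCIe 3.0
# lanes from an Ivy Bridge CPU plus 8 PCIe 2.0 lanes from the Z77 PCH, and a
# hypothetical set of add-on devices hanging off the PCH.
CPU_LANES = 16   # PCIe 3.0, reserved for graphics
PCH_LANES = 8    # PCIe 2.0, shared by add-on devices

gpu_links = [8, 8]  # two GTX 680s at x8/x8 off the CPU
addons = {"wifi": 1, "usb3_controller": 1, "esata": 1, "pcie_ssd": 4}  # hypothetical

assert sum(gpu_links) <= CPU_LANES, "GPUs exceed the CPU lane budget"
assert sum(addons.values()) <= PCH_LANES, "add-ons exceed the PCH lane budget"

print(f"CPU lanes used by GPUs:    {sum(gpu_links)}/{CPU_LANES}")
print(f"PCH lanes used by add-ons: {sum(addons.values())}/{PCH_LANES}")
```

The point being that the Wi-Fi/USB/eSATA/SSD crowd normally hangs off the chipset lanes, so they aren't stealing the CPU lanes feeding the graphics cards.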
 
Quote:
Originally Posted by metal_gunjee View Post

I'd like to see the same testing on an AMD system where the PCI-e data still has to take a ride thru the system bus.
An AMD FX CPU with HT 3.1 (HyperTransport at 3.2 GHz) can handle data transfers of up to 25.6 GB/s (12.8 GB/s unidirectional) at factory clocks.
I don't remember where, but I've seen a review showing that the GTX 680 on PCIe 3.0 has a maximum data rate (GPU to CPU) of 9.1 GB/s (unidirectional, measured with Sandra's GPU benchmarks).
I know other peripheral devices depend on the HyperTransport bus too, but if we slightly overclock the HT bus, AMD can get better results from a PCIe 3.0 GPU (at least until a single GPU performs as fast as quad 680s). I also know there will absolutely be a difference between PCIe 2.0 and PCIe 3.0, but as long as there is 60+ fps at any resolution then, at least for AMD, it's not a problem. Yes, AMD will fall behind in benchmarks.

No offense, man! Just saying what was on my mind.
And like you, I'd really like to see this type of review done on an AMD system too.
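For what it's worth, here's a minimal sketch putting those numbers side by side. The HT link width and clock and the 9.1 GB/s GPU figure are the ones quoted in that post, not anything I've measured:

```python
# Side-by-side of the numbers in that post (all figures are the ones quoted
# above, not my own measurements). HT 3.1 at 3.2 GHz is a 16-bit DDR link.
ht_clock_ghz = 3.2          # HyperTransport 3.1 at stock
ht_link_bytes = 2           # 16-bit wide
ht_one_way = ht_clock_ghz * 2 * ht_link_bytes   # DDR -> 12.8 GB/s per direction

gtx680_gpu_to_cpu = 9.1     # GB/s over PCIe 3.0, the Sandra figure cited above

print(f"HT 3.1 one-way bandwidth:    {ht_one_way:.1f} GB/s")
print(f"GTX 680 GPU-to-CPU (cited):  {gtx680_gpu_to_cpu:.1f} GB/s")
print(f"Headroom left on the HT bus: {ht_one_way - gtx680_gpu_to_cpu:.1f} GB/s")
```

So on paper a single card still fits under the HT ceiling, which is the poster's point; multiple cards sharing that bus would be a different story.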
 
Quote:
Originally Posted by sumitlian View Post

An AMD FX CPU with HT 3.1 (HyperTransport at 3.2 GHz) can handle data transfers of up to 25.6 GB/s (12.8 GB/s unidirectional) at factory clocks.
I don't remember where, but I've seen a review showing that the GTX 680 on PCIe 3.0 has a maximum data rate (GPU to CPU) of 9.1 GB/s (unidirectional, measured with Sandra's GPU benchmarks).
I know other peripheral devices depend on the HyperTransport bus too, but if we slightly overclock the HT bus, AMD can get better results from a PCIe 3.0 GPU (at least until a single GPU performs as fast as quad 680s). I also know there will absolutely be a difference between PCIe 2.0 and PCIe 3.0, but as long as there is 60+ fps at any resolution then, at least for AMD, it's not a problem. Yes, AMD will fall behind in benchmarks.
No offense, man! Just saying what was on my mind.
And like you, I'd really like to see this type of review done on an AMD system too.
Sounds reasonable to me.
I'm actually an AMD fan myself (but not a fanboy, if you know what I mean), but I've wondered for a long time just how much advantage an on-die PCIe controller has.
 
Quote:
Originally Posted by Shmerrick View Post

GTX 680 at x4 3.0 is 96%. How isn't that a bottleneck? This is the main reason I'm going to go for a dual GPU; I don't have to worry about the PCIe lane bull****.
4% is within the margin of error.
Quote:
Originally Posted by Shmerrick View Post

I would love it if they showed the differences between a GTX 690 at x16 vs. x8. If big Kepler is the card it's supposed to be, x8 could possibly be a serious limitation for the 700-series cards. Ultimately, CPU architectures need to be expanded to offer more PCIe lanes; PCIe 4.0+ availability or widespread implementation of PEX chips are the only solutions for the future. (I don't like the PEX idea.)
Doubling the cards does NOT mean doubling the data, though; i.e., the texture data sent to the cards is not increased, since it is mirrored. System architectures already offer enough bandwidth to cover future expandability for years to come, as demonstrated in the review. What is a PEX chip? Do you mean PLX?

Quote:
Originally Posted by Shmerrick View Post

People also forget that we don't have to just worry about GPUs. What about add-on devices: Wi-Fi, USB, FireWire, eSATA, AND SSDs? These all steal PCIe lanes! Anyway, /rant.
Well, that's the point of this benchmark: to see what happens when there are few PCIe lanes. And since PCIe is point-to-point, other devices using PCIe have no direct impact on the GPU anyway.
 