
FAQ: Does my P67/Z68 motherboard support PCI Express 3.0?

#1 ·
I have seen a lot of questions about whether or not a P67/Z68 motherboard supports PCI Express 3.0. To clear up the confusion, I decided to take what I have written in the past and post it here with some editing.

First, the CPU provides a maximum of 16 PCI Express lanes, whether it is running PCI Express 2.0 or 3.0. PCI Express 3.0 doubles the performance of 2.0: on 2.0 each lane can do roughly 1 GB/s; on 3.0 each lane can do roughly 2 GB/s. Think of it as a freeway: no widening is done, but the speed limit has doubled. The PCI Express lanes that graphics cards use come from the CPU itself.
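To put rough numbers on the freeway analogy, here is a quick back-of-the-envelope sketch in Python. It only restates the per-lane figures used in this post (approximations, not exact spec numbers):
Code:
# Rough per-lane throughput used in this post, in GB/s.
# (Real spec numbers differ slightly due to encoding overhead.)
PER_LANE_GBPS = {"2.0": 1.0, "3.0": 2.0}

def link_bandwidth(gen, lanes):
    """Approximate total bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(link_bandwidth("2.0", 16))  # 16.0 GB/s for PCIe 2.0 x16
print(link_bandwidth("3.0", 16))  # 32.0 GB/s for PCIe 3.0 x16
print(link_bandwidth("3.0", 8))   # 16.0 GB/s: 3.0 x8 matches 2.0 x16
Note how PCIe 3.0 x8 matches PCIe 2.0 x16; this matters for the SLI discussion below.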

[Image: Intel Z68 chipset block diagram]

(See how the PCI Express 2.0 x16 link is wired directly to the CPU; the CPU here is 2nd-gen Sandy Bridge.)

[Image: Intel Z75 chipset block diagram]

(Again, see how the PCI Express lanes come directly from the CPU.)

Sandy Bridge CPUs have a PCI Express 2.0 controller built in.
Ivy Bridge CPUs have a PCI Express 3.0 controller built in.

So why are there "Gen3" motherboards at all? Why can't you just swap the CPU and get PCI Express 3.0? The answer is PCI Express switches. A PCI Express switch is used to split a single x16 link into two x8 links. On motherboards that support SLI, Nvidia mandates that both GPUs connect to the CPU directly. Since the CPU can only supply a single x16 link, a chip called a PCI Express switch is used to split it into two x8 links. The diagram below is from the Gigabyte GA-Z68XP-UD3 manual (a motherboard that supports SLI), page 8.

[Image: block diagram from the GA-Z68XP-UD3 manual]


See how the PCI Express switch sits between the slots and the CPU? This chip determines whether or not you get PCI Express 3.0. Intel had no plans for PCI Express 3.0 when the Sandy Bridge platform was first released, so motherboard manufacturers used PCI Express 2.0 switches on Gen2 motherboards. That is why you cannot get PCI Express 3.0 on a Gen2 board: the switch that is supposed to split the x16 link into two x8 links doesn't support 3.0. Now take a look at the GA-Z68A-D3-B3, a motherboard that does NOT support SLI.

[Image: block diagram from the GA-Z68A-D3-B3 manual]


Since it only has one x16 slot, all 16 lanes connect directly to the CPU; no switch is used. Therefore, when you install an Ivy Bridge CPU, the slot will run at PCI Express 3.0.
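If it helps to see the rule in one place, here is a small Python sketch of the idea (the function name and the numbers are mine, purely illustrative):
Code:
# Illustrative sketch: a slot runs at the lowest PCIe generation
# supported by everything in its path (CPU, plus switch if present).
def slot_generation(cpu_gen, switch_gen=None):
    """cpu_gen/switch_gen: 2 for PCIe 2.0, 3 for PCIe 3.0.
    switch_gen is None when the slot is wired straight to the CPU."""
    path = [cpu_gen] if switch_gen is None else [cpu_gen, switch_gen]
    return min(path)

print(slot_generation(cpu_gen=3))                # 3: GA-Z68A-D3-B3 style board, Ivy Bridge
print(slot_generation(cpu_gen=3, switch_gen=2))  # 2: Gen2 SLI board, Ivy Bridge
print(slot_generation(cpu_gen=2, switch_gen=3))  # 2: Sandy Bridge on any board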

In fact, from the ASRock website:
Quote:
By adapting PCI-E 3.0 quick switch IC onboard, supports the Next-Gen PCI-E 3.0! PCI Express 3.0 can maximize the bandwidth for the next-gen PCI Express 3.0 VGA cards, provide ultimate graphics performance.
http://www.asrock.com/microsite/pcie3/overview.html

On ASUS website:
[Image: ASUS PCIe 3.0 ready motherboard list]

Notice how the motherboards above are either Gen3 boards, or boards that don't support SLI (single x16 slot).
http://event.asus.com/2011/mb/pcie3_ready/

Summary:

* When Sandy Bridge is used with any LGA1155 motherboard, it will always run at PCI Express 2.0. Always.

* When Ivy Bridge is used with a motherboard that does not support SLI, it will run at PCI Express 3.0.

* When Ivy Bridge is used with a P67/Z68 motherboard that does support SLI, the board must state that it supports PCI Express 3.0 and/or Gen3 in order to run at PCI Express 3.0. Otherwise it will run at 2.0.

Keep in mind that some BIOS/UEFI setups let you set the PCI Express speed manually. You still have to fulfill one of the conditions above to use PCIe 3.0.
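For anyone who prefers code to bullet points, the three rules above boil down to a few lines of Python (just a restatement of this summary, nothing official):
Code:
# Restatement of the three summary rules above (illustrative only).
def pcie_gen(cpu, supports_sli, board_says_gen3):
    """cpu is 'sandy' or 'ivy'; the flags describe the motherboard."""
    if cpu == "sandy":
        return "2.0"   # Rule 1: Sandy Bridge is always 2.0
    if not supports_sli:
        return "3.0"   # Rule 2: single x16 slot, wired straight to the CPU
    return "3.0" if board_says_gen3 else "2.0"  # Rule 3: the switch decides

print(pcie_gen("ivy", supports_sli=True, board_says_gen3=False))  # 2.0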

PCI Express lanes coming from the chipset (x1 and x4) still run at 2.0. The same is true for the H77/Z75/Z77 chipsets, regardless of which CPU is used.

Z77 motherboards only: if your motherboard has a PCI Express x4 slot wired to the CPU, it can only be used when an Ivy Bridge CPU is installed; the slot is disabled with Sandy Bridge. Consult your motherboard manual/diagram to find out whether it is wired to the CPU or the chipset.
 
#2 ·
This is good info, but it doesn't convey the fact that not even two Radeon 7970s can saturate PCIe x16, nor does it mention that there is less than 1% difference in real-world performance between x16 and x8.

Current graphics cards can't use the maximum bandwidth of PCIe 2.0; you'd probably need quad CrossFire or quad SLI to even notice a difference between PCIe 2.0 and 3.0 with current video cards.
 
#3 ·
Quote:
Originally Posted by Pitbully

This is good info, but it doesn't convey the fact that not even two Radeon 7970s can saturate PCIe x16, nor does it mention that there is less than 1% difference in real-world performance between x16 and x8.
Current graphics cards can't use the maximum bandwidth of PCIe 2.0; you'd probably need quad CrossFire or quad SLI to even notice a difference between PCIe 2.0 and 3.0 with current video cards.
Very true. Currently it takes 3-way SLI/CF or more with 7970/680-class cards to saturate PCIe 2.0. However, for those who intend to keep their CPU/motherboard for a couple of years, PCIe 3.0 is worth keeping in mind, as we are getting close to saturating PCIe 2.0 x8. It should also benefit those who pair a GPU with a RAID card or PCIe-based SSD, as future RAID cards and PCIe SSDs are very likely to use PCIe 3.0.
 
#5 ·
Question:

I'm more looking to confirm what I'm fairly sure of than to ask a question. My board is the GA-Z68XP-UD4. With a BIOS update, it is compatible with Ivy Bridge. Specifications here: http://www.gigabyte.us/products/product-page.aspx?pid=3910&dl=1#sp

It does look, however, like it will be stuck at PCIe 2.0 even if I use an Ivy Bridge chip, right?
 
#7 ·
Quote:
Originally Posted by ShtKck

Question:
I'm more looking to confirm what I'm fairly sure of than to ask a question. My board is the GA-Z68XP-UD4. With a BIOS update, it is compatible with Ivy Bridge. Specifications here: http://www.gigabyte.us/products/product-page.aspx?pid=3910&dl=1#sp
It does look, however, like it will be stuck at PCIe 2.0 even if I use an Ivy Bridge chip, right?
Which revision do you have? Look at the edge of the board. Rev 1.0 uses a PCIe 2.0 switch; rev 1.3 uses a 3.0 switch.
 
#8 ·
Would I be right in saying that a dual-GPU graphics card splits the lanes of the slot it's installed in?
Such that a GTX 690 would have each GPU running at x8 (x8/x8)?

One thing I don't understand about the GTX 690 spec is that it says 48 lanes (16 per GPU and 16 shared).
Can anyone explain that in normal terms, by chance? Source

Does that mean a GTX 690 in an x16 slot on a Gen3 mobo with an Ivy Bridge CPU would be able to push a total of 64 GB/s [16 GB/s per GPU, ×2 because of PCIe 3.0, ×2 because of two GPUs] (assuming it magically had the power to do so)?

Or would it be 32 GB/s max, which is the same as GTX 680s in two different card slots with PCIe 3.0 (again assuming it could magically saturate the entire link)?
 
#9 ·
A dual-GPU card such as the 690 is essentially two GPUs on a single PCB. To let both GPU cores share a single PCIe slot, a switch chip such as a PLX PEX is used on the PCB.
Quote:
Rather than engineering its own solution, though, Nvidia looked to PLX for one of its PEX 874x switches. The PCIe 3.0-capable switches accommodate 48 lanes (that's 16 from each GPU and 16 to the bus interface) and operate at low latencies (down to 126 ns).
That quote describes the PEX chip. The chip connects 16 PCIe lanes from GPU1, 16 from GPU2, and 16 from the PCIe slot. Unlike a traditional PCIe switch arrangement where the lanes are split up, the PEX chip alternates between GPU1 and GPU2 at a very fast pace. Think of it as a train track switching back and forth between track 1 and track 2.

Each GPU core will get 16 lanes.
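Using the rough per-lane numbers from the first post, the arithmetic works out like this (a sketch under those assumptions, not official figures): each GPU gets a full x16 link to the PEX chip, but both GPUs still share the single x16 link from the switch to the CPU, so the card as a whole tops out at the slot's bandwidth.
Code:
PER_LANE_GBPS = {"2.0": 1.0, "3.0": 2.0}   # rough totals from post #1

gpu_link  = 16 * PER_LANE_GBPS["3.0"]  # 32 GB/s: each GPU to the PEX chip
slot_link = 16 * PER_LANE_GBPS["3.0"]  # 32 GB/s: the PEX chip to the CPU

# Each GPU can burst at 32 GB/s to the switch, but the two GPUs share
# one upstream x16 link, so the card's total to the CPU is 32, not 64.
print(gpu_link, slot_link)  # 32.0 32.0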
 
#10 ·
Sweet. So all multi-GPU cards have a switch on them, but this one is unique because it provides 16 lanes to each GPU with an extremely low switching time (126 ns for "switching tracks"), right?
So essentially two cards in two x16 slots are better than a dual-GPU card in one x16 slot, since both cards can push data at the same time rather than taking turns?
I'm sure little of this is noticeable in terms of real-world performance, though.

I think I read somewhere about older boards having a similar setup allowing two different card slots to run at x16 on Z68. Was that essentially doing the same thing, switching between cards rather than splitting to x8/x8?
 
#11 ·
Quote:
Originally Posted by gmpotu

Sweet. So all multi-GPU cards have a switch on them, but this one is unique because it provides 16 lanes to each GPU with an extremely low switching time (126 ns for "switching tracks"), right?
So essentially two cards in two x16 slots are better than a dual-GPU card in one x16 slot, since both cards can push data at the same time rather than taking turns?
I'm sure little of this is noticeable in terms of real-world performance, though.
I think I read somewhere about older boards having a similar setup allowing two different card slots to run at x16 on Z68. Was that essentially doing the same thing, switching between cards rather than splitting to x8/x8?
Some very high-end motherboards use a combination of a PEX chip and standard PCIe switches, allowing x16/x16 for two GPUs or x8/x8/x8/x8 for a four-GPU setup. x16/x16 is only beneficial if you are doing GPGPU work, running two dual-GPU cards such as the 690, or running a single GPU alongside an x16 RAID card.

Don't worry too much about this unless you are planning to do something extreme. :)
 
#12 ·
Quote:
Originally Posted by trumpet-205

Some very high-end motherboards use a combination of a PEX chip and standard PCIe switches, allowing x16/x16 for two GPUs or x8/x8/x8/x8 for a four-GPU setup. x16/x16 is only beneficial if you are doing GPGPU work, running two dual-GPU cards such as the 690, or running a single GPU alongside an x16 RAID card.
Don't worry too much about this unless you are planning to do something extreme. :)
I'm not personally planning to do anything this extreme because it's too expensive, but the technology interests me and I like learning how things work. Plus, what happens if I win the lottery and go on a PC-building spree?
 
#13 ·
Given the discussions in this post, can someone explain how this board
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131822
supports an x16/x16/x8/x8 configuration if the Ivy Bridge chip on LGA 1155 only supports 16 lanes?
#14 ·
Quote:
Originally Posted by gmpotu

Given the discussions in this post, can someone explain how this board
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131822
supports an x16/x16/x8/x8 configuration if the Ivy Bridge chip on LGA 1155 only supports 16 lanes?
That motherboard uses an expensive PCIe switch called a PLX chip. The chip alternates between sets of PCIe lanes instead of splitting them up. Imagine a train track switch alternating between track 1 and track 2, except very fast.
 
#15 ·
Gotcha, so it's just like the switch on the GTX 690 itself, and using a 690 in a single slot works even on an Ivy board that doesn't have the expensive switch on it.
 
#17 ·
Quote:
Originally Posted by azeem40

So you don't need an IB MOBO with an IB CPU to make use of PCIe 3.0?
Some P67/Z68/H61 boards will support PCIe 3.0 with an Ivy Bridge CPU; no board supports PCIe 3.0 with a Sandy Bridge CPU.
 
#18 ·
So if I've gotten the explanation right, then:

If I use an SLI/CF Z68/P67 board that is not specified to be PCIe 3.0 with an Ivy Bridge CPU, and run one 3.0 card in the first PCIe slot, the card will run at 3.0...?

Are the switches only relevant if a second card is added? Without a second card, are the switches irrelevant?
 
#19 ·
Quote:
Originally Posted by stbone

So if I've gotten the explanation right, then:
If I use an SLI/CF Z68/P67 board that is not specified to be PCIe 3.0 with an Ivy Bridge CPU, and run one 3.0 card in the first PCIe slot, the card will run at 3.0...?
Are the switches only relevant if a second card is added? Without a second card, are the switches irrelevant?
Quote:
* When Ivy Bridge is used with a P67/Z68 motherboard that does support SLI, the board must state that it supports PCI Express 3.0 and/or Gen3 in order to run at PCI Express 3.0. Otherwise it will run at 2.0.
To answer your question: no, it will run at 2.0.

The switch is the limiting factor on boards with SLI/CF support. Even with a single card, the lanes to the first slot still pass through the switch, and a 2.0 switch limits the link to 2.0.
 
#20 ·
So I'm planning on getting a Gigabyte G1 Sniper 2 board, which is [LGA1155, 2nd gen] [Z68 chipset]:

http://www.techpowerup.com/reviews/Gigabyte/G1_Sniper2/

an Intel Core i7 3770K, which is [Ivy Bridge] [supports PCIe 3.0]:

http://www.cpu-world.com/CPUs/Core_i7/Intel-Core%20i7-3770K.html

and an EVGA GeForce GTX 680 SC Signature 2, which does support PCIe 3.0; I would like to get two in SLI:

http://eu.evga.com/products/moreInfo.asp?pn=02G-P4-2687-KR&family=GeForce%20600%20Series%20Family&uc=EUR

Would I be able to run this config at PCIe 3.0, or should I only use one GTX 680?
Thank you, I've learnt a lot from this thread.

(Do I need a 3rd-gen LGA1155 board, or will the 2nd-gen board make any difference?)
 
#21 ·
Don't get any of Gigabyte's P67/Z68 boards. They lack support and are full of various bugs.
 
#23 ·
I have an Asus P8Z68-V PRO (not the Gen3 version) with an i7-3770 and am getting PCIe 3.0 x8 speed on both SLI/CFX slots using two RTX 3060 cards (GA106-300). This is confirmed both by GPU-Z and by the fact that I am getting full hashrate on ETH mining with the Nvidia 470.05 driver. So it is possible to get PCIe 3.0 on SLI/CFX boards, although the OP says otherwise.
 