SSD Interface Comparison:

PCI Express vs SATA

(And not only one, but two reviews of the Plextor M6e!)



Remember the old days? Back when you had to use that big fat PATA cable to connect your drives? As HDD technology progressed, with RPMs and cache sizes increasing and controllers and firmware maturing, the industry finally replaced the old connection with SATA to give end users more speed and features. Well, we are at that time again!

As newer and newer SSDs hit the marketplace, they all seem to face the same issue: the SATA interface itself has become a bottleneck. Newer and more advanced SSD controllers can provide more lanes for NAND, and NAND flash I/O speeds are increasing, but their speed potential is sadly shackled by even the newest SATA 6Gb/s standard, let alone SATA 3Gb/s. The host interface is having trouble keeping up! How will manufacturers solve this issue? PCIe is the answer!

Today we are going to go into the similarities and differences between PCIe and SATA SSDs. I will be providing information on how SSDs perform now, their potential for the future, and what it all means for you, the end user. By the end of this article you should have a good understanding of how these interfaces work and which is best for you.

What is SATA?

If you've built PCs or had to add or replace a hard drive in your computer, chances are you have encountered this connector before. SATA is the current computer bus interface for connecting hard drives, SSDs, and optical drives to the rest of the computer. SATA replaced the older PATA, offering several advantages over the older interface: reduced cable size and cost (seven conductors instead of 40), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. Since its introduction there have been three main revisions, each doubling the bandwidth of the previous and adding advanced features while maintaining the same physical connector.

In order for a drive to communicate with the system, the SATA controller needs to have a mode set. Common SATA interconnect modes are IDE, AHCI, and RAID.

Here is what each does:

  • IDE - Old and slow; it is simply a compatibility mode between storage devices and the system. The device will run as an IDE/PATA drive.
  • AHCI - AHCI stands for Advanced Host Controller Interface. It is a system memory structure that lets computer hardware vendors exchange data between host system memory and attached storage devices. AHCI gives software developers and hardware designers a standard method for detecting, configuring, and programming SATA/AHCI adapters, and it makes Native Command Queuing (NCQ) along with hot-plugging (hot swapping) through SATA host controllers possible. NCQ is one of the most important AHCI features for SSDs: an SSD can process requests so much faster than an HDD that it could end up waiting for work, and NCQ allows the OS/controller to queue up to 32 outstanding requests at once.
  • RAID - RAID stands for redundant array of independent disks, originally redundant array of inexpensive disks. It is a means by which your PC uses multiple disks as if they were one, either to increase performance, safeguard against disk failures, or both. RAID mode has all the advantages of AHCI mode. There are four main factors of a RAID setup: striping, which spreads data across multiple drives, mirroring, which copies the data to more than one disk, space efficiency, which is how much of the total space is available to use, and fault tolerance, which is a measure of how well protected the RAID array is against disk failure.
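To make those four factors concrete, here is a minimal sketch of how usable capacity and fault tolerance fall out of a few common RAID levels. The function and figures are purely illustrative simplifications, not tied to any real controller:

```python
# Simplified illustration of RAID space efficiency and fault tolerance.
# Formulas are the textbook idealizations, not any vendor's implementation.

def raid_summary(level, drives, drive_gb):
    """Return (usable_gb, drives_that_can_fail) for a few common levels."""
    if level == 0:    # striping only: full capacity, no fault tolerance
        return drives * drive_gb, 0
    if level == 1:    # mirroring: capacity of one drive, the rest may fail
        return drive_gb, drives - 1
    if level == 5:    # striping + parity: one drive's worth of space lost
        return (drives - 1) * drive_gb, 1
    raise ValueError("unsupported level in this sketch")

for level in (0, 1, 5):
    usable, tolerance = raid_summary(level, drives=4, drive_gb=256)
    print(f"RAID {level}: {usable} GB usable, survives {tolerance} failure(s)")
```

With four 256GB drives this prints 1024GB/0 failures for RAID 0, 256GB/3 failures for RAID 1, and 768GB/1 failure for RAID 5, which is the striping-versus-mirroring trade-off described above.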

What is PCIe?

Peripheral Component Interconnect Express, or PCIe, is a physical interconnect for motherboard expansion. Normally this is the connector slot you plug your graphics card, network card, sound card, or, for storage purposes, a RAID card into. PCIe was designed to replace the older PCI, PCI-X, and AGP bus standards and to allow more flexibility for expansion. Improvements include higher maximum bandwidth, lower I/O pin count, smaller physical footprint, better performance scaling, more detailed error detection and reporting, and hot-plugging. The physical connector on the motherboard allows for up to 16 lanes of data transfer, and an x4 PCIe device will fit into anything from a PCIe x4 slot up to an x16 slot and still function. PCIe 1.0 allowed for 250MB/s per lane, PCIe 2.0 allows for 500MB/s per lane, and the newest PCIe 3.0 allows for 1GB/s per lane.

However, in real-world throughput PCIe 2.0 allows for around 400MB/s per lane due to its 8b/10b encoding scheme, while PCIe 3.0 allows for 985MB/s per lane due to its improved 128b/130b encoding scheme. Multiplying lane speed by the number of lanes gives us a theoretical maximum speed for that slot. Cards are generally backward compatible, and PCIe is full-duplex (data flows both ways at once, unlike SATA).
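Those per-lane figures follow directly from the signaling rate and the line encoding. Here is a quick sketch; these are theoretical encoding-limited maxima (the ~400MB/s real-world figure for PCIe 2.0 also reflects packet and protocol overhead not modelled here):

```python
# Theoretical payload bandwidth per PCIe lane after line-encoding overhead.
# GT/s is gigatransfers per second; one transfer carries one bit per lane.

def lane_mb_s(gigatransfers, payload_bits, encoded_bits):
    """MB/s of payload per lane after subtracting encoding overhead."""
    return gigatransfers * 1e9 * payload_bits / encoded_bits / 8 / 1e6

pcie1 = lane_mb_s(2.5, 8, 10)     # 8b/10b encoding
pcie2 = lane_mb_s(5.0, 8, 10)     # 8b/10b encoding
pcie3 = lane_mb_s(8.0, 128, 130)  # 128b/130b encoding

for name, per_lane in [("1.0", pcie1), ("2.0", pcie2), ("3.0", pcie3)]:
    print(f"PCIe {name}: {per_lane:.0f} MB/s per lane, "
          f"x16 slot: {per_lane * 16 / 1000:.2f} GB/s")
```

Note that PCIe 3.0 works out to ~985MB/s per lane and ~15.75GB/s for a full x16 slot, matching the figures used later in this article.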

So far PCIe seems to be the way to go for fast storage, but why hasn't it taken off sooner at a larger scale? Well, there is just one catch. Until recently, there wasn't much standardization on how PCIe SSDs would communicate with the host system. Manufacturers had to create their own methods, and they were more focused on performance than on standardization and compatibility. Now, however, the industry has been working on standards for PCIe SSDs, which is why newer drives are getting better by the minute.

What are SATA Express, NVMe, and M.2?

SATA Express - SATA Express, initially standardized in the SATA 3.2 specification, is a newer computer bus interface that supports either SATA or PCIe storage devices. The host connector is backward compatible with the standard SATA data connector, while also providing multiple PCI Express lanes as a pure PCI Express connection to the storage device. The physical connector allows up to two legacy SATA devices to be connected if a SATA Express device is not used. The industry is moving forward with SATA Express rather than a SATA 12Gb/s standard, having concluded that SATA 12Gb/s would require too many changes, cost more, and consume more power than desirable.

For example, 2 lanes of PCIe 3.0 offer 3.3x the performance of SATA 6Gb/s with only a 4% increase in power (2 PCIe 3.0 lanes with 128b/130b encoding yield 1969 MB/s of bandwidth). Those same 2 lanes would also deliver 1.6x the performance of a hypothetical SATA 12Gb/s while consuming less power.
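You can verify those numbers yourself from the link rates and encoding schemes of each interface. A back-of-the-envelope check:

```python
# Back-of-the-envelope check of the SATA Express comparison above:
# two PCIe 3.0 lanes vs a single SATA 6Gb/s link, accounting only
# for each interface's line encoding.

sata6_mb_s   = 6e9 * 8 / 10 / 8 / 1e6          # 8b/10b  -> 600 MB/s
pcie3x2_mb_s = 8e9 * 128 / 130 / 8 / 1e6 * 2   # 128b/130b, 2 lanes

print(f"SATA 6Gb/s:  {sata6_mb_s:.0f} MB/s")
print(f"PCIe 3.0 x2: {pcie3x2_mb_s:.0f} MB/s ({pcie3x2_mb_s / sata6_mb_s:.1f}x)")
```

This reproduces the 1969 MB/s and 3.3x figures quoted above.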

SATA Express is not widely implemented at this time, so I am not going to go into much more detail about it. However, keep in mind that for now SATA Express SSDs will normally be limited by chipset and implementation constraints in terms of speed when compared to the potential of true PCIe SSDs.

NVM Express - NVMe, or the Non-Volatile Memory Host Controller Interface Specification (NVMHCI), is a new interface specification for solid state drives. It is analogous to the SATA modes IDE, AHCI, and RAID, but designed specifically for PCIe SSDs (I believe it also covers SATA Express devices). As you know, most SSDs we use connect via SATA, but that interface was made for mechanical hard drives and lags behind because SSDs are designed more like DRAM. AHCI has the benefit of compatibility with legacy software, but NVMe is much more efficient and cuts out a lot of overhead as a result. NVMe can take better advantage of the low latency and parallelism of modern CPUs, platforms, and applications to improve performance. Multiple OSes already have NVMe support built in: Microsoft added native support to Windows 8.1 and Windows Server 2012 R2, and Linux has had it in its kernel since 2012. Now it's up to SSD manufacturers to design supporting drives that take advantage of this in their consumer products.

M.2 - Now that you know what PCIe, SATA, and the different interconnects are, let us go into the new M.2 form factor. I am mentioning the M.2 standard because the Plextor M6e we have for testing is simply an M.2 SSD connected to a PCIe adapter. The M.2 standard is an improved revision of the mSATA connector design. It allows for more manufacturing flexibility for not only SSDs but also Wi-Fi, Bluetooth, satellite navigation, near field communication (NFC), digital radio, Wireless Gigabit Alliance (WiGig), and wireless WAN (WWAN) modules. On the consumer end, SSDs especially benefit: an M.2 card can hold double the storage capacity of an equivalent mSATA device. Furthermore, with a smaller and more flexible physical specification together with more advanced features, M.2 is more suitable for solid-state storage applications in general. The form factor supports one SATA port at up to 6Gb/s or four PCIe 3.0 lanes at up to 4GB/s.

OK, PCIe allows for faster potential bandwidth, but why is the current SATA interface such a bottleneck? And what does this all mean for the future of SSDs?

The current SATA 6Gb/s specification only allows SSDs to reach a maximum of ~560MB/s. While that seems pretty fast in itself, it is still a bottleneck, and SATA 3Gb/s is even slower, capping SSDs at ~280MB/s. In SSD design there is a main controller that branches out to the NAND chips via lanes. Your information is stored on the NAND chips, and each chip has a rated speed at which it performs.

Early NAND ran at around 50MB/s per chip, while the NAND interface bandwidth for current Toggle-Mode Toshiba NAND and ONFI NAND is up to 400MB/s. The controller basically uses the NAND chips the way one would use multiple drives in a RAID 0 array: more chips connected via more lanes allows for more speed. Current NAND chips in consumer SSDs usually run at around 200MB/s. Coupled with the 4-8 chips on a typical SSD, there is potential for up to 1.6GB/s transfer speeds. You can see how easily SATA 6Gb/s has become a bottleneck now, can't you?
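The RAID-0-like scaling described above is easy to sketch. Chip counts and per-chip speeds here are the illustrative round numbers from the text, not measurements of any particular drive:

```python
# Aggregate NAND bandwidth scales with chip count, like drives in RAID 0.
# Compare against the practical ~560 MB/s ceiling of SATA 6Gb/s.

SATA3_REAL_MB_S = 560  # practical SATA 6Gb/s ceiling

def array_bw(chips, mb_s_per_chip):
    """Total NAND-side bandwidth for `chips` chips at a given per-chip speed."""
    return chips * mb_s_per_chip

for chips in (2, 4, 8):
    bw = array_bw(chips, 200)  # ~200 MB/s per current consumer NAND chip
    bottleneck = "SATA-limited" if bw > SATA3_REAL_MB_S else "NAND-limited"
    print(f"{chips} chips x 200 MB/s = {bw} MB/s -> {bottleneck}")
```

Even at four chips the NAND side already outruns the interface, and eight chips reach the 1.6GB/s potential mentioned above.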

Another issue with the SATA interface is that it has much more overhead than PCIe 3.0: 20% of its bandwidth is lost to 8b/10b encoding overhead. PCIe 2.0 suffers from the same 8b/10b encoding dilemma as SATA 3.0 (6Gb/s), but still allows for around 25% more bandwidth. PCIe 3.0, however, has been revised to use 128b/130b encoding, losing only ~1.54% of bandwidth to overhead.

PCIe technology enables interface speeds of up to 1GB/s per lane (PCIe 3.0), versus today's SATA technology at up to 0.6GB/s (SATA 3.0). Moreover, scaling bandwidth over SATA requires more SATA devices, while PCIe bandwidth can scale up to 16 lanes on a single device. Current chipsets such as Z87 hit a limit of about 1.6-1.7GB/s with SSDs in RAID 0 connected via SATA 6Gb/s. With PCIe, the throughput possibilities are much higher: at 1GB/s across 16 lanes, a single PCIe 3.0 device could reach 15.75GB/s. And then you can even RAID 0 multiple PCIe storage devices for even faster speeds, chipset permitting of course.

Furthermore, PCIe offers lower latency than a SATA connection due to how PCs are designed. Traditionally, the SATA controller is connected to a chipset, which is in turn connected to the CPU; PCIe lanes can link directly to the CPU. There is no need to bridge back and forth between intermediate technologies, allowing for faster data flow and processing.

The Plextor M6e:

Previously, OCZ had its RevoDrives, which broke the SATA 6Gb/s barrier by using the PCIe bus rather than SATA. However, they had many compatibility issues to overcome, were far more expensive per GB than their SATA counterparts, and had a separate RAID BIOS to deal with that actually increased system boot time.

Plextor has been hard at work trying to remedy these issues, and now it has released the PCIe M6e…and I have unlimited access to one!

Aesthetically, the drive is a plain, basic green PCB. I would personally prefer a black PCB, but that's just me. The front side holds all the main components, including the M.2 connector and the M.2 drive itself, while the back lies bare. The layout of some computer cases, especially smaller designs, calls for PCIe cards to meet the HHHL (Half-Height Half-Length) standard. Older PCIe SSDs tended to be full height and were unable to fit; Plextor's M6e, however, uses an HHHL PCIe card, making it suitable for nearly any case. Dimensions (L/W/H): 180.98mm x 22.39mm x 121.04mm. It weighs in at around 72 grams.

The back plate is a very nice matte metal finish with engraved logo and three small lights.

  • Red = Power on
  • Green = When a valid link is established
  • Yellow = Data transmission in progress

After I disassembled the M.2 drive from the adapter, I found its controller on the backside. The Plextor M6e uses a single Marvell 88SS9183 controller. It supports AES 256 encryption to keep your data safe if need be, and TRIM, SMART, and Native Command Queuing are supported as well to keep up performance and monitor health. The Marvell 88SS9183 is actually a dual-core chip comprised of two Marvell 88NV9145s. Fun fact about the 9145: by itself it has a PCIe Gen 2.0 x1 endpoint (5Gb/s), which means one PCIe lane per 9145 in the 88SS9183. The 9145 supports the NVMe driver on its own, so this device may as well, but that is not stated anywhere I have read. Even then, I have been unable to find an NVMe driver to test whether it would make a difference. Hopefully a later firmware update or driver release will let the drive take advantage of NVMe; I am quite interested to see whether it would change this drive's performance much.

There are 4 Toshiba 19nm Toggle NAND chips on each side of the PCB (8 x 32GB), totaling 256GB; after formatting, 238.47GB is user-allocated in the OS.

On the front side there is a single Nanya 512MB DDR3 RAM cache chip. (The 128GB model has 256MB of RAM, the 512GB model 1GB.)

As you can see, M.2 is a very compact form factor for SSD manufacturers to develop on. The chips align nicely on the PCB and I really like the simple, clean design of the drive overall…it's just missing a black PCB with gold traces!

Now, let's move on to some testing and analysis!

Specs List:

  • Interface: PCIe Gen 2.0 x2
  • Form Factor: Standardized PCIe Card with Half-Height/Half-Length
  • Dimension (L/W/H): 180.98mm x 22.39mm x 121.04mm
  • Weight: 72g Maximum
  • Controller: Marvell 88SS9183
  • NAND: Synchronous Toshiba 19nm Toggle flash
  • Buffer: Nanya 512MB DDR3 RAM cache (256GB model)
  • Encryption: AES 256
  • Command Set Support: TRIM, S.M.A.R.T., NCQ
  • OS Support: Windows 7 x86 / x64, Windows 8 x86 / x64, Windows Server 2008, Windows Server 2012, Linux series, Fedora, SUSE, Ubuntu
  • Warranty: 5 years Plextor's Warranty Service
  • Operating Temperature: 0°C ~ 70°C
  • Shock: 1500G (max), at 1ms half-sine
  • Vibration: 7 ~ 800Hz, 16.3Grms (non-operating)
  • Operating voltage: 3.3V
  • Support: Legacy and UEFI BIOS
  • MTBF: 2,400,000 hours

Test Rig 1 System Specs:

  • OS: Windows 7 Pro 64-bit SP1
  • OS drive: Plextor M5 Pro 128GB
  • Mobo: Asus P8Z68-V
  • CPU: i7-2600K @4.5GHz
  • RAM: 32GB G.Skill Ares 1866MHz
  • GFX: MSI GTX 660 Ti PE OC
  • Storage: WD10EZEX & Toshiba DT01ACA300
  • PSU: Corsair TX 650W
  • Cooler: Thermalright Silver Arrow
  • Sound: Asus Xonar DX
  • Case: Corsair 650D

Test Rig 2 System Specs:

  • OS: Windows 8.1 Pro
  • OS drive: SanDisk Extreme II 480GB
  • Mobo: ASRock Z87 Extreme 6
  • CPU: i5-4670K @4.0GHz
  • RAM: 8GB Samsung M379B5273DH0-YK0 Green 2133MHz
  • GFX: Intel® HD Graphics 4600
  • Storage: Intel 520, Crucial M4, OCZ Vertex 4 x 2 RAID 0
  • PSU: Seasonic Platinum-660 (SS-660XP2)
  • Cooler: Corsair H80
  • Sound: On-board Realtek ALC1150
  • Case: Nanoxia Deep Silence 1

Intel Rapid Storage Technology driver:


Drives in comparison:

  • SATA 6Gb/s: Samsung 840 Evo 250GB
  • PCIe: Plextor M6e 256GB

Benchmark Software:

  • Anvil Storage Utilities 1.1.0
  • AS SSD v1.7.4739.38088
  • Crystal Disk Mark v3.0.3b x64
  • ATTO v2.47


Benchmark result screenshots: Plextor M6e 256GB (Test Rig 1), Plextor M6e 256GB (Test Rig 2), Samsung 840 Evo 250GB


For this test I simply used the built-in stopwatch app on my Nexus 7 to time how long the system took to boot on Test Rig 1 with Windows 7. When I booted from the Samsung 840 Evo I had the Plextor M6e disconnected, and vice versa. I cloned the current OS over to both drives with Acronis True Image and was able to boot off both drives with no issue in less than 10 minutes. Times were recorded over 5 boots and the averages are displayed below. Note that the Windows 7 installation is over a year old with many programs and files on it, and that my motherboard's POST (Power On Self Test) takes around 22 seconds on my system.

Boot time differences:

Base system boot time w/ Plextor M5 Pro SATA 6Gb/s: 33.14 seconds avg.

Plextor M6e: 33.75 seconds avg.

Samsung 840 Evo 250GB SATA 6Gb/s: 33.19 seconds avg.

Samsung 840 Evo 250GB SATA 3Gb/s: 33.30 seconds avg.

As you can see, boot times are very similar. The difference between SATA and PCIe bandwidth seems not to matter at all when it comes to booting Windows 7 64-bit. You will learn more about boot speed differences and UEFI with Windows 8 fast boot in parsec's notes.

Analysis and Thoughts:

Unlike older PCIe drive implementations, the M6e uses a single M.2 SSD connected to a PCIe 2.0 x4 card adapter. The drive supports a dual BIOS for both legacy and UEFI systems. The best part is that there is no hit to boot time like with dual-controller, RAID-type PCIe SSD cards such as the OCZ RevoDrives (though I don't have access to one, so I cannot comment much more on those drives at this time). In my testing via legacy boot, there is a slight 1-2 second splash screen from the M6e, and it didn't seem to add to my POST time at all. The M6e appears in the BIOS/UEFI just like a SATA drive and uses the built-in Windows and Linux AHCI drivers, which worked great in my testing. I found the M6e to work fine with my Z68, H61, and A85X motherboards. When used as a boot drive on a system with a fast-boot-compliant UEFI (UEFI 2.0 and newer) and Windows 8, startup times are cut in half; our very own member parsec was able to verify this for me, as you will see later in his testing notes.

Installing and setting up the M6e is as simple as plug-n-play...the way it should be. No special drivers or extra cables to connect; just slide the card into an empty PCIe slot and you've got a new SSD ready to be used.

However, there was a hiccup when it came to getting the drive to run at its rated speeds. At first I installed the M6e into my secondary PCIe x8/x16 slot, but the slot defaulted to only x1 speed; Plextor told me this was due to my Z68 chipset and i7-2600K CPU and advised using a Z87 system to attain rated spec. Thinking for a bit, I remembered that on my motherboard I can manually set the last PCIe slot to x4 mode. After changing to that slot and rebooting, that seemed to do the trick! Sequential speeds were up from ~400MB/s to the rated 780MB/s read and 550MB/s write. But after a run of AS SSD I saw that at higher queue depths I was still missing performance: I was only achieving ~40,000 IOPS, far less than the rated 105,000 IOPS. This is not of much concern for me, nor should it be for most users, as a typical desktop workload only reaches around queue depth 3-5, while AS SSD's third test benchmarks 4K speeds at a queue depth of 64, much higher than typical consumers will ever reach. While this was disappointing, AMD users will be happy to hear that my A85X motherboard achieved rated speeds with no issue. I am not sure why older Intel chipsets have this issue or whether it can be fixed via a software or firmware update, but be warned: if you cannot manually set your PCIe slot speed, or if you need the best performance out of your M6e, a Z87 motherboard or newer is suggested.

After benchmarking for a while I wanted to run a secure erase on the drive. I had not yet performed one on a PCIe drive and didn't know if I could without special software. I remembered Plextor had a tool I could try; I installed it, but after some attempts I was unable to use the PlexTool "Secure Format" (Secure Erase) utility to do so. I then proceeded to use my handy-dandy Parted Magic USB! With the Plextor M6e I was able to successfully and easily boot up Parted Magic and run a secure erase on the drive as if it were a normal SATA drive.

My Conclusion of the Plextor M6e

Besides the initial hiccup in reaching rated speeds, the Plextor M6e has performed flawlessly throughout my usage. The M6e is marketed more towards gamers, and by reaching speeds unattainable by SATA 6Gb/s SSDs it has proven to best most SSDs out there in terms of sequential speed. With Plextor's rigorous testing and validation and 5-year warranty service, you won't have to worry much about your product failing, and if it does, Plextor has your back.

Now, should you go out and buy one today? If you need sequential performance and are limited to SATA 3Gb/s, sure, go ahead and get one. Game loads will be faster and you sure will have some more e-peen than your friends. I am not yet sure I would recommend this drive over a normal SATA 6Gb/s solution, though. While it performs better in sequential speeds than SATA 6Gb/s drives, the Samsung Evo surpassed it in 4K read and write speeds, and the M6e comes in at a hefty price tag, more than twice that of most SATA 6Gb/s drives. On top of that, I figure a four-lane controller is around the corner that would offer twice the performance for a similar price…and then there is the potential of PCIe 3.0 SSDs. Once more consumer drives hit the market, this one will be remembered as a stepping stone.

Parsec's User Experience and Testing:

When I first started this article, I thought I'd be set with my current rig for all testing purposes. I was soon proven wrong when I finished my first benchmark of the Plextor: it turned out that my Z68 board wouldn't let me test the maximum IOPS the M6e allows, nor show the decrease in boot times from UEFI fast booting versus a normal boot. So I shipped the drive off to my friend with a Z87 system so he could do some extra testing for me. Now, without further ado, here is what he has to say about the drive!

Hi, parsec here, some of you may know me from the OCN SSD forum. Sean asked me if I would perform some testing of the M6e on my Z87/Windows 8.1 PC, for a different perspective than his PC, and to determine if a PCIe 3.0 interface would enhance the M6e's performance. Being the SSD enthusiast that I am, I gladly accepted his offer.

M6e Installation:

In my time with the M6e, used as both an OS and storage drive, it functioned flawlessly. I did have one temporary issue when I initially inserted the M6e into a PCIe 3.0 x16 capable slot, with the lane set to PCIe 3.0 speed in the UEFI/BIOS. I could get into the UEFI interface fine, but the PC refused to boot, all I had was a blank screen. Changing the PCIe x16 lane to PCIe 2.0 speed in the UEFI did not change anything.

In the end I had to remove the M6e, set the PCIe x16 lane to PCIe 2.0 speed, insert it again, and the PC booted fine. I later tried setting the PCIe x16 lane back to PCIe 3.0 speed, and the PC booted with the M6e functioning normally. When installing an M6e, I suggest setting the PCIe slot's speed to PCIe 2.0, "Gen 2", or Auto to prevent any potential first-time installation issues. The M6e works fine on a PCIe 3.0 link once the drive has established a connection to the PC's motherboard.

I cannot blame the M6e for that behavior, as it is specified as a PCIe 2.0 device. I also suspect that due to the nature of the M6e's Marvell® 88SS9183 SSD controller, which uses the native AHCI driver (Microsoft AHCI driver in my PC) of the PC's OS, the M6e may have struggled with my Windows 8.1 installation, trying to use the new Windows 8 storahci driver, while my PC was in RAID mode using the Intel IRST driver. That is all speculation on my part. Establishing the initial link between the M6e and the motherboard might require a PCIe 2.0 (or lower) link speed.

M6e Usage:

Users of the M6e, and of the other PCIe SSDs using this new Marvell SSD controller that we will soon see, will need to work with two differences these drives have compared to the standard SATA SSDs we've been using. Most of what we are accustomed to in Windows with standard SATA SSDs is the same with the M6e:

  • Windows Explorer, Device Manager, and Disk Management all display the M6e's attributes in the same way as a SATA drive. For example, the Windows Explorer Properties of the M6e are identical to any other 256GB SSD (Capacity is shown as 237GB) and all options and settings work normally.
  • SMART self-test and error logging is supported. SMART data from the M6e can be displayed by the usual programs.
  • The M6e uses the standard AHCI driver of the PC's OS, and a Device Manager entry for a Standard SATA AHCI Controller will appear under the IDE ATA/ATAPI controllers entry. The Marvell® 88SS9183 flash controller and AHCI driver together act as the M6e's own private storage controller.
  • TRIM is supported; the trimcheck (version 0.5) program concluded that TRIM "seems" to be working (which is all it will say), and the Windows 8 SSD Optimize feature (manual TRIM) runs on the M6e fine.
  • The firmware of the M6e can be updated. Currently there is only one version of the firmware available to the public, 1.02. The firmware update program is a Windows executable file.
  • The M6e is compatible with UEFI booting. Another PCIe-style SSD claims to be the first PCIe SSD compatible with both Legacy and UEFI booting, but it requires a switchable dual BIOS to accomplish that. Not so the M6e.

PCIe SSD Differences and Requirements:

The similarities could not go on forever; they diverge because the M6e must have its own storage controller. On Intel chipset based motherboards, the Intel Rapid Storage Technology (IRST) RAID utility and driver do not recognize the M6e, and the same will occur on AMD SATA chipset boards or discrete RAID cards. This is to be expected, since the M6e is not connected to any of those SATA controllers.

Which brings us to one downside of PCIe SSDs like the M6e: no RAID capability with the motherboard's SATA chipset(s). This is obvious once we understand that the M6e contains its own storage controller, and we cannot create RAID arrays between storage drives on different controllers, such as an Intel Z87 PCH, a Marvell 9128, or a discrete RAID card.

Another difference users will encounter with PCIe SSDs like the M6e is providing them with an appropriate PCIe 2.0 x2 connection. While that may seem simple, it may not be, depending upon your motherboard's PCIe slot configuration and PCIe 2.0 resources.

For example, on both Sean's and my PCs that use an i7-2600K CPU (Z68 and Z77 boards), where the CPU also provides the PCIe 2.0 lanes, the M6e operates in PCIe 2.0 x1 mode regardless of what slot it is used in (i.e., x16), even when we used the integrated graphics on the CPU (no video card installed). The result in benchmarks is similar to what we see when using a SATA III SSD on a SATA II interface: sequential speeds are about one half of the PCIe x2 speeds, with slightly to moderately reduced 4K random and 4K high-queue-depth speeds, at least on my Z77 PC. Sean's results were a little different; he had higher 4K random speeds than I did over a PCIe 2.0 x1 connection. That is what Plextor said should happen.

This situation does not occur with Ivy Bridge and Haswell processors, and I doubt X79 systems would be an exception, given their much greater number of PCIe lanes. While we have had PCIe 3.0 support for two years, that does not mean all older boards even have PCIe 2.0 support; the Intel X58 platform from late 2008, for example, only supports PCIe 1.1 on some of its lanes. Carefully check your motherboard for the type of PCIe support and the number of PCIe lanes available before purchasing an SSD of this type.

M6e Operating Temperature

Some SSDs provide temperature data that can be displayed in various hardware monitoring utilities, such as HWiNFO64. The M6e apparently does not have a temperature monitoring capability. I used an infrared thermometer (Sperry Instruments IRT100) to check temperatures of the M6e, as a rough comparison to the temperature readings I get on several of the SSDs I use that have temperature data available.

The problems with this comparison are many. Temperature readings from standard 2.5" form factor SATA SSDs are really ambiguous, since we have no idea what device or area of the SSD is being measured. We don't know how the temperature data is acquired, such as a temperature sensing diode on the SSD's PC board, or internal to the SSD's controller. Given the M6e's construction, an M.2 type SSD mounted on a board with a PCIe interface, an IR thermometer has direct access to components that are internal and inaccessible in a 2.5" SATA SSD. My point is we cannot directly compare the temperatures I found with the IR thermometer to those we get from 2.5" SATA SSDs.

I measured these temperatures with the M6e used as an OS drive. Ambient temperature was ~78F/25C.


I could not measure the Marvell controller's temperature, as it is not exposed on the top of the circuit board. These temperatures are not at all excessive, and are great compared to reports of the temperatures of some mSATA SSDs. That is what I wanted to know.

M6e Operating System Boot Time

Sean and I wanted to test the OS boot time of the M6e for comparison to standard SATA III SSDs. A PC's "start up time", "boot time", "cold boot time", etc, are terms we read about and use, but what do they really mean? We tend to be sloppy in our usage of these terms. For example, booting a PC and booting an OS are really not identical. The time from the moment the power button is pressed to the time the POST process is complete does not involve the loading and executing of the OS from the OS drive. The loading of the OS and anything the PC user needs to be running when the OS's User Interface is displayed (ie, Windows Desktop) is really what "booting an OS" is. That is the definition of "boot time" I will be using.

That definition excludes a major variable in the start up time of a PC, the interval between the power button push and the end of POST. That time period is extremely variable between motherboards, the type and mode of firmware the board uses (BIOS or UEFI, Legacy or EFI), and the type and quantity of hardware connected to the motherboard (storage drives, optical drives, add-on cards, etc.)

The "Fast Boot" option that has been added to the UEFI/BIOS of many new motherboards, includes Fast and Ultra Fast settings. This option and settings only affect the POST time of the board, it does not change the OS boot time.

Sean's motherboard takes longer from power-button push to end of POST than the motherboards I used with the M6e, and he includes the total time from power-button push to desktop display in his boot times, so our "boot times" are not comparable.

So when do I start measuring OS boot time? From the single beep signaling the end of a successful POST. The boot time is complete when the Windows 8.1 desktop is displayed. No POST time, and no motherboard-specific diagnostic tests or features that occur prior to POST, are included. This isolates the performance of the OS drive as much as possible in the "boot time" measurements. My goal is to provide an OS boot time that is more comparable across PC types, including desktop, laptop, and tablet PCs.

My Z87 test PC has a "UEFI Booting" Windows 8.1 installation, which may also be used in "BIOS/Legacy Booting" mode. Windows 8 introduced the Microsoft Fast Start feature, which can be enabled and disabled easily. I measured boot times in all the possible configurations, both for the M6e and for my SanDisk Extreme II OS drive on this PC. The OS installation on the M6e was a clone of the SanDisk's installation, and it performed perfectly for the few days I used the M6e in that configuration.

Windows 8.1 Boot Time (seconds)

The M6e was just slightly faster than the SanDisk EX II, although my stopwatch measuring method is not exact. Subjectively the M6e seemed just a bit faster. My Samsung 840 Pro and 840 EVO are both in the same range as the M6e and SanDisk EX II, with about three seconds seeming to be the limit for these SSDs on my PC. The small differences in OS boot time are not at all significant. There are many variables involved in boot-up time, such as the number and type of Windows startup programs. As we know, OS boot time can vary from run to run. As usual, attempting to simplify a complex process like booting an OS is difficult if not impossible.

The M6e can announce its presence during POST, with an OROM-like display of its name and information about the drive. This disappears quickly and does not significantly add to the PC's POST time. Disabling the Option ROM display on my board removed this display during POST.

When UEFI booting, the M6e does not slow down POST time at all, in contrast to some if not all of the earlier PCIe interface type SSDs. When Legacy booting, I would see a very quick (fraction of a second) flash of the M6e's controller initializing, and nothing else.


My impressions of the M6e are all positive, and in the end I liked it more than I thought I would. Given the specs, I was disappointed that the increase in speed only really shows up in sequential reads, with a very modest increase in sequential write speed compared to the best current SATA III SSDs. The M6e does break 100,000 IOPS in the 4K high queue depth read and write tests, which I normally dismiss as inapplicable to typical PC usage. Those IOPS numbers leave the 840 Pro and EVO significantly behind, with the SanDisk EX II the closest at read IOPS just over 90,000. The M6e's 4K random read speed does not quite match that of the 840 Pro or SanDisk Extreme II, which puts it even farther behind the 840 EVO in that aspect.

Given that, the M6e at least equaled all of those SSDs in OS booting speed, and actually surpassed them by a small amount. Do we attribute that to the M6e's superior high queue depth performance, which must also exist at queue depths lower than those used by AS SSD? I tend to think so, which would also explain the SanDisk EX II's performance being equal to the two 840s when booting an OS. That's a simple explanation, but it fits the data.

If you can provide the M6e with the PCIe interface it requires, I would endorse it as a worthy choice. Other than the temporary problem I had installing the M6e in my Z87 PC, which was likely caused by the UEFI/BIOS setting I used for the PCIe slot, the M6e functioned perfectly. That is not faint praise, given the new controller used in the M6e and the fact that the firmware is provided by Plextor, not Marvell.

So what do these results mean in real world use?

Consumers - Currently, for typical consumer tasks and daily usage such as web browsing, emailing, word processing, etc., it won't make much of a difference whether you have a PCIe-based SSD or a SATA one. I have a bunch of SSDs here, and over the last two weeks I ran my OS on each of them in turn to see if I noticed any difference in my work as I wrote this article. Changing from one drive to the next made no difference in system responsiveness or perceivable performance. Besides the Plextor M6e 256GB and Samsung 840 EVO 250GB, I have a Samsung 830 128GB, Plextor M5 Pro 128GB, Crucial M4 64GB, and SanDisk Extreme II 480GB. None of them stood out from the others in daily tasks. I currently have the Samsung 830 on SATA 3Gb/s because I'm playing with RAID 0 on my 6Gb/s ports, and that SSD feels as fast as ever in daily use. However, in prosumer use and in gaming, PCIe vs SATA 6Gb/s sure will make a difference.

Prosumers - Let us start off with how PCIe SSDs affect professionals. PCIe shines in scaling and maximum throughput, as there is far more bandwidth available to it than on the SATA 6Gb/s interface.

With more and more 4K video cameras hitting the market, the demand for higher sustained speeds is increasing. Uncompressed 24fps 4K video (3840x2160, 12-bit RGB color) requires around 900MB/s of bandwidth. And even if you are dealing with compressed 4K formats, videographers typically work with multiple streams of video at a time, which can easily surpass the SATA 6Gb/s barrier. Sure, you could run a RAID 0 array to remedy this, but even on the newest Z87 chipsets you will saturate the chipset's bandwidth with 3-4 SATA 6Gb/s SSDs at around 1.6GB/s. With PCIe you can have far more bandwidth available to you via a PCIe slot, up to about 10 times that of the SATA chipset!
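That ~900MB/s figure is easy to sanity-check with a little arithmetic. Here is a minimal sketch using exactly the numbers quoted above (3840x2160, 12 bits per channel, 3 RGB channels, 24 frames per second), taking 1MB as 10^6 bytes:

```python
# Back-of-the-envelope check of the ~900 MB/s figure for uncompressed
# 4K video. These are the numbers from the text above, not measurements.

def uncompressed_video_bandwidth(width, height, bits_per_channel, channels, fps):
    """Required bandwidth in MB/s (1 MB = 10**6 bytes)."""
    bits_per_frame = width * height * bits_per_channel * channels
    bytes_per_second = bits_per_frame / 8 * fps
    return bytes_per_second / 1e6

bw = uncompressed_video_bandwidth(3840, 2160, 12, 3, 24)
print(f"{bw:.0f} MB/s")  # ~896 MB/s, i.e. roughly 900 MB/s
```

At roughly 896MB/s, a single uncompressed stream already exceeds what a SATA 6Gb/s link can deliver after overhead (about 550-600MB/s in practice), which is the point being made above.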

Personally, my current favorite advantage of PCIe SSDs is that they take up a PCIe slot while leaving more SATA ports free. Without this PCIe SSD in my system, I was unable to have a separate RAID 0 array running on my two SATA 6Gb/s ports and keep the OS on another SSD without hitting the SATA 3Gb/s bottleneck. Now the OS can be on the PCIe SSD running at full speed, while I have my separate RAID 0 array for photo editing at 1GB/s.

Gamers - Gamers always need a faster response out of their systems. Whether it is more FPS or faster map loading, gamers do not want to wait on their system when they are in a game. They want cutting-edge performance. With current big titles such as Titanfall or Call of Duty: Ghosts reaching sizes of 30-50GB, and flight simulators reaching hundreds of gigabytes, fast, low-latency storage is a must. PCIe SSDs allow much faster, lower-latency map and texture loading. If you are a gamer limited to SATA 3 or 6Gb/s speeds, then a PCIe drive would be a good idea for you.

Enterprise - Now, when it comes to workstation and server usage, PCIe SSDs have more to offer in terms of performance than just booting an OS or loading games quickly. In workstation and server applications there will usually be far more pending I/O requests than on a consumer system. The number of pending I/O requests is called queue depth. In consumer usage one would usually not exceed a queue depth of 3-5. In workstation and server use, the queue depth may be in the hundreds! The ability to deliver constant high performance at that scale is where SSDs shine the most. As you saw in the high queue depth benchmarks, the drives performed very well. Workloads that might take a few hundred HDDs to serve can be handled by a single SSD.
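A rough way to see why queue depth matters so much is Little's law: throughput is roughly the number of outstanding requests divided by per-request latency. The sketch below assumes a hypothetical 100 microsecond service time per 4K request and a 100,000 IOPS drive ceiling; neither number is measured from the drives in this article, they are just illustrative values:

```python
# Illustration (not a benchmark) of why queue depth matters, via
# Little's law: throughput = concurrency / latency. The 100 us latency
# and 100,000 IOPS cap are assumed, typical-for-SSD values.

def iops(queue_depth, latency_s, max_iops=None):
    """Ideal IOPS at a given queue depth, optionally capped by the drive."""
    ideal = queue_depth / latency_s
    return min(ideal, max_iops) if max_iops else ideal

LATENCY = 100e-6  # assumed 100 microsecond service time per 4K request
for qd in (1, 4, 32, 128):
    print(f"QD {qd:>3}: {iops(qd, LATENCY, max_iops=100_000):,.0f} IOPS")
```

At QD 1 this hypothetical drive delivers only 10,000 IOPS; it takes deep queues to reach the 100,000 IOPS ceiling, which is why server workloads with hundreds of outstanding requests benefit so much more from fast flash than a lightly loaded desktop does.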

Consumer SSDs on SATA 6Gb/s connections are hitting a wall around 100,000 IOPS. That seems like a lot, doesn't it? Well, as always, more is better. With PCIe, Fusion-io has created a monster of a drive called the ioDrive Octal, and it is able to hit over 1 million read and write IOPS on a PCIe 2.0 x16 connection. Now with PCIe 3.0, SSDs of the future will have far more bandwidth to take advantage of. Drives reaching these speeds are perfect for high-demand, low-latency business use where you can have literally hundreds of users connect to a data store for their virtual desktops, an application, or even just a Word document.

What interface is right for you?

Depending on what you do with your PC, your workload will dictate what makes sense for you as a user. For the average Joe, either interface should suit you fine. For most PC enthusiasts, gamers, and professionals with older chipsets without native SATA 6Gb/s, you will have a much better experience with a PCIe SSD than with a SATA SSD connected via a SATA 3Gb/s port. And soon, once faster PCIe SSDs enter the consumer marketplace at a competitive price point, PCIe SSDs will be the go-to option for those demanding the best performance. Finally, for enterprise usage, PCIe drives should be the go-to option in most scenarios for best performance. Personally, I can't wait to see a PCIe SSD that fully saturates PCIe 3.0 x16, as well as more PCIe drives hitting 2GB/s+!

Conclusions & Final Thoughts:

In the end, PCIe drives like the Plextor M6e are showing that the SATA 6Gb/s interface is indeed a bottleneck for modern SSD controllers. Current SSDs are able to pass the SATA 6Gb/s barrier with no issue, but are held back. Luckily the industry is already trying to solve this issue and has many great standards and improvements in development. Right now is an exciting time in the SSD storage world. A lot of new advancements are coming to the market for us consumers and enterprise users to take advantage of. The future of SSDs looks very promising, and right now we are only seeing the tip of the iceberg of what's to come.

Thanks for reading; I hope this article helped you learn something new. I am always up for learning myself, so if you know something I didn't mention or don't know, please feel free to post a reply and let me know!

Update: Plextor just saw the article and sent me this for you guys!

As the leading developer and manufacturer of high performance storage devices, we will be launching a weekly SSD giveaway campaign. We will be giving away one to two SSDs every week to a lucky winner who simply likes our Facebook page (PlextorAmericas). Prizes come from the newest M6 lineup, which features the M6S, the M6M, the M6e PCI-e, and later this year the M6P. You'll have several chances to win with just one click. Learn more at

"Good luck!"

Plextor will also be providing a promotional offer of up to $30 off certain capacities of the M6e PCI-e SSD for a limited time. Use promo code: clockm6e at NewEgg USA.

Premium Member
543 Posts
Here, have ALL of my imaginary +rep

Very well written piece, infinitely useful.

5,751 Posts
Thank you for taking the time to do this

invisible +rep

Can't help but think that if they want PCIe SSDs to take off, they have to beat SATA SSDs flat on speed. Otherwise most will pass.

5,436 Posts
Very nice, thanks for the excellent review. Basically confirms my suspicions, that unless you are doing a lot of sequential reads and writes, PCI-E and SATA Express are essentially useless at the moment. Need new controllers that have much higher random reads and writes.

Premium Member
8,166 Posts
Great read Sean, as always!

Fusion is Future
1,393 Posts
Very nice read ;3

Frog Blast The Vent Core
6,118 Posts
What happens if I put my fancy shiny graphics card into the x16 slot, expecting it to use all 16 lanes, and then also install one of these into an x4 slot?

And as these proliferate, are we going to be in a situation where the PCIe lane restrictions of platforms like Z87 and Z97 are going to be a downside compared to things like x79 and X99?

What about PLX chips? How will they impact lane availability for GPUs alongside PCIe storage?

5,436 Posts
Originally Posted by Mand12 View Post

What happens if I put my fancy shiny graphics card into the x16 slot, expecting it to use all 16 lanes, and then also install one of these into an x4 slot?

And as these proliferate, are we going to be in a situation where the PCIe lane restrictions of platforms like Z87 and Z97 are going to be a downside compared to things like x79 and X99?

What about PLX chips? How will they impact lane availability for GPUs alongside PCIe storage?
Your GPU will be bumped down to an 8x connection, assuming the 4x slot is connected directly to the CPU and that you're using a platform with only 16 lanes, like LGA1150 or FM2+.

These products will initially be very niche, mostly reserved for those already buying enthusiast/prosumer parts (LGA2011). Down the line, as PCIe storage becomes more mainstream (which won't happen for at least 1-2 years), Intel has three choices: keep its mainstream platform at 16 lanes (as far as I know, Skylake will be 16 lanes, and thus Skylake's successor likely will be as well, meaning 16 lanes for mainstream Intel until at least the end of 2017) to push people towards its enthusiast platform; increase the number of lanes available from the southbridge (if they don't move to a complete SoC); or increase the number of lanes direct from the CPU.

On a mainstream Intel board with a PLX chip, the PLX chip provides 32 lanes of connectivity from 16. However, there are still only 16 lanes for communication with the CPU. Assuming you have two GPUs and a 4x PCI-E SSD, the first GPU would run at 16x, the second at 8x, and the SSD at 4x. If GPU traffic to the CPU does not exceed the equivalent of 12 lanes, the SSD won't be affected. If it does, and the SSD is attempting to use its full 4x of traffic, you'll run into a bottleneck as the GPUs and SSD now have to share bandwidth.
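The lane-sharing arithmetic in that last paragraph can be sketched as follows. The ~985MB/s per-lane figure is the commonly cited effective PCIe 3.0 throughput per lane; the device demands are made-up illustrative values, not measurements of any real card:

```python
# Sketch of the upstream-bandwidth math behind a PLX-style switch:
# 32 downstream lanes still funnel through 16 lanes to the CPU.

PCIE3_MB_S_PER_LANE = 984.6  # approximate PCIe 3.0 effective throughput per lane

def upstream_headroom_mb_s(demands_in_lanes, upstream_lanes=16):
    """Spare bandwidth (MB/s) on the switch's upstream link.

    demands_in_lanes: each device's active traffic, expressed as the
    equivalent number of fully used lanes. A negative result means the
    devices are contending for the upstream link.
    """
    return (upstream_lanes - sum(demands_in_lanes)) * PCIE3_MB_S_PER_LANE

# Two GPUs each pushing the equivalent of 8 lanes, plus an x4 SSD:
print(upstream_headroom_mb_s([8, 8, 4]))  # negative: 16 upstream lanes oversubscribed
# Lighter GPU traffic (5 lanes' worth each) leaves headroom for the SSD:
print(upstream_headroom_mb_s([5, 5, 4]))  # positive: no bottleneck
```

The point of the sketch is that a PLX chip adds connectivity, not bandwidth: the switch only helps as long as the devices' simultaneous traffic fits within the 16 upstream lanes.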

1,117 Posts
Great article man

1,994 Posts
another perfect article by Master Sean. You own, dude.

Storage Nut
21,146 Posts
Discussion Starter · #18 ·
Thanks for the positive feedback everyone! Glad you like it!

1,475 Posts
Thanks for sharing this plethora of information. It couldn't have come at a better time, because my Samsung 840 EVO 500gb SSD literally just arrived an hour before I saw this article.

2,815 Posts
Great article. I've been looking at an upgrade to a PCI-E SSD and wonder what your thoughts are on Asus' RAIDR Express PCI-E SSD RAID card?