
Extend SSD Life and Sustain Performance

post #1 of 4
Thread Starter 
Like many of you, I have been doing some serious research into my purchase of an SSD for my workstation. I chose to go with Intel w/o RAID, since reliability is the most important factor for me, followed by sustained performance.

With that in mind, let me share what I learned about the benefits of over-provisioning. First, over-provisioning a drive means making a fraction of its capacity unavailable to the OS. An example would be taking a 160GB drive and creating a single 144GB partition, leaving 16GB unpartitioned. With the per-GB cost of SSDs being so high, this seems like an insane thing to do. So why should one consider it? The short answer is cost of ownership, longevity, and sustained performance.
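
For anyone who wants to play with the numbers, the partition size for a given amount of over-provisioning is simple arithmetic. Here's a quick Python sketch using the 160GB drive from the example (the 20% line is just the same math applied again):

Code:
# Partition size for a given over-provisioning (OP) percentage.
drive_gb = 160
for op in (0.10, 0.20):
    partition_gb = drive_gb * (1 - op)
    reserved_gb = drive_gb * op
    print(f"{op:.0%} OP -> {partition_gb:.0f}GB partition, {reserved_gb:.0f}GB left unpartitioned")

# 10% OP -> 144GB partition, 16GB left unpartitioned
# 20% OP -> 128GB partition, 32GB left unpartitioned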

Cost of Ownership and Longevity
Yes, I know, this sounds like CFO mumbo jumbo. Nevertheless, it reflects the real cost of the drive. SSD longevity (or endurance) is measured in terabytes written; there is no penalty for reading data. So a better cost metric for SSDs is dollars per terabyte written.

Amazingly, over-provisioning your drive (at least for non-SandForce SSDs) dramatically reduces the cost per TB written. For example, an Intel X25-M 160GB SSD currently sells for about $400, or $2.50 per GB. The same drive is rated for 29TB of user data written when its full capacity is partitioned for use by the OS. This translates to a cost of $13.80 per TB written (TBW). However, if only 144GB is partitioned (10% over-provisioning), the endurance rating scales to 68TB, lowering the cost of ownership to $5.90/TBW.

If over-provisioning is increased to 20%, endurance increases to 104TB and the cost of ownership drops again, to $3.85/TBW. So the moral of the story is that by giving up 10% of your drive's capacity, the cost of ownership drops by more than a factor of two. Not bad.
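
Here is the same cost-of-ownership arithmetic in a few lines of Python, using the price and endurance ratings quoted above (any difference from my figures is just rounding):

Code:
# Cost per terabyte written (TBW) for the Intel X25-M 160GB example above.
price_usd = 400.0
endurance_tb_written = {     # rated user terabytes written at each OP level
    "0% OP (full capacity)": 29,
    "10% OP (144GB partition)": 68,
    "20% OP (128GB partition)": 104,
}

for config, tbw in endurance_tb_written.items():
    print(f"{config}: ${price_usd / tbw:.2f} per TB written")

# 0% OP (full capacity): $13.79 per TB written
# 10% OP (144GB partition): $5.88 per TB written
# 20% OP (128GB partition): $3.85 per TB written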

Sustained Performance
Sustained performance is something we all are concerned about. What's the point of spending big dollars on an SSD if after some period of time its performance drops to that of a magnetic drive---or worse?

Over-provisioning guards against performance degradation (at least on non-SandForce drives). Okay, that's the claim. But how can giving up precious gigabytes be such a cure-all for endurance and performance?

NAND Flash Memory: A Cost Compromise
Let's start with the basic architecture of a NAND memory cell. To get the cost of non-volatile memory down to a point where we could consider using it for mass storage, engineers had to find ways to pack as much storage into as few transistors as possible. Without going into great detail, trade-offs had to be made. One trade-off is that once written, cells must be erased before they can be written again; other forms of storage, including HDDs, do not have this limitation. Another important trade-off is that single cells cannot be individually erased. Instead, cells must be erased in whole blocks.

Dirty Cells
As long as a block has already been erased, writing can occur at the maximum speed of the flash memory. However, if the cells were previously written (dirty cells), then a whole lot of hand waving has to go on in order to complete the write transaction. It wouldn't be so bad if we only had to erase the target memory cells. No, we have to erase the entire block first---but wait---if there is data in the block we want to keep, a block erase will destroy that data as well. So we first have to find a new home in another block for the valuable data (hopefully there is an available block somewhere). That means reading the data we want to keep and writing it somewhere else, THEN erasing the target block, THEN finally writing the original data. Granted, manufacturers use RAM buffers as temporary homes for data that has received an eviction notice, but you get the idea.
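
To make the hand waving concrete, here is a toy Python model of what the controller has to do when we overwrite data sitting in a dirty block. Real NAND is written a page at a time and erased a block at a time, and real controllers remap pages rather than rewriting in place, so treat this strictly as an illustration of the relocate-then-erase-then-write dance, not anyone's actual firmware:

Code:
# Toy model of overwriting data in a dirty block: relocate the data we want
# to keep, erase the whole block, then finally perform the requested write.
PAGES_PER_BLOCK = 4

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK        # None = erased (clean)
    def is_clean(self):
        return all(p is None for p in self.pages)

def overwrite_page(blocks, blk, page, new_data):
    target = blocks[blk]
    if target.pages[page] is None:                   # clean page: fast path
        target.pages[page] = new_data
        return 1                                     # one physical write
    # Slow path: save the data we want to keep...
    keep = [(i, d) for i, d in enumerate(target.pages) if d is not None and i != page]
    # ...find a clean block to give it a new home (this is where spare area helps)...
    spare = next(b for b in blocks if b.is_clean())
    for i, d in keep:
        spare.pages[i] = d
    # ...erase the entire target block, THEN do the write we originally asked for.
    target.pages = [None] * PAGES_PER_BLOCK
    target.pages[page] = new_data
    return 1 + len(keep)                             # extra writes = write amplification

blocks = [Block(), Block()]
blocks[0].pages = ["A", "B", "C", None]              # block 0 is dirty
print(overwrite_page(blocks, 0, 1, "B2"))            # 3 physical writes for 1 requested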

Ideally, clean pre-erased blocks would always be available anytime a new write comes along to keep throughput at a maximum. This is why brand new drives benchmark so well---it's clear sailing. Over time, and without intervention, more and more blocks will become dirty until there are no clean blocks. Welcome to Slowville.

Cleaning House
The industry has adopted both standard and proprietary methods of avoiding this congestion. The standard method is known as TRIM. It's a new SATA command (one that HDDs don't need) that tells the SSD when specific data is no longer needed. This allows the SSD's controller to perform Garbage Collection (GC). GC is like a little housekeeping robot that goes around cleaning up dirty blocks while the drive is idle, thereby preparing them for fast writes. Additionally, most manufacturers incorporate some form of proprietary, autonomous Background Garbage Collection (BGC). Even in the absence of TRIM commands, BGC will ferret out dirty cells and do its best to keep the house in order. It is this feature that makes it possible to use RAID arrays (which block TRIM commands) and OSes that don't support TRIM. BGC performance without the support of TRIM is very manufacturer-dependent.
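
As a practical aside, Windows 7 will tell you whether it is actually sending TRIM. The check just shells out to the built-in fsutil command (run it from an elevated prompt); a DisableDeleteNotify value of 0 means TRIM is enabled:

Code:
# Ask Windows 7 whether TRIM (delete notifications) is enabled.
# DisableDeleteNotify = 0 means the OS will send TRIM to the SSD.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
)
print(result.stdout.strip())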

Equal Time for Equal Cells
Since each cell can only be written a finite number of times (about 5,000 cycles for 34nm flash), it is paramount that the controller spread writes evenly amongst all cells. This is called wear-leveling. The requirement for wear-leveling is just another poke in the eye for the controller: not only must it maintain clean blocks to write to, it also has to make sure that all cells see roughly the same number of writes over time.
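
A crude way to picture wear-leveling: the controller tracks how many times each block has been erased and prefers to recycle the least-worn one. This is only a toy illustration, not any particular vendor's algorithm:

Code:
# Toy wear-leveling: recycle the block with the fewest erase cycles so no
# block hits the ~5000-cycle limit long before the others.
erase_counts = {"block0": 120, "block1": 4870, "block2": 95, "block3": 2310}

def pick_block_to_recycle(counts, cycle_limit=5000):
    healthy = {b: n for b, n in counts.items() if n < cycle_limit}
    return min(healthy, key=healthy.get)             # least-worn block first

print(pick_block_to_recycle(erase_counts))           # block2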

Elbow Room
Now, imagine what is happening in the drive as it gets closer to being full of data. There is vanishing space for the controller to move data around for garbage collection and wear-leveling. As free space approaches zero, so does the drive's performance. Clean blocks are a luxury at this point. Write amplification (the ratio of actual cell writes to system-requested writes) gets higher and higher due to the limitations put on the controller. Wear-leveling begins to fail, sending some cells to an early death.
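
Since write amplification is just the ratio defined above, it is easy to put a number on it. The figures here are made up purely to show the calculation:

Code:
# Write amplification = physical NAND writes / writes the host requested.
host_writes_gb = 100.0        # hypothetical: what the OS asked to write
nand_writes_gb = 180.0        # hypothetical: what the flash actually absorbed
print(f"Write amplification: {nand_writes_gb / host_writes_gb:.1f}x")    # 1.8x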

A great analogy is rearranging furniture in a room. As long as a room only has a few pieces of furniture in it, it is easy to move things around. Now think about moving the furniture when there is no floor space left---not fun. What if, before we filled the room with furniture, we had declared 10% of the floor area off-limits to furniture? When we fill the remaining area with furniture, we still have a clear patch to move pieces through whenever we rearrange. This is what over-provisioning does for SSDs.

Therefore...
When one considers the constellation of events that must occur for writes to happen at like-new speed, over-provisioning starts to make sense. The 16GB of unpartitioned space I reserved in my example above isn't available to the OS, but it is available to the SSD's controller. Unknown to the OS, the SSD writes to the unpartitioned space as needed. Even when the partitioned space is full, there is still room for GC and wear-leveling to operate efficiently. Even RAID setups, which lose TRIM, can function well in this environment, assuming the controller has decent BGC.

I Thought My SSD Already Has Over-Provisioned Space!
Well, most do, but it is generally not enough. Manufacturers have to play the specmanship game. It's really our own fault when you think about it: if two manufacturers are selling drives with an advertised 120GB of available space, and one over-provisions with 8GB while the other over-provisions with 16GB, who has the market advantage? How many people question the built-in reserved capacity?

Important Tip
Make sure the drive is either factory-fresh or secure-erased before creating the OS partition. This ensures that the unpartitioned space is clean. Otherwise, you may compromise peak performance.

SandForce Disclaimer
From what I understand, SSDs based on SandForce controllers take dramatically different approaches to endurance, write amplification, and BGC compared to the rest of the industry. They also use a clever technique of compressing/decompressing compressible data on the fly. This is transparent to the OS, so the drive looks like a "normal" drive (data out = data in). Having said that, there are reports of serious performance degradation on well-exercised drives, as well as slower writes when the data is not compressible. I simply do not have enough knowledge about their inner workings to know whether what I stated above applies equally to SandForce-based SSDs. Time may prove that SandForce got it right and all future drives adopt their topology. Until then, I'm playing it safe and going with SSDs that have simpler functionality (no compression) until the verdict comes in.

My Disclaimer
Everything you read here comes from what I was able to distill from many hours scouring the web and reading white papers, data sheets, etc. Please do not take what I say at face value. Hopefully it sheds some light on issues you may not have been aware of and prompts you to investigate further.
Edited by RW\ - 2/11/11 at 3:31pm
post #2 of 4
Extremely interesting and useful. +1
However, if I have empty space in a partition, will the drive use it like unpartitioned space? I have 10GB free, and I am wondering if I should shrink my partitions. Also, any tips for eliminating writes (page file, internet cache, indexing, etc.)?
post #3 of 4
Thread Starter 
I'm going to be talking out of my ars a little here...

Having 10GB of space left still allows for garbage collection. Wear-leveling should also be OK if your drive is 100GB or less. However, I don't think it is the same as having 10GB that the OS cannot reach. In the latter case the SSD "knows" that 10GB is free instead of having to figure it out, which should help the wear-leveling algorithm. If this is true, it probably varies greatly from one controller to the next. So you may be fine just leaving at least 10% (15 to 20% preferred) free even if all of the drive is partitioned.

Just to be safe, I chose over-provisioning so I don't have to worry about it.

There are a number of places to find optimization info, but I will mention two important ones...

1) Keep your page file. MS agrees on this one. The nature of the page file is SSD-friendly, and some programs will not run if one is not present. To my knowledge, no one has documented performance gains from eliminating the page file.

2) Don't go crazy with benchmarking. It eats away at the endurance and dirties blocks. Sometimes it's necessary to BM (no pun intended), like when the drive is initially installed. Just secure-erase it before partitioning to get it sparkling clean.

Good luck and thanks for the bump!
post #4 of 4
Great read for sure, but I think the hidden provisioning is theft from the consumer. They should either set 20GB-40GB of provisioning by default OR clearly state it on the product page. I didn't notice it myself until I bought a Vertex 2 90GB and learned that 20% is recommended for SandForce drives, so I ended up with 67GB when all was said and done. When you're paying $1.50-2.00 per GB, every one counts. I am not asking for a free fix, but when I buy a 120GB SSD I want every single bit of space I bought, with no surprises. It's false advertising at this point, and that pisses me off.

However, once SSD prices drop and I can buy a 240GB SSD for $100, I won't be so pissy about it. I only need 120GB for a primary OS/game SSD, so losing 40GB won't be much of a downer. Price and size will be perfect for me by then. I doubt I will see that price for another 1-2 years though.


*edit*
BTW, does anyone know of a good site that lists how much provisioning each SSD has by default? Also, any news on the new G3 models from Intel? I thought they had a February release date.
Edited by Twist86 - 2/11/11 at 8:02pm
    