Originally Posted by TwoCables
We're creating one hell of a useful thread here!
Actually, it's not just that data may be in a fragmented state for wear-leveling, but that it is. The better the drive's controller, the better this is implemented. Running a defragger on a Solid State Drive doesn't do anything with the fragments (it can't) because the drive's onboard controller doesn't allow it: it is going to organize the fragments in the way that it has been programmed to do. There's no software that can ever change that or make the controller do something other than what it's programmed to do. Therefore, the controller has to "lie" to the OS so that the OS tells the software that it is successfully defragging the drive. I mean, it really is constantly lying to the OS in order to continue doing whatever it has been programmed to do...
...I'm not sure, but I think that a Solid State Drive's onboard controller takes care of all of these things. It has to at least be assisting in a big way because it has to keep the data fragmented for wear-leveling and it always knows where everything is at any given time. So when data is accessed, the controller goes to work and then we get it...
That prompted me to research SSDs and wear leveling some more! This is what I found: the SSD has a map that it uses to translate LBAs (Logical Block Addresses; think of these as offsets into the "file" that is our physical disk) to the cells where it has internally stored the data. Using a scheme of some sort, the SSD will swap out a cell that is expending write cycles faster than the others. However, this all happens inside of the SSD, where the translation is done transparently. So, a single I/O operation to "read 4,194,304 bytes at LBA 127,693" will still have an I/O synchronization latency; internally, the SSD will have to do some very quick jockeying to return the data for that single read operation. A request to "read 4,096 bytes at LBA 127,693" will have the same I/O synchronization latency (and the SSD will probably only have to read one block).
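If it helps to picture that map, here's a toy sketch in Python. Everything in it (the class name ToyFTL, the page size, the "least-worn free page" heuristic) is my own invention for illustration, not how any real controller works:

```python
# A toy flash translation layer (FTL). Purely illustrative: the names,
# sizes, and wear-leveling heuristic are assumptions for this sketch.

PAGE_SIZE = 4096            # bytes per flash page (a common, assumed size)
NUM_PAGES = 1024            # pretend capacity of our toy drive

class ToyFTL:
    def __init__(self):
        self.l2p = {}                       # logical page -> physical page
        self.wear = [0] * NUM_PAGES         # write count per physical page
        self.free = set(range(NUM_PAGES))   # physical pages with no live data

    def write_page(self, lpage):
        # Wear leveling: steer every write to the least-worn free page so
        # no cell expends its write cycles faster than the rest.
        ppage = min(self.free, key=lambda p: self.wear[p])
        self.free.remove(ppage)
        if lpage in self.l2p:
            self.free.add(self.l2p[lpage])  # the stale copy becomes reusable
        self.l2p[lpage] = ppage
        self.wear[ppage] += 1

    def translate(self, lpage):
        # The OS only ever sees logical pages; this map hides the shuffle.
        return self.l2p[lpage]

ftl = ToyFTL()
for lpage in range(8):        # OS writes logical pages 0..7 in order
    ftl.write_page(lpage)
for lpage in (1, 3, 5):       # then rewrites a few of them
    ftl.write_page(lpage)
# A "sequential" read of logical pages 0..7 now lands on scattered
# physical pages, and the OS never knows:
print([ftl.translate(lpage) for lpage in range(8)])
```

Notice that after just a few rewrites, logically adjacent pages are physically scattered; that's the "lying to the OS" from the quote above, and it's completely harmless because the map is authoritative.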
Physically, your disk's partitions and data are stored in randomly assigned blocks on the SSD. However, that all stays inside of the SSD. The SSD has to behave like a regular disk, where the volume offsets and data offsets stay where Windows put them. The "fragmentation" inside the SSD doesn't really mean anything; who knows, they could even use a parallel storage scheme to increase throughput even more! However, NTFS filesystem data still fragments on an SSD just like it does on an HDD.
Thus, everything about file fragmentation still stands on an SSD (on a very fragmented file, the NTFS driver will still have to do a lot of short reads instead of a few large reads); it's just much, much less of a problem because SSDs are so fast that the delay will not even be noticeable in most cases. What doesn't stand on an SSD is disk layout. On an HDD, the beginning of the disk reads data ~2x faster than the end, and sequential read speeds are frequently up to 10x faster than random access speeds. On an SSD, that is not the case; thus, any defragmenter that simply removes fragments only from files that have a pile of fragments (without rearranging the entire disk) will be more than adequate for an SSD.
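To put rough numbers on why fragmentation hurts so much more on an HDD, here's a quick back-of-envelope sketch (the seek times and throughputs are ballpark figures I picked for illustration, not measurements of any real drive):

```python
# Back-of-envelope cost of fragmentation. All numbers are assumed,
# round figures for illustration, not benchmarks.

def read_time_s(file_mb, fragments, seek_ms, throughput_mb_s):
    # Each fragment costs one positioning delay (seek + rotational latency
    # on an HDD; mostly per-request overhead on an SSD), plus transfer time.
    return fragments * seek_ms / 1000 + file_mb / throughput_mb_s

FILE_MB = 100
for frags in (1, 1_000, 10_000):
    hdd = read_time_s(FILE_MB, frags, seek_ms=12,  throughput_mb_s=100)
    ssd = read_time_s(FILE_MB, frags, seek_ms=0.1, throughput_mb_s=500)
    print(f"{frags:>6} fragments: HDD ~{hdd:6.1f} s, SSD ~{ssd:5.2f} s")
```

With these assumed numbers, 10,000 fragments turn a ~1-second HDD read into roughly two minutes, while the same file on the SSD only slows from ~0.2 s to ~1.2 s. That's the sense in which fragmentation "still stands" on an SSD but barely matters.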
My conclusion is that while good defragmentation can cause major performance improvements on an HDD, it is rarely if ever needed on an SSD. OK, it's only needed on an SSD when you have a scenario like this: two programs are recording a concert with a laptop, so two stereo WAV files and an AVI file are being written simultaneously, and they get interleaved on the disk, with 100,000+ fragments for the AVI file and 50,000+ fragments for the two WAV files. I actually had this happen. Windows thought it was a good idea to start all three files a few KB from each other, and of course they collided, again and again!
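If you want to see how that interleaving happens, here's a toy simulation (the "hand out the next free block" allocator below is a made-up stand-in; NTFS's real allocation policy is more sophisticated, but the effect is similar when several files grow at once):

```python
# Toy simulation of why simultaneous appends interleave on disk.
# The next-free-block allocator is an assumed stand-in, not NTFS's policy.

next_free = 0
extents = {"concert.avi": [], "left.wav": [], "right.wav": []}

def append_block(name):
    global next_free
    extents[name].append(next_free)   # each append grabs the next free block
    next_free += 1

# Three writers appending in lockstep, as during a live recording:
for _ in range(6):
    for name in extents:
        append_block(name)

def count_fragments(blocks):
    # A new fragment starts wherever the block run is not contiguous.
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

for name, blocks in extents.items():
    print(f"{name}: blocks {blocks} -> {count_fragments(blocks)} fragments")
```

Every appended block of every file lands in its own fragment; scale three writers in lockstep up to hours of recording and you get fragment counts like the ones above.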