Originally Posted by TwoCables
Can you provide proof of this, please?
When I learned about SSDs, one of the things I learned as it pertains to this subject is that running a defragger on a Solid State Drive is a waste of time because its onboard controller has its own way of arranging and organizing the fragments in the cells, and this is usually in a very fragmented state for the sake of Wear-Leveling.
I'm not saying that you're wrong or anything; I'm just looking to maybe learn something. I mean, maybe running a defragger on a Solid State Drive can be good if it causes its onboard controller to go ahead and organize the fragments in the most optimal way according to how the controller has been programmed, again for the benefit of Wear-Leveling.
Good question! Proof (as in measurements, tests or citations), no; but theory, yes.
Basically, at the HDD (or SSD) level, there is no such thing as fragmentation. The whole disk is one giant file; this is what is copied by disk imaging software when you make an image of a physical drive. Now, you mentioned that data may be stored in a fragmented way internally on an SSD for wear leveling; however, I believe that this is handled transparently by the SSD itself, and the SSD is still seen and addressed by Windows as one contiguous, linearly numbered run of sectors. At the start of this big file is a partition table; when Windows starts, it reads that table, loads the driver(s) that address the single, huge file that is the disk's storage space, and translates it into volumes (partitions).
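To make the "one giant file" idea concrete, here's a minimal sketch (Python, which is obviously not what Windows does internally; the filename disk.img is just a made-up example) that reads the classic MBR partition table straight out of the first 512 bytes of a raw disk image. GPT-partitioned disks are laid out differently:

```python
import struct

# Minimal sketch, assuming a classic MBR-partitioned disk image.
with open("disk.img", "rb") as f:
    sector0 = f.read(512)

# The last two bytes of sector 0 are the MBR boot signature.
assert sector0[510:512] == b"\x55\xaa", "missing MBR boot signature"

# Four 16-byte partition entries start at offset 446.
for i in range(4):
    entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
    part_type = entry[4]                                 # partition type byte
    lba_start, sector_count = struct.unpack_from("<II", entry, 8)
    if part_type != 0:
        print(f"partition {i}: type 0x{part_type:02x}, "
              f"starts at sector {lba_start}, {sector_count} sectors long")
```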
Each of these partitions is seen individually as a large file (e.g. a 20 GB partition on your HDD or SSD is a single 20 GB file that holds the filesystem and all the files you have stored on it). At this point, Windows reads the header on the "file" that is the disk's partition and determines which driver (NTFS, FAT32, etc.) should be used to mount that partition; a rough sketch of that kind of check follows the list below. Here is where the filesystem comes in. From here on, I'll assume the filesystem is NTFS. The filesystem stores many things on the "file" that is the partition it is mounted to:
• A file table (the MFT, or Master File Table).
• A B-tree structure for each folder stored on the partition. These contain the names, sizes and dates of each file and folder stored in the folder that the B-tree structure belongs to.
• A cluster bitmap that indicates which areas of the partition "file" are free for writing new data (errors in this can cause data loss).
• A log file that can be used to correct filesystem corruption (chkdsk uses this).
• All the data for the files that you have stored on the partition (this data is just kind of strewn throughout the partition "file").
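As a very rough illustration of the "read the header, pick a driver" step (this is not how Windows actually mounts volumes, since each filesystem driver does its own recognition, and partition.img is a hypothetical raw dump of one partition), you can identify common filesystems just by peeking at the boot sector:

```python
# Rough heuristic sketch: look at the first sector of a partition "file"
# and check for a filesystem signature.
with open("partition.img", "rb") as f:
    boot = f.read(512)

if boot[3:11] == b"NTFS    ":        # OEM ID field of the NTFS boot sector
    print("looks like NTFS")
elif boot[82:90] == b"FAT32   ":     # BS_FilSysType field of a FAT32 boot sector
    print("looks like FAT32")
else:
    print("unknown / needs a smarter check")
```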
For each file that you have stored on the partition, the MFT has a record (or multiple records, if the file was ever extremely fragmented at any point in its lifetime) that says what the file's name is, what its parent folder's index is, and where that file's data is actually stored in the partition "file", as a list of runs called "extents". If a file is frequently modified and extended (a detailed logfile with frequent events would be a good example of this), Windows will have to create a new "extent" for that file as soon as simply extending the file in place would cause a collision with existing data. I've seen files with 50,000 fragments ("extents") and many (I didn't count, sorry) MFT records just to store all that "extent" data.
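Here's a toy sketch of why that happens. This is not NTFS's real allocator, just a made-up "next free cluster" model: when two log files grow in turn, neither can ever extend in place, so every append to either file opens a new extent.

```python
# Toy model (not real NTFS): a simplistic "next free cluster" allocator,
# purely to illustrate why alternating appends create new extents.

def simulate(appends=6):
    disk = []                               # cluster i holds the owning file's name
    extents = {"log_a": [], "log_b": []}    # each extent is [start_cluster, length]

    def append_cluster(name):
        cluster = len(disk)                 # next free cluster on the toy "disk"
        disk.append(name)
        runs = extents[name]
        if runs and runs[-1][0] + runs[-1][1] == cluster:
            runs[-1][1] += 1                # contiguous: grow the last extent
        else:
            runs.append([cluster, 1])       # blocked by other data: new extent

    # Two busy log files being extended in turn on the same volume.
    for _ in range(appends):
        append_cluster("log_a")
        append_cluster("log_b")

    for name, runs in extents.items():
        print(name, "->", len(runs), "extents:", runs)

simulate()
```

With six appends each, both toy files end up with six single-cluster extents; scale that behaviour up and you get the 50,000-fragment logfiles mentioned above.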
Here's where the theoretical performance decrease comes in for ANY storage medium (even an SSD, where the more significant disk delays are greatly reduced but not eliminated): reading such a file (I said "terribly fragmented" for a reason) requires the NTFS driver to do many I/O operations just to read the "file extent" data from the MFT. This data is usually strewn throughout the MFT (i.e. the file's first record could be at MFT index 67,579, the next record at index 2,890, the next at 49,628 and so on). Since Windows doesn't cache the whole MFT into memory, a lot of reads will have to be done to pull in all that randomly placed "file extent" data (each jump to another MFT record means another read operation). Regardless of the disk's access time, there is a time penalty for simply doing an I/O operation, because there is a "synchronization" time with the motherboard (Northbridge?) involved.

After that, if the file has lots of tiny fragments, the NTFS driver will have to do a lot of additional reads to actually read the file data. Each of these reads takes time, on top of the time taken by the NTFS (filesystem) driver to convert cluster numbers (clusters are usually 4 KB) into byte offsets for the partition driver; then the partition/volume driver has to convert that into a physical disk offset, and the disk driver has to convert that into sectors (I think) for the actual I/O request.
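To put rough numbers on that, here's a back-of-the-envelope sketch. The latencies are assumptions picked purely for illustration (about 12 ms per random access on an HDD, 0.1 ms on an SSD, a small fixed cost per I/O request, and a flat 20 ms to actually transfer the data), not measurements:

```python
# Back-of-the-envelope model only; all latencies below are illustrative assumptions.

def read_time_ms(extents, access_ms, io_overhead_ms=0.05, transfer_ms=20.0):
    """One I/O per extent, each paying a fixed request overhead plus the
    device's access time, plus a constant total data-transfer time."""
    return extents * (io_overhead_ms + access_ms) + transfer_ms

for label, access_ms in [("HDD (~12 ms access)", 12.0), ("SSD (~0.1 ms access)", 0.1)]:
    for extents in (1, 50_000):
        print(f"{label}, {extents:>6} extents: ~{read_time_ms(extents, access_ms):,.0f} ms")
```

Even with toy numbers, the shape of the result matches the argument: a 50,000-extent read is crippling on the HDD (minutes instead of milliseconds) and merely "slower than it should be" on the SSD (seconds instead of milliseconds).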
I hope that clarifies things; if any of that information collides with something you know, speak up! SSDs are an area that I don't know much about; however, I do know a lot about how the NTFS filesystem operates, quite a bit about how data is actually stored on a disk, and a little bit about the actual I/O path in Windows. I don't see how an SSD could avoid fragmentation that it never sees (as I said above, disks don't get fragmented, filesystems do; the SSD sees one contiguous file that the NTFS driver is playing around with). That said, I agree that defragmentation would rarely be needed on an SSD, and the performance hit from excessive fragmentation may go unnoticed by the user due to blazing-fast access times. Also, file layout (the physical placement of data on the platters), the biggest issue that defragmentation addresses on an HDD, is a moot point for an SSD.