Originally Posted by qadri
So in the case of a SandForce or Marvell controller (considered to be good controllers) for example, then why not just simply use an SSD without a HDD (if you do not need TBs of space)?
Ideally, you want to keep as much free space on an SSD as possible. This is so that the space can be wear-leveled easily, and so that the controller can more appropriately group data together.
One thing that must be considered regarding SSDs is how the NAND flash is organized: data is written in "pages" (typically 4KB to 16KB), and pages are grouped into erase blocks (typically 512KB to 2MB). A page can only be written when it's empty, and erasing can only happen a whole block at a time. So if any data within a block is edited, the drive can end up re-writing far more than was actually changed.
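Just to put rough numbers on the worst case (these sizes are illustrative, matching the typical ranges above, not the specs of any particular drive):

```python
# Worst-case write amplification when a small update forces a full
# erase-block rewrite (illustrative sizes, not from any datasheet).
ERASE_BLOCK = 2 * 1024 * 1024   # 2MB erase block
UPDATE = 4 * 1024               # 4KB logical write from the OS

amplification = ERASE_BLOCK / UPDATE
print(amplification)  # 512.0 -> 4KB changed, up to 2MB physically rewritten
```

Real controllers avoid that worst case by redirecting writes to fresh pages, but the arithmetic shows why block geometry matters at all.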
If you have an SSD that is 75% full, it's very possible that the data is fragmented enough that you have no completely free blocks - just a bunch of partially used ones. This is where you get the slowdowns: as the SSD handles hundreds of small writes a second, it also has to read partially used blocks, merge the new data with the valid old data, and rewrite it all.
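Here's a toy sketch of that read-merge-rewrite cycle - the block layout and numbers are made up purely to illustrate the overhead, not how any real controller's firmware works:

```python
# Toy model: every block is partially used, so storing one new page
# forces the drive to relocate the victim block's valid data first.
# (Hypothetical geometry: 4 pages per block, no free blocks left.)
blocks = [{"valid": 3, "stale": 1} for _ in range(4)]

def write_one_page(blocks):
    # Pick the block with the most stale (reclaimable) pages as the victim.
    victim = max(blocks, key=lambda b: b["stale"])
    pages_read = victim["valid"]          # old valid data must be read back
    pages_written = victim["valid"] + 1   # old data rewritten + the new page
    victim["valid"] += 1                  # block now holds the merged data
    victim["stale"] = 0
    return pages_read, pages_written

reads, writes = write_one_page(blocks)
print(reads, writes)  # 3 pages read, 4 written, just to store 1 new page
```

That 4x physical-to-logical write ratio is exactly the kind of background work that shows up as stuttering on a nearly full drive.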
Another slowdown cause is the fact that SSDs don't actually delete data when files are deleted on the computer - they just remove the reference to it. When those pages need to have new data written to them, the drive has to first erase the whole block containing them, then write the new data. It takes a good deal of extra time to do this in the write path. TRIM takes care of it, provided you have the right OS and proper idle time. If not, you'll need a good controller. Indilinx-based drives are terrible in this respect. SF-1200 drives are ok. SF-2500 drives largely eliminate this issue.
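The benefit of TRIM boils down to *when* the erase happens. A simplified model (the cost numbers are hypothetical, and real FTLs are far more involved - the point is just the inline-erase penalty):

```python
# Toy model of why TRIM helps: without it, the drive only discovers a
# page is dead at write time and must erase inline; with TRIM the erase
# can happen earlier, during idle time. Costs are hypothetical units.
ERASE_COST = 10   # a block erase is much slower than a page write
WRITE_COST = 1

def write_latency(page_state):
    if page_state == "stale":          # no TRIM: erase lands in the write path
        return ERASE_COST + WRITE_COST
    return WRITE_COST                  # TRIMmed and pre-erased: just write

print(write_latency("stale"))   # 11 -> erase-before-write penalty
print(write_latency("erased"))  # 1  -> TRIM let the erase happen during idle
```

Same total work either way - TRIM just moves the expensive erase out of the latency-critical path.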
A good controller (such as the SF-2500) has all the tools necessary to keep performance up even when the drive is "dirty" with stale data and partially used blocks. You'll basically never see a slowdown with those SSDs, no matter the usage pattern.
Everything I said could be wrong, but that is my current understanding of how they work.