Originally Posted by Techie007
To me, this borders on fanaticism and demonstrates a complete lack of knowledge as to what is actually possible. Samsung is already doing their best trick with RAPID. This is not a matter of being outclassed; it is a matter of understanding how data transfers are handled between storage devices and software. If Samsung's programmers were so great, we would not have had this issue in the first place, let alone a second time with the October fix.
I know the many stages the data goes through to get from the SSD to where it can be used by programs, and I will repeat: there is no way the SSD can pretend to "read faster" than it really does. You can insert caching mechanisms like RAPID in the middle, but in the end the SSD will have to retrieve data eventually, because everything doesn't fit in RAM, and at that point the SSD's real performance will show.
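The "caching can only hide so much" point can be sketched in a few lines. This is a purely hypothetical illustration (invented names, a FIFO eviction policy, and a fake latency), not RAPID's actual implementation; it just shows that a RAM cache speeds up hits while misses still pay the device's real read cost:

```python
import time

# Hypothetical sketch: a tiny RAM cache in front of slow storage.
# Names, capacity, and timings are invented for illustration only.

CACHE_CAPACITY = 2          # pretend only 2 blocks fit in RAM
cache = {}                  # block_id -> data

def slow_storage_read(block_id):
    """Simulate the SSD's real (slower) read path."""
    time.sleep(0.01)        # the device's true latency shows up here
    return f"data-{block_id}"

def cached_read(block_id):
    if block_id in cache:                # cache hit: fast, hides SSD speed
        return cache[block_id]
    data = slow_storage_read(block_id)   # cache miss: real speed shows
    if len(cache) >= CACHE_CAPACITY:
        cache.pop(next(iter(cache)))     # evict oldest entry (simple FIFO)
    cache[block_id] = data
    return data
```

Once the working set grows past the cache's capacity, misses dominate, and the real device performance is what you measure.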
Once the SSD has fulfilled a read request, there's no going back: either the request was fulfilled successfully (no error code) and in full (all the data that was requested), or it was not. From a program's standpoint, fulfillment of the request comes with the data. There's no pretending to a benchmark program (especially ones like SSD Read Speed Tester and File Bench that go through the filesystem) that reads were fulfilled sooner than they actually were. In other words, when a read request returns, it returns with all the data; the SSD can't say later, "Oh, I've got more/revised data for you now." Besides actually improving their algorithms or overclocking the SSD (limited by its designed thermal capabilities), the only corner they could cut is error correction. And if even a little corrupted data is returned, then depending on when, your computer:
- Will BSOD instead of booting
- Won't be able to successfully boot to the desktop
- Will crash programs on a regular basis, particularly when loading them
- Will frequently report filesystem errors or missing files
So for these reasons, they can't bypass error correction. And error correction is rather black and white: either the chunk just read (internally, by the SSD) passes its checksum, or it fails and must be read again. That leaves one possibility: the new read algorithm has a lower error rate and/or a better way of re-evaluating data so it passes checksum quickly.
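The pass/fail logic described above can be sketched roughly. To be clear, this is illustrative only: real SSD controllers use hardware ECC (e.g. BCH or LDPC codes) that can also *correct* small errors, not just detect them; `zlib.crc32` here merely stands in for "does the chunk pass its checksum":

```python
import zlib

MAX_RETRIES = 3  # invented limit for illustration

def read_chunk_verified(read_raw, expected_crc):
    """Re-read a chunk until its checksum matches, or give up.

    read_raw: a function returning the raw bytes of one chunk.
    expected_crc: the checksum the chunk is supposed to have.
    """
    for attempt in range(MAX_RETRIES):
        chunk = read_raw()
        if zlib.crc32(chunk) == expected_crc:  # passes checksum: done
            return chunk
        # fails checksum: must be read again, which is where re-read
        # time (and the slowdown being discussed) comes from
    raise IOError("uncorrectable read error")
```

A controller that skipped this check could return corrupt data quickly, with exactly the consequences listed above.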
While I'm fully with you guys that Samsung did a horrible job at customer service, I get the feeling from some of you that no matter what Samsung did to fix the 840 EVO, you would not be pacified. BUT: economically, I believe Samsung made the best decisions. Coming clean publicly or giving us more feedback about the issue, while appealing to technically inclined customers, would widely spread word that there is something wrong with their SSDs. As things stand now, most people are on the old initial firmware, blissfully unaware of the slower performance they are experiencing, and meanwhile tons of positive reviews of the SSDs pour into places like Amazon daily. Those of you who have been deeply offended are a very tiny part of their customer base.
You are under the mistaken impression that your software (and any other) can't be fooled by a few lines of code.
Check for programs X,Y,Z. Do this.
Easy to do, and it's a level to which Samsung has fallen many times before. No benchmark program, including your utility, does anything with the data; it merely notes that the data was read. Therefore no error correction, nor any bypassing of it, is needed.
My theory also explains the new BSODs, by the way. Your position is based on a Samsung that wants to do its best for its customers, but they have blatantly shown time and again that that is not what they're here to do.
Step back from your assumption that it's unlikely for Samsung to do such dastardly deeds, and then maybe you too will see how all (or at least some) of your support for Samsung is misplaced.
You and others here should not be defending Samsung; that is their job. But they're not doing it, and that is the first clue that their goals are not as altruistic as you and a few others here think they are.
And because of this, I'm sure we'll be discussing this same (or a related) issue again a few months from now, because fix #3 will be in the wild for an issue that originated in the middle of 2013.