Overclock.net - An Overclocking Community - View Single Post - What are good value SSDs in 2019? 1TB / Phison E12 + Toshiba

post #87 - 07-23-2019, 08:48 PM - Thread Starter
JackCY
By all means. I wrote into the OP the expected usable capacities for the 1TB-advertised and 960GB-advertised E12 drives from what users reported, and I calculated the overprovisioning size; it's pretty simple really. Manufacturers love to advertise sizes in GB with each k, M, G, T being a multiple of 1000, not 1024.
So a 1TB-advertised drive shows up as 1024 * 10^9 B ≈ 953.67 GiB. Yes, they advertise 1 TB, but the drive exposes 1024 * 10^9 B, not 1000 * 10^9 B, which would show up as 931.32 GiB.
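The decimal-vs-binary conversion above is easy to check; here is a minimal sketch of the arithmetic (the helper name is mine, the byte counts are the ones from the drives discussed above):

```python
def reported_gib(raw_bytes):
    """Convert a raw byte count to the GiB figure an OS reports
    (Windows labels it "GB" but divides by 1024^3)."""
    return raw_bytes / 1024**3

# A "1TB" E12 drive that actually exposes 1024 * 10^9 bytes:
print(round(reported_gib(1024 * 10**9), 2))  # ~953.67

# A drive exposing exactly 1000 * 10^9 bytes (a strict decimal terabyte):
print(round(reported_gib(1000 * 10**9), 2))  # ~931.32
```

So the roughly 22 GiB gap between 953.67 and 931.32 is where the extra overprovisioning of drives like the BPX Pro comes from.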

Quote: Originally Posted by Sean Webster View Post
Do you have any ideas on how this could be achieved? It's a great idea, but this is kinda tough to do, but I was thinking of building an iometer script to show how long it takes for the SLC buffer to recover, but with so many cache sizes and flushing patterns, this may take a while to develop. I would wind up doing interval'd time checks, but I would also be constantly refilling the cache and the time to carry out the test may be longer than it is worth - I can easily see it going for hours. Gotta make something I can easily fit into my already day-long testing. This is also something that challenges testing. You need to ensure that cache is clean before starting the next benchmark.
Yes, those concerns sound valid. The write cache can be seen from those drive fill-ups, and from how different drives, especially SLC vs MLC vs TLC/QLC, drop in write speed on long writes depending on the use of the cache and its size.
There were also differences between firmware versions of the E12 drives in how well the cache recovered, and thus in how the write speed spiked back up.

I can only say how I would do something like write caching: keep as large a portion of RAM as possible as a write buffer and use a dynamic SLC cache. This all seems to differ from controller to controller, depending on whether it has RAM and how it caches on the NAND.
On a continuous write this should show up as the small RAM buffer giving extreme write speed for a very, very short time (maxing out the interface speed while writing into the 1-2GB of RAM found on some drives), followed by the SLC cache, which lasts longer and lets TLC/QLC drives offer decent write speeds for small to large writes depending on SLC cache size. After that, the TLC/QLC drives drop in performance as the cache is exhausted, and may occasionally recover as the cache is asynchronously flushed by the controller.
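That tiered behavior can be sketched as a toy model; all the numbers below are illustrative placeholders (2GB RAM buffer, 24GB SLC cache, speeds loosely based on the E12 figures quoted further down), not measured values for any particular drive:

```python
def write_speed_gbps(bytes_written, ram_gb=2, slc_gb=24,
                     iface_speed=3.5, slc_speed=3.0, native_speed=1.0):
    """Toy model of a tiered write path: a tiny RAM buffer at interface
    speed, then the SLC cache, then direct-to-TLC writes. It ignores the
    asynchronous cache flushing that lets real drives partially recover."""
    gb = bytes_written / 10**9
    if gb < ram_gb:
        return iface_speed   # absorbing into DRAM
    if gb < ram_gb + slc_gb:
        return slc_speed     # absorbing into the SLC cache
    return native_speed      # cache exhausted, writing native TLC/QLC

for point_gb in (1, 10, 50):
    print(point_gb, "GB in ->", write_speed_gbps(point_gb * 10**9), "GB/s")
```

A real sustained-write plot would add the recovery spikes on top of this staircase as the controller drains the cache in the background.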

Seen here: https://www.tomshardware.com/reviews...sd,6180-2.html

Sustained Sequential Write Performance

Official write specifications are only part of the performance picture. Most SSD makers implement an SLC cache buffer, which is a fast area of SLC-programmed flash that absorbs incoming data. Sustained write speeds can suffer tremendously once the workload spills outside of the SLC cache and into the "native" TLC or QLC flash. We hammer the SSDs with sequential writes for 15 minutes to measure both the size of the SLC buffer and performance after the buffer is saturated.

Because it has a Phison E12 NVMe controller at its heart, we know the Silicon Power P34A80 features an SLC write cache. After testing, we can see that it is capable of absorbing up to 24GB of data at 3GBps before performance degrades to native direct to TLC write speeds. This matches the MyDigitalSSD BPX Pro, although it does so without the extra overprovisioning. After the cache fills, write performance will degrade to just over 1GBps until it has a break to recover. Here it ties for fourth place overall.
The SX8200 Pro, for example, has a larger SLC cache; it drops to a similar speed once the cache is full, but then drops again later to 600 MB/s.


It seems that the Adata XPG SX8200 Pro features a two-tiered write cache. During the first minute of the test, the drive wrote over 165GB of data at an average rate of 2.85GB/s. Then performance degraded to an average of 1.1GB/s over the next 7-8 minutes while the drive wrote an additional 500GB of data. After that, it degraded once more to an average of 615MB/s. So, for those of you who write lots of large files, the SX8200 Pro should be able to handle the workload without much issue.
Why it is two-tiered and then so slow, I don't know. It's better than the E12 for large writes, except for a full-drive write.
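As a sanity check, the quoted cache sizes and durations are consistent with each other; quick arithmetic using only the figures from the review text above:

```python
# Figures taken from the quoted reviews above.
e12_cache_gb, e12_speed = 24, 3.0        # Phison E12: 24GB cache at 3GB/s
sx_tier1_gb, sx_tier1_speed = 165, 2.85  # SX8200 Pro first tier

print(e12_cache_gb / e12_speed)          # ~8 s until the E12 cache fills
print(sx_tier1_gb / sx_tier1_speed)      # ~58 s, matching "the first minute"
```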

Yes you have this pretty covered.

Good stuff that I've been wanting to do, just don't have the $ to invest in one of those cameras/accessories. I've toyed with this every now and then, but I usually just plop a 120mm fan on the PSU aimed at my PCIe slots, and temps of most drives stay under 60C - usually 45-50C at most under load like that. M.2 SSDs only consume 5-7W at most and average less, but I've been thinking about doing a heatsink vs. non-heatsink head-to-head sometime, putting those very ideas into play too. The VPN100 is one of the tallest heatsinked models I've used so far and it doesn't interfere with my GPU even in the highest M.2 slot. I think I mentioned that in my review; I try to at least. Also, motherboard heatsinks vary a lot. On my X470 system, one makes my OS drive idle at 60C, but temps never pass that either lol. (Also, I don't think I've ever seen an M.2 drive with the controller on the bottom side.)
I have this: https://www.aliexpress.com/item/32848953120.html
Costs $5 or even a little less elsewhere, or more on Amazon, Newegg and rebrands.
I can't find its height listed there, but it's pretty low profile, probably 6mm tall; it rises above the PCIe slots a little and would just, just clear a GPU if I had one over it:

The VPN100 heatsink is taller, and so are most heatsinks one can buy that aren't the thinnest, cheapest kind with rubber/silicone straps.

Very good point. I've recently started this when I started at Tom's too. Although, sometimes I don't go into deep details and just state entry-level/high-end competitor or something along those lines.

When I'm reading it (and that's IF I'm reading it at all, as I often look for the data first and skim the text after), I would look for a list of drives. There is a mention of the MyDigitalSSD BPX Pro, but I can't buy that in my region, and no other E12 drives are mentioned: no Corsair MP510, which has been available worldwide for a long time, and no other easy-to-find options. So the comparisons and alternatives are listed there right now as specific products instead of as product "designs". For example, with GPUs one would not list a single competing product but a graphics chip/design/line (RX 580, RTX 2080, etc.) that's used in many products. So with SSDs I would look for a list of other E12 drives and a mention of, say, the SM2262EN, with a link to a list or review of those.

For example, TPU has https://www.techpowerup.com/review/?...=25&order=date which is nice for searching only reviews, but it doesn't offer an option to search by SSD controller. With GPUs this is user-"hackable" because all GPUs (usually) have the name of the chip in their own name, so one can type in RTX 2070 and get a list of reviews of all of them.

With E12 drives there is sometimes "P34" in their code/name, but even then it's hard to find them by typing just that. You can type in E12 or SM2262EN and get zero results, as the controller is not in the product names the way it is with GPUs.

On Tom's, searching by SSD controller seems to work, but it returns both Reviews and News with no way to limit it to reviews only. And to be honest, I only found that search a moment ago, after years of visiting Tom's site. There is a search icon under the top menu, with the search field hidden until the icon is clicked; you can't click into the search field while it's hidden either. When the search field is shown, it breaks the top menu. Try it, click the search icon here:


And then click on Product Reviews, mouse down to select which review section you want, and OH NO, the menu hides on its own before you can click anything.
The only way to get rid of this is to load a page with the search field hidden.

This x100000. So, PCMark 8 and SYSmark do a decent job covering a broad range of applications representing the use cases most consumers have, especially SYSmark, since it is application-based, not trace-based. I started using SPECworkstation 3, which goes further than PCMark 8 as a test for workstation users, aka real pro workloads. It even breaks down IOPS and throughput, but nothing really on VMs. Would be cool to do some VM load times and transfer performance within the VM. I'm just not sure how much that would be worth the effort; not many care about that, honestly.
I'm not a big fan of Intel-sponsored benchmarks, or ones compiled with the Intel compiler; even for SSDs it can have an impact. There is a whole issue around this and why some won't use tools that Intel has put their nose into, trying to boost the performance of their products in those benchmarks.

Level1Techs Wendell: https://twitter.com/tekwendell

He would probably know what to look for and how to test with VMs.

From my single-computer VM use, I would say one could test VM start-up time and performing operations/tests inside the VM, where that makes sense, with the VM located on the tested drive, obviously. Opening a VM from an HDD means go make a tea while you wait, and it's similarly sluggish to use; an SSD helps a lot, much like host OS performance changes between HDD and SSD. There are more complex tasks to do with VMs too: cloning, backups/snapshots, etc., for a more prosumer use case. Since VMware broke its Workstation on my hardware (it won't install or launch, literally, for years now), I use the free VirtualBox instead, which covers my occasional basic VM needs.
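A VM start-up timing test could be as simple as wrapping the launch command in a wall-clock timer; a minimal sketch below, where the VM name "TestVM" is made up and only the commented-out VBoxManage line is VirtualBox-specific:

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Hypothetical VirtualBox usage ("TestVM" is a placeholder VM name):
#   elapsed = time_command(["VBoxManage", "startvm", "TestVM", "--type", "headless"])

# Self-contained demo: time a trivial subprocess instead.
elapsed = time_command([sys.executable, "-c", "pass"])
print(f"trivial subprocess took {elapsed:.3f} s")
```

Running the same launch from an HDD-hosted and an SSD-hosted copy of the VM image, and comparing the two durations, would capture the "go make a tea" difference numerically.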

That's a good idea, I never thought to define and classify categories like that. I think I'll start in my recommendations.
Different use cases definitely can have different product recommendations. Say one category is office use on the low end (hey, SATA is fine for you, buy the cheapest you can find), gamer use for mainstream/mid-range (these M.2 drives are good value), and prosumer/creator use for "HEDT" higher-end platforms (you may want to look at PCIe 4.0 drives, but the fast PCIe 3.0 ones aren't bad either). The differences are both in use and in budget. Of course, there is always the "invisible" category of business/server use, and who knows where they go for reviews of that kind of hardware; as far as I can tell, they likely evaluate hardware themselves for their use cases, or buy whatever they can get with the specs they want, as the options often aren't as vast.

That's why I also wrote my most demanding use case into the OP, so people can recommend drives for that: mid-range, good value. And why I went with an E12 drive that's cheaper than an SM2262EN one, as for my use it makes no sense to spend extra on the SM2262EN.

Right now the E12 and E16 drives probably cover most of the market, the E12 sitting at SATA prices while offering good value and speed, with the next step worth considering being PCIe 4.0 drives.

I've read others asking about linux compatibility check before too, but every drive should be fine in Linux until a bug is found. I don't have any protocol analyzers, so that's hard to do if you need a concrete answer. Otherwise, they are just standardized SATA or NVMe devices at the end of the day. Every SSD I test, however, has been secure erased within Parted Magic at one point or another. So, I'd say they are compatible after accessing them within that live OS. So, I could start mentioning that if you think it really would help.
Drives are probably fine, but then, for example, what about Optane PCIe drives and other fancier or unusual drive solutions? Or a RAID controller card, etc.
Often it's the motherboards: their UEFI, IOMMU, CPU support in Windows, Linux, ... As we can see with every new platform and architecture, there are always some "first adopter" issues to watch out for. With CPUs, even programs get broken at times and need fixing.
SSDs, being much simpler, probably don't suffer from as many compatibility issues.

I will try updating my drive and see if the temperature sensor readout changes at all; my guess is it will stay the same.

Quote: Originally Posted by AlphaC View Post
I don't know what's up with Crucial Storage Executive; I have used it on an older laptop (~200MB in size) and it's probably bloated from Java. I believe that unlike the SanDisk tools it detects the status of other brands' SSDs, and they have the Momentum Cache thing as you mentioned.

Yes, Java apps are bloat almost by definition. Having these tools weigh in at 1MB+ is crazy; all they do is display a few tabs of GUI and a piece of text, and you could literally fit that into a <500k executable, no problem. The problem is they probably use some heavyweight graphical frameworks to build the GUI, and then the apps explode in size instantly. You can see this even with the AS/ATTO/CDM tools: some are <1MB with a simple GUI, while others opted for a fancier skinnable GUI and suddenly balloon to 5MB. It takes a serious amount of code to reach such file sizes, and the main culprit is often a fancy GUI framework, be it in C/C++, C#, or Java. Though who the hell would write something like this in Java, having to jump through hoops of DLL calling to get low-level access, plus the extra effort of packing it into an executable, since normal people won't know how to run a Java program or a JAR.
Attached Thumbnails: DSC02234_1500.JPG
