Putting it all together

Beyond benchmarking, we also install and explore the software suite each manufacturer provides for its SSDs.  These toolboxes are great resources for managing your SSD, such as updating firmware, performing a Secure Erase or wipe, enabling optimization features, overprovisioning, and checking drive health.  We examine these applications, take screenshots, and show you what they offer.

When we put all this data together in a review, we separate the drives compared on the graphs by their speed generation.  For example, if we are reviewing a PCIe Gen4 SSD, the graphs will only include other PCIe Gen4 SSDs.  If we are reviewing a PCIe Gen3 SSD, the graphs will only include other PCIe Gen3 SSDs, and if we are reviewing a SATA SSD, the graphs will only include other SATA SSDs.

There are several reasons for this separation.  The first is one of scale: we don’t want the results to be wildly skewed on the graphs.  The second is that as we keep adding more SSDs to reviews, the graphs will continue to grow.  If we simply put every SSD on every graph, it would eventually be hard to see what’s actually going on.

Finally, when you are comparing the SSD under review to another drive, you want to compare it with an SSD of the same performance class so you can actually see which one is better.  If we compared a PCIe Gen4 SSD to a SATA SSD, the PCIe Gen4 SSD is obviously going to be faster, and the graph scale would be very awkward.
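As a rough illustration of this grouping (a minimal sketch only; the data, field names, and numbers below are hypothetical and not our actual charting tooling), benchmark results could be bucketed by interface generation before each comparison graph is built:

```python
# Minimal sketch: bucket benchmark results by interface generation so each
# comparison graph only contains drives of the same class. The data and
# field names here are hypothetical examples, not the site's actual tooling.
from collections import defaultdict

results = [
    {"drive": "Drive A", "interface": "PCIe Gen4", "seq_read_mbps": 7000},
    {"drive": "Drive B", "interface": "PCIe Gen3", "seq_read_mbps": 3500},
    {"drive": "Drive C", "interface": "SATA", "seq_read_mbps": 550},
    {"drive": "Drive D", "interface": "PCIe Gen4", "seq_read_mbps": 6800},
]

# Group the results by interface generation.
by_generation = defaultdict(list)
for entry in results:
    by_generation[entry["interface"]].append(entry)

# Each bucket feeds its own graph, so a Gen4 drive is only ever charted
# against other Gen4 drives.
for generation, drives in by_generation.items():
    print(generation)
    for drive in sorted(drives, key=lambda d: d["seq_read_mbps"], reverse=True):
        print(f"  {drive['drive']}: {drive['seq_read_mbps']} MB/s")
```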

Through all of these tests and benchmarks, we should be able to derive a solid analysis of each SSD and report on several different types of workloads.  We will be able to tell which drives are better at what, and relate our experiences to you so you can make good buying decisions.

Discussion


  1. I appreciate the effort; it looks well thought out and thorough. I especially applaud the testing standardization and the building up of a database. That is the best thing for all reviews imo, as it gives you the ability to objectively go back and make comparisons.

    I have to admit – personally, I don’t really look at storage benchmarks. I care about SSD vs HDD, but if one SSD is a bit faster than the next – not a consideration for me or my typical uses.

    I recognize I’m not everyone – and some people here do have cases where the difference in performance can make a big impact. I’m just not one of them.

    For me, the three biggest factors in my storage purchases:

    1) Is it reliable? Your temperature testing does get at that, but only tangentially. Here, I rely largely on Backblaze reporting, SSD overprovisioning, and brand name reputation (which is not a great indicator of anything, really). It’s hard to get this kind of data without some long-term use cases, and particularly with SSDs, a lot of the time they just haven’t been around long enough for that kind of data to exist. If there are other resources that help get at this, I’d be very interested. I don’t know that this is something FPS could invest in; it takes a good deal of time and resources. For SSDs – my home use is typical consumer use, so write endurance isn’t a huge factor, and even at work it’s a light-duty database, and even that doesn’t see a huge amount of writes.

    2) Price per byte. I will look at interface when I look at price – I’d value NVMe over SATA on SSDs, for instance, but only to a point. But for one NVMe drive over another, I probably wouldn’t look at speed benches.

    3) Warranty coverage. I have shucked drives before, when the price per byte was just so low that it was hard to pass up. But more often than not I try to get drives with 5 year warranties. I’ve found most drives will outlive that, but on drive failures that I’ve seen, it tends to lump into three distinct bands – within the first 90 days, or around year 3-4, or well after year 7. I never touch rebuilt or used drives.

  2. Choosing to intentionally test drives via the second, chipset-connected slot is a very strange decision, given the compatibility issues that have popped up with drives like the (now resolved) WD SN850 and those which use certain SMI controllers (SM2262/EN, etc.). There is also additional command latency from having to traverse the chipset link, which can impact IOPS, to say nothing of any incidental secondary bandwidth from accessory devices that are switched through the chipset.

    Being able to more easily point a fan at the drive seems like a very weird justification given the potential for more important issues which could, quite literally, invalidate each and every drive test result on this site going forward.

    Many users have also experienced poor SATA performance on X570 boards, with lower-than-expected random 4K (Q32T16, etc.) results as compared to B450/B550 or Intel platforms, so that could throw off reviews of any future hypothetical refreshed SATA SSDs. In fact, the same problem is clearly visible in your review of the TeamGroup T-Force Vulcan SSD: Q32T16 performance sits at about 230 MB/s, below what is expected.

    https://www.thefpsreview.com/2020/07/08/teamgroup-t-force-vulcan-500gb-2-5-sata-ssd-review/6/

    To be clear, I’ve been reading your content since you started at [H] and this isn’t some gotcha dig at your credibility, it simply strikes me as a very weird choice and a bit of an unforced error. If anything, I’d expect testing on the main slot by default, with an additional quick sanity check on the chipset slot as well, to check for any glaring compatibility issues.

    Storage reviews haven’t really been a focus here, but if you intend to jump in with both feet it might be worth considering these things. :)

  3. “To be clear, I’ve been reading your content since you started at [H] and this isn’t some gotcha dig at your credibility, it simply strikes me as a very weird choice and a bit of an unforced error. If anything, I’d expect testing on the main slot by default, with an additional quick sanity check on the chipset slot as well, to check for any glaring compatibility issues.”

    It was considered, and the above check has actually been done. The motherboard we are using has the exact same performance between the M.2 slots. Plus, all drives are benchmarked on equal footing and are therefore comparable, as they are all tested the same way, on the same M.2 slot, and with the same cooling. The testing is standardized and can be compared directly.

    IF, read IF, any latency issues exist due to this particular M.2 slot, they would be replicated in every test, for every SSD, and the results would thus still be comparable, as the same configuration is used every time. However, we did checks to verify that performance is the same between the two slots prior to making this decision; I would not have chosen to do so otherwise. The M.2_2 slot provides the drive’s full performance potential: the full complement of PCI-Express 4.0 lanes is available to it, so the slot can maximize PCI-Express 4.0 performance.

    Our system runs lean, and we do not have excessive data running through the chipset that would cause latency or bandwidth degradation. We also run the tests multiple times and verify the results.

    The M.2_1 slot on this motherboard does not have a heatsink; only the M.2_2 slot does. That is another reason for using it: we can apply the motherboard’s stock heatsink as it is intended to be used.

    In addition, the M.2_2 slot is a configuration that would actually be used in a real-world computer build, so testing on it reflects real-world usage.

    The use of an X570-based motherboard for the test bench should be obvious: it provides full PCI-Express 4.0 support.

    All system specs and configuration details are clearly stated.
