Image: Intel

Intel Chief Performance Strategist Ryan Shrout has shared a video showcasing insanely fast PCIe 5.0 SSD transfer speeds on an Alder Lake processor. The demonstration was intended for CES 2022, but he released a teaser after Intel switched to a virtual presence.

A pair of Samsung PM1743 PCIe 5.0 enterprise SSDs was used in a system with a Core i9-12900K, an ASUS motherboard, and 32 GB of memory. With custom adapter cards, a single drive hit a read speed of 13.8 GB/s, double the speed of the OS drive on PCIe 4.0. The write speed is said to be 6.6 GB/s.
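For context, 13.8 GB/s sits close to the theoretical ceiling of a PCIe 5.0 x4 link. A quick back-of-envelope check, using the standard PCIe 5.0 signaling figures (32 GT/s per lane with 128b/130b encoding; these numbers come from the PCIe spec, not from the demo):

```python
# Theoretical bandwidth of a PCIe 5.0 x4 link vs. the reported drive speed.
GT_PER_LANE = 32       # gigatransfers/s per PCIe 5.0 lane
ENCODING = 128 / 130   # 128b/130b line-encoding efficiency
LANES = 4              # typical NVMe SSD link width

lane_gbytes = GT_PER_LANE * ENCODING / 8   # ~3.94 GB/s per lane
link_gbytes = lane_gbytes * LANES          # ~15.75 GB/s for a x4 link

print(f"PCIe 5.0 x4 ceiling: {link_gbytes:.2f} GB/s")
print(f"Reported 13.8 GB/s is {13.8 / link_gbytes:.0%} of that ceiling")
```

By the same math, two x4 drives would top out around 31.5 GB/s of raw link bandwidth, which squares with the 28 GB/s combined figure from the demo.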

He also experimented with running the pair of drives simultaneously. This required disabling the EVGA GeForce RTX 3080 GPU to free up the PCIe 5.0 lanes needed for the test. Using IOmeter, read speeds peaked at just over 28 GB/s.
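IOmeter is a config-driven benchmark, but the gist of a sequential-read throughput test can be sketched in a few lines of Python. This is illustrative only: it reads through the OS page cache rather than using direct I/O and queue-depth control, so it will not reproduce numbers like the ones above.

```python
import time

def measure_read_throughput(path, chunk_mib=1):
    """Read a file sequentially in fixed-size chunks and return GB/s.

    Toy version of a sequential-read benchmark. Real tools (IOmeter, fio)
    bypass the page cache with direct I/O and drive multiple queues.
    """
    chunk = chunk_mib * 1024 * 1024
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            total += len(buf)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e9  # bytes/s -> GB/s
```

Pointing it at a large file (e.g. `measure_read_throughput("bigfile.bin")`) gives a rough sequential figure; for a cached file it will report memory speed, not drive speed.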

Source: Ryan Shrout (via VideoCardz)



Join the Conversation

15 Comments

  1. Yup!

    Master race at its best but let’s gain more speed and screw the thermals, reduce TBW, etc.

  2. My only hope here is that this will help push prices of larger drives down. If you want more than 2 TB of NVMe, you have to be prepared to sacrifice some performance, pay more per TB than you would at 2 TB and below, or both.

    I’m also not really sold on the need for even faster SSD interfaces. While there were definite gains moving to NVMe, extra bandwidth on top of that means less and less across a broader set of workloads. On the consumer side, it’s difficult to see a use case beyond, say, the streaming-level tech pioneered for the latest generation of consoles, and even that’s questionable, considering that the additional cost of faster NVMe is probably better spent on just having more of it, or on more system memory so that streaming isn’t so necessary.

    On the commercial side, one has to wonder how prevalent the need is. It’s not like network interfaces are really keeping up either: even relatively pedestrian NVMe arrays can saturate the best networking fabric available, and there’s still plenty of protocol overhead to consider.

    I can really only see a use case for edge workloads, but this is no different than the NVMe 3.0 vs. NVMe 4.0 conversation. Very large workloads that are parallelizable and backed by significant local compute resources are about the only use case I can imagine that could actually make a case for such storage transfer speeds.

  3. Yep, let’s hope those prices do come down. I was happy to score an Inland 2 TB gen 3 NVMe for my laptop over the summer for around $200, and by Black Friday that had pretty much become the norm for a number of manufacturers, but that’s still not a great price.

    In terms of real-world use, I also agree. Most of us are not going to see this improvement, at least not right away. At the enterprise level it could help in scenarios involving massive data transfers, as [USER=1367]@LazyGamer[/USER] said, but that would also depend on the interface used. There was a mention of that in one of the Twitter threads on this. I’m not so knowledgeable about such things, but that person suggested a particular one should become the new standard at the consumer level.

    [MEDIA=twitter]1476675873847668737[/MEDIA]

  4. [QUOTE=”LeRoy_Blanchard, post: 45967, member: 137″]
    So Windows 10 should be able to do about 15. amirite?
    [/QUOTE]
    [MEDIA=twitter]1476679072487030794[/MEDIA]

  5. IMO we have reached sufficient bandwidth and capacity for general consumer use. There needs to be a renewed focus on latency, which has actually been regressing. Ideally I would have a 768 GB SLC OS drive focused on the lowest possible latency, and a 1-2 TB MLC drive for games. For almost anything else I need storage for, my NAS is sufficient bandwidth- and latency-wise.

  6. [QUOTE=”Endgame, post: 45973, member: 1041″]
    There needs to be a renewed focus on latency, which has actually been regressing.
    [/QUOTE]
    I’m not sure if Intel’s sale of its flash business to SK Hynix affects their Optane line, but that’s where I’d want to go eventually. Obviously the long game is getting mass storage like Optane up to DRAM levels of speed and latency.

    [QUOTE=”Endgame, post: 45973, member: 1041″]
    Almost Anything else I need storage for, my NAS is sufficient bandwidth and latency wise.
    [/QUOTE]
    I wish that larger SSDs were more affordable the way larger HDDs usually are as capacity increases. I also have a NAS that can regularly do 700 MB/s+, but the latency still bites for a lot of things; that’s just inherent to TCP/IP and twisted pair. I’m also not interested in braving the technologies that can bring that latency down just yet. The most I’d do right now would be fiber runs between key nodes.

    That’s really the other side of the equation, I think: you can make local storage faster for the masses, but as soon as that data is needed somewhere non-local?

    [QUOTE=”Peter_Brosdahl, post: 45971, member: 87″]
    depend on the interface used
    [/QUOTE]
    Did a quick search for [URL=’https://www.snia.org/forums/cmsi/knowledge/formfactors’]EDSFF[/URL] out of curiosity. It looks like Intel is already on board, as the spec includes their ‘ruler’ format. Also saw mention that they intend the ~2.5″ drive form factor version to be able to handle AICs like GPUs and NICs, with a 70 W power envelope.

    Obviously that’s at best RTX 3050 level today, but given that such a card can handle quite a bit, especially for more varied workloads, I think they’re striking a good balance between size, power, connectivity (eight PCIe lanes), and modularity.

  7. [QUOTE=”LazyGamer, post: 45976, member: 1367″]
    I’m not sure if Intel’s flash sale to SK Hynix affects their Optane line, but that’s where I’d want to go eventually. Obviously the long game is in getting more mass storage like Optane up to DRAM levels of speed and latency.
    [/QUOTE]
    Intel has stopped development on the consumer line of Optane as far as I know. Per AnandTech’s coverage of the second-gen Optane:

    [QUOTE]
    The client/consumer focused portion of Intel’s Optane product family has shrunk considerably. They’re no longer doing Optane M.2 SSDs for use as primary storage or cache drives, and there’s been no mention yet of an enthusiast-oriented derivative of the P5800X to replace the Optane SSD 900P and 905P
    [/QUOTE]

    [URL unfurl=”true”]https://www.anandtech.com/show/16318/intel-announces-new-wave-of-optane-and-3d-nand-ssds[/URL]

    I would absolutely love a 480 GB, M.2, PCIe 4.0 Optane drive on second- or third-gen memory. Maybe staff here could get some additional insight from Intel?

    edit: it looks like it’s really, really dead:

    [URL unfurl=”true”]https://www.tomshardware.com/amp/news/intel-kills-off-all-optane-only-ssds-for-consumers-no-replacements-planned[/URL]

  8. Hmm…

    Maybe I’ve been thinking about it all wrong.

    Instead of wishing SSD capacity would go up, maybe I should be hoping RAM capacity goes up. Apart from keeping our bingo cards filled out, is there a great reason DRAM density hasn’t kept pace with SSD density?

    Instead of wishing for faster cold storage, maybe we should want cheaper dynamic storage. Just use any old media for cold offline storage, and be able to leverage better caching mechanisms during startup and while running. Not quite a RAM drive, but more like prefetch. And if you had enough of it, you could just cache everything …

    DRAM is going to beat even the best SSD out there, I suspect by a wide margin. So why are we all trying to make the offline storage faster, instead of just making offline storage as cheap as possible, not worrying about the speed so much, and leveraging the already pretty-damn-fast storage we have better?

    That, or get back to what Optane was trying to do – merge offline and dynamic storage into one device. That would be the ideal.
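    The prefetch-style caching idea floated above can be illustrated with a toy LRU block cache; `BlockCache` and its parameters are hypothetical, just to show the mechanism of keeping hot disk blocks in (comparatively cheap) DRAM:

```python
from collections import OrderedDict

class BlockCache:
    """Toy LRU cache over fixed-size file blocks.

    Illustrative only: real prefetch/caching layers (e.g. the OS page
    cache, SuperFetch) do this transparently and far more cleverly.
    """

    def __init__(self, path, block_size=4096, max_blocks=1024):
        self.path = path
        self.block_size = block_size
        self.max_blocks = max_blocks
        self.cache = OrderedDict()  # block index -> bytes, LRU order
        self.hits = self.misses = 0

    def read_block(self, index):
        if index in self.cache:
            self.cache.move_to_end(index)  # mark most recently used
            self.hits += 1
            return self.cache[index]
        self.misses += 1
        with open(self.path, "rb") as f:   # miss: fetch from disk
            f.seek(index * self.block_size)
            data = f.read(self.block_size)
        self.cache[index] = data
        if len(self.cache) > self.max_blocks:
            self.cache.popitem(last=False)  # evict least recently used
        return data
```

    With enough DRAM behind it, repeated reads of hot blocks never touch the drive at all, which is the "cache everything" scenario the comment above describes.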

  9. [QUOTE=”Brian_B, post: 45978, member: 96″]
    Instead of wishing SSD capacity would go up, maybe I should be hoping RAM capacity goes up. Apart from keeping our bingo cards filled out, is there a great reason DRAM density hasn’t kept pace with SSD density?
    [/QUOTE]
    DRAM is more difficult in every way: more power hungry, more difficult to stack, and, worst of all, it doesn’t support the multiple ‘cell’ levels that NAND can, so it’s ‘SLC’ or nothing. DRAM development is also far more focused on bandwidth than capacity, it seems to me at least.

  10. If DRAM is your thing, you can toss 2 TB on a Threadripper Pro. Of course, almost no programs are going to take advantage of that much RAM unless you muck around with a RAM drive, and that comes with its own set of substantial problems.

  11. [QUOTE=”Endgame, post: 45980, member: 1041″]
    If DRAM is your thing, you can toss 2 TB on a Threadripper Pro. Of course, almost no programs are going to take advantage of that much RAM unless you muck around with a RAM drive, and that comes with its own set of substantial problems.
    [/QUOTE]

    The idea isn’t to use it as a RAM Drive, and of course programs aren’t meant to take advantage of all of it. Think prefetch/superfetch – they would probably need some tweaking to take better advantage of that capacity, but something along those lines.

    It may be entirely impractical due to the economics of things; as Lazy says, DRAM isn’t really engineered for density. It just seems silly that the industry is circling around things like PCIe 5.0 and DDR5 when those things were never the bottleneck and have no practical real-world benefit in almost any use case.

    I guess whatever it takes to print a larger number on the side of the box than the competition.

  12. I see a use case here: cache and performance drives for large databases. I mean data-crunchy databases, nothing with BLOB data. Put that on high-speed direct-connect storage over something like 100 Gb fiber and you will get some great I/O performance. Of course, you’re probably dropping a million plus on a decently sized NAS and support for that.

  13. [QUOTE=”Grimlakin, post: 45982, member: 215″]
    I see a use case here: cache and performance drives for large databases. I mean data-crunchy databases, nothing with BLOB data. Put that on high-speed direct-connect storage over something like 100 Gb fiber and you will get some great I/O performance. Of course, you’re probably dropping a million plus on a decently sized NAS and support for that.
    [/QUOTE]
    Basically one of these:

    [URL unfurl=”true”]https://www.oracle.com/engineered-systems/exadata/[/URL]

  14. [QUOTE=”Endgame, post: 45984, member: 1041″]
    Basically one of these:

    [URL unfurl=”true”]https://www.oracle.com/engineered-systems/exadata/[/URL]
    [/QUOTE]
    Yep, pretty much that, or any ultra-high-speed direct-attached storage.
