Image: AMD

AMD has released a statement confirming that its Smart Access Memory technology, which grants Ryzen 5000 Series processors full access to Radeon RX 6000 Series GPU memory, isn’t proprietary and can technically work with other hardware.

A recent report revealed that NVIDIA is developing a similar feature that leverages certain specifications already built into PCI Express (i.e., Resizable BAR, which is mentioned in the latter portion of AMD’s statement), so the revelation isn’t all that surprising.

“As the only company offering high performance gaming CPUs and GPUs, AMD is in a unique position to deliver incredible PC gaming experiences,” reads the statement (via PC Gamer). “With AMD Smart Access Memory, we have designed, optimized and validated both hardware and software technologies with all combinations of Ryzen 5000 Series processors, Radeon RX 6000 Series graphics cards, AMD 500 Series motherboards and the latest drivers and BIOS at launch. We believe this pairing unlocks maximum platform performance.”

“Smart Access Memory is built on features of the PCIe standard and firmware standards (Resizable BAR), and was developed through extensive validation and platform optimization. We welcome the opportunity to support other hardware vendors in their efforts as part of our ongoing commitment to using common and open standards to improve gaming experiences.”

First-party benchmarks from AMD suggest that Smart Access Memory can improve 4K game performance by 5 to 6 percent in many titles. The feature currently requires an AMD 500 Series motherboard, Ryzen 5000 Series processor, and Radeon RX 6000 Series graphics card.
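The mechanism underneath, Resizable BAR, simply enlarges the base address register (BAR) aperture through which the CPU addresses the card's memory: conventionally that window is 256 MB, so drivers must shuffle data through it in pieces, while Smart Access Memory maps the entire framebuffer at once. A minimal sketch of what those aperture sizes work out to (the addresses below are made up for illustration, not taken from real hardware):

```python
def bar_size_mib(start: int, end: int) -> int:
    """Size of a PCI BAR aperture in MiB, given its start and (inclusive)
    end address as the window would appear in /proc/iomem or `lspci -v`."""
    return (end - start + 1) // (1024 * 1024)

# Classic 256 MiB aperture: the CPU sees VRAM only through this small window.
legacy = bar_size_mib(0xE000_0000, 0xEFFF_FFFF)

# Resizable BAR aperture covering a full 16 GiB framebuffer, as on a
# Radeon RX 6000 Series card. Hypothetical 64-bit address range.
resized = bar_size_mib(0x7C_0000_0000, 0x7F_FFFF_FFFF)

print(legacy, resized)  # 256 16384
```

In other words, the feature doesn't add bandwidth; it removes the indirection of staging transfers through a small fixed window, which is where the single-digit-percent gains come from.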


Join the Conversation


  1. So AMD, if that is the case, WHY the heck are you not giving this to us for the 3000 Series Ryzen CPUs in 500 Series motherboards? Obviously the paths are there… what is the catch?

  2. [QUOTE=”Grimlakin, post: 23648, member: 215″]
    So AMD, if that is the case, WHY the heck are you not giving this to us for the 3000 Series Ryzen CPUs in 500 Series motherboards? Obviously the paths are there… what is the catch?
    [/QUOTE]
    $$$$$$$

  3. [QUOTE=”Grimlakin, post: 23648, member: 215″]
    So AMD, if that is the case, WHY the heck are you not giving this to us for the 3000 Series Ryzen CPUs in 500 Series motherboards? Obviously the paths are there… what is the catch?
    [/QUOTE]
    Testing.
    You know, avoiding the “AMD this and that” wah wah wah.
    I’d imagine “testing” is actually fairly limited, as even limiting CPUs and chipsets, the permutations are huge. Imagine if they opened it up to more and more CPUs and chipsets and so on.

  4. I’m not liking this “new” AMD strategy.

    Higher prices, mid-range CPU/GPU performance-to-price that is very questionable compared to its flagship and entry-level performance, its so-called DLSS alternative’s ETA, and its raw RT performance.

    But tomorrow is a new day.

  5. [QUOTE=”GunShot, post: 23653, member: 1790″]
    I’m not liking this “new” AMD strategy.

    Higher prices, mid-range CPU/GPU performance-to-price that is very questionable compared to its flagship and entry-level performance, its so-called DLSS alternative’s ETA, and its raw RT performance.

    But tomorrow is a new day.
    [/QUOTE]
    I’m not sure what you mean by mid-level $/perf. Their 6800 or 6800 XT? Their $/perf seems to match up similarly. My biggest complaint is that there’s nothing in the actual range the majority of the population buys in. If I could grab a 3050/6600 XT for like $250 that has ~2070/5700 XT performance, I’d be happy.

  6. [QUOTE=”GunShot, post: 23653, member: 1790″]
    I’m not liking this “new” AMD strategy.
    [/QUOTE]
    I agree.

    When you’re no longer the underdog, you tend to start acting like the big dog.

    That said, it looks like with the early M1 reviews Apple may be nipping at their heels already.

  7. Apple’s CPUs work great in Apple’s proprietary garden; outside of that, it’s never been the case. For evidence, see Motorola chips and PowerPC chips.

  8. [QUOTE=”Grimlakin, post: 23670, member: 215″]
    Apple’s CPUs work great in Apple’s proprietary garden; outside of that, it’s never been the case. For evidence, see Motorola chips and PowerPC chips.
    [/QUOTE]
    What I mean is… ARM in general may be coming for x64. You are right, M1 and Apple’s CPUs won’t ever see use outside of Apple – but that doesn’t mean that nVidia or Qualcomm or Samsung won’t see what Apple is doing and push harder to get into that same marketplace. Microsoft never quite got the secret sauce working with their ARM initiative, but maybe they don’t really need to… if they can make x64 work on ARM, and some higher-performance ARM parts start to come through, I think x64 dominance in the consumer space is really threatened. If Parallels works on OS X – maybe you don’t even need to wait for Microsoft to do it; a third party could wrap an emulation layer around x64 apps on Windows for ARM.

    Going a step further, you may start seeing a lot more datacenter use as well (Amazon is already doing it with their own custom SOCs). Apple has their 20W package performing about on par with 70W x64 packages… that large reduction in power starts to look awfully good once you are packing a few hundred in a container with dedicated HVAC and a multi-megawatt power supply.

    No one had ever really made high-performance ARM work before – Apple seems to be showing that it can. That means others are poised to do the same. And nVidia is right there… Zen 3 is great and all, but it may be a case of being late to the dance; Intel may have been doomed in the CPU space with or without AMD bringing Ryzen.

    1. None of those other companies can realistically compete in the same space anytime soon. Apple has been attempting to compete in this space against IBM/Microsoft/Intel (AMD) for 30 years. Their mobile iOS platform is far more successful than their personal computer products. That time in the space has granted them solid third-party software in the creative market and shifted many to their first-party alternatives. Their M1 will mostly merge those audiences. Google is the company most likely next in line to move into that space, as they have been building an ecosystem around their G Suite for many years already. Samsung and Qualcomm have zero software presence and basically rely on either Google or Microsoft. The future of any other ARM devices pretty much relies on how those two proceed. Perhaps they can join the Linux market. Nvidia isn’t much different in that respect; they’re more likely to become a console manufacturer, built around GeForce Now game streaming.

  9. [QUOTE=”Brian_B, post: 23675, member: 96″]
    What I mean is… ARM in general may be coming for x64. You are right, M1 and Apple’s CPUs won’t ever see use outside of Apple – but that doesn’t mean that nVidia or Qualcomm or Samsung won’t see what Apple is doing and push harder to get into that same marketplace. Microsoft never quite got the secret sauce working with their ARM initiative, but maybe they don’t really need to… if they can make x64 work on ARM, and some higher-performance ARM parts start to come through, I think x64 dominance in the consumer space is really threatened. If Parallels works on OS X – maybe you don’t even need to wait for Microsoft to do it; a third party could wrap an emulation layer around x64 apps on Windows for ARM.

    Going a step further, you may start seeing a lot more datacenter use as well (Amazon is already doing it with their own custom SOCs). Apple has their 20W package performing about on par with 70W x64 packages… that large reduction in power starts to look awfully good once you are packing a few hundred in a container with dedicated HVAC and a multi-megawatt power supply.

    No one had ever really made high-performance ARM work before – Apple seems to be showing that it can. That means others are poised to do the same. And nVidia is right there… Zen 3 is great and all, but it may be a case of being late to the dance; Intel may have been doomed in the CPU space with or without AMD bringing Ryzen.
    [/QUOTE]

    Ok I’ll give you that… but it’s going to take a while to get out of the cloud and into integrations of Datacenters. Hell it’s taking a long time to get some companies to even try an EPYC CPU in the datacenter and you know Intel will be fighting tooth and nail to keep ARM out.

    Right now I still feel like ARM is more of an ultra-portable or ASIC type of processor than a general-purpose processor capable of performance. But that can change; I am open-minded.

  10. I guess the question I have is why do we want this feature?

    Yes, GPU RAM is fast and all, but even on a PCIe 4 enabled system it sits behind the relatively slow and high latency PCIe bus, instead of being directly connected to system RAM.

    So, you get 16GB more usable RAM from your GPU? And that 16GB is PCIe bus limited and thus performs worse than system RAM?

    Why not just spend the $80 and buy 16GB more system RAM instead?

    I mean, sure, maybe it helps on systems that don’t have enough system RAM. But for crying out loud, if you are buying a $700 or $1000 GPU, you have enough money to buy more system RAM.

    I just don’t understand why this feature exists, and what problem it solves.

    (And yes, as has been stated before, limiting it to 5000 Series CPUs is a bit lame. At the very least, make it available on anything PCIe 4 or better, as certainly PCIe bandwidth is a limiting factor.)

  11. No, it’s more like… (I THINK) the system can update the memory on the card directly instead of jumping through hoops, meaning the GPU itself… Same as how direct storage access is the next best thing for loading textures. This is just another streamlined channel to update video card memory.

  12. [QUOTE=”Grimlakin, post: 23677, member: 215″]
    Ok I’ll give you that… but it’s going to take a while to get out of the cloud and into integrations of Datacenters. Hell it’s taking a long time to get some companies to even try an EPYC CPU in the datacenter and you know Intel will be fighting tooth and nail to keep ARM out.
    [/QUOTE]
    To buy Epyc… you actually have to be able to buy Epyc. Cloud is really where Epyc should be making gains, but corporate datacenters? It’s a change of platform, and that carries some predictable and some unpredictable risks. Including supply!

    AMD is still trying to figure out how to make [I]enough[/I] CPUs. Granted it’s a nice place for them to be relative to their recent past, but the basic reality is that Intel still actually physically produces an order of magnitude more, and TSMC cannot match that for AMD, even if AMD CPUs were all they produced.

    [QUOTE=”Grimlakin, post: 23677, member: 215″]
    Right now I still feel like ARM is more of an ultra-portable or ASIC type of processor than a general-purpose processor capable of performance. But that can change; I am open-minded.
    [/QUOTE]
    x86, as currently and broadly implemented, is a beast at chewing through out-of-order instructions. ARM is a supremely competent base, but it’s not really ever been pushed in that direction, mostly and quite simply because Intel (and occasionally AMD) have that use-case covered.

    Now, Apple’s goal is to stop paying Intel, at least on a per-unit basis. Their ARM project is a decade in the making; it’s not just the CPUs, but also all of the dedicated co-processors to handle all of the stuff that ARM isn’t good at, and then all of the software from micro-code to drivers to OS stacks to APIs and user-facing software.

    And they’re still not trying to beat x86 at what x86 is good at. The thing is, most consumers- most [I]end users[/I]- don’t need what x86 is good at either. x86 is the lazy way out; the CPU is fast enough that you can just throw stuff at it and it will work.

    But that’s not efficient, and efficiency matters. What’s efficient, now that we have circuits small enough, is to build some general-purpose control logic like ARM, and then to use the rest of your silicon and power budget for application-specific coprocessors. That takes real integration and real know-how, but as we’ve seen time and again, when it’s done right it pays off.

    Apple’s one example, but Intel’s SSE2, which handles the vast majority of FP workloads, Nvidia’s NVENC, which is the standard for hardware video encoding, and Intel’s Quick Sync, which is literally everywhere, are all examples of this process put into play.

    [QUOTE=”Zarathustra, post: 23690, member: 203″]
    (And yes, as has been stated before, limiting it to 5000 Series CPUs is a bit lame. At the very least, make it available on anything PCIe 4 or better, as certainly PCIe bandwidth is a limiting factor.)
    [/QUOTE]

    That should be the only factor, really. So long as the GPU has sixteen direct lanes to the CPU running at PCIe 4.0, rock on.

  13. [QUOTE=”Grimlakin, post: 23699, member: 215″]
    No, it’s more like… (I THINK) the system can update the memory on the card directly instead of jumping through hoops, meaning the GPU itself… Same as how direct storage access is the next best thing for loading textures. This is just another streamlined channel to update video card memory.
    [/QUOTE]
    I don’t know what it does because I can’t find any white paper or other documentation explaining it.

  14. Locking this to the 5000-series Ryzens – however – is very un-AMD.

    I can see locking it to CPU→GPU links where x16 of PCIe 4 is available, or some other bandwidth measure, but the arbitrary model-number restriction is bothersome.

    I guess AMD is changing as they have more success. No longer the friendly champion of all things open and reasonable.

  15. Yea, this feature not being on ANY AMD CPU that supports PCIe 4 at x16 is a real boner move, as they confirm it will work on other CPUs.

  16. [QUOTE=”Zarathustra, post: 23880, member: 203″]
    Locking this to the 5000-series Ryzens – however – is very un-AMD.

    I can see locking it to CPU→GPU links where x16 of PCIe 4 is available, or some other bandwidth measure, but the arbitrary model-number restriction is bothersome.

    I guess AMD is changing as they have more success. No longer the friendly champion of all things open and reasonable.
    [/QUOTE]
    AMD was never any different than Nvidia or Intel, just not as successful and that dictated their behavior.

    Lisa Su is a highly motivated ambitious individual. She likes to win and dominate just as much as Jensen Huang.
    She doesn’t like to share and has no plans to co-exist.

    No tucking little gamers in to bed at night, no home baked cookies, no sweaters knitted full of love.

  17. This is controlled in the UEFI. What’s to stop the mobo maker from allowing it to be enabled with Ryzen 2 chips that have PCIe 4? AMD is only validating Ryzen 3, but we haven’t seen that they will actively block other chips, have we?

  18. [QUOTE=”Auer, post: 24019, member: 225″]
    AMD was never any different than Nvidia or Intel, just not as successful and that dictated their behavior.

    Lisa Su is a highly motivated ambitious individual. She likes to win and dominate just as much as Jensen Huang.
    She doesn’t like to share and has no plans to co-exist.

    No tucking little gamers in to bed at night, no home baked cookies, no sweaters knitted full of love.
    [/QUOTE]

    Nvidia’s ways of lock-ins, lock-outs, and shady business practices almost to the level of Intel do not guarantee success, and do not necessarily result in better market success than the more open model.

    You can be transparent, open and competitive at the same time.

    AMD can and should charge market prices for their products. People looking to AMD as the “fair priced, kind alternative” are idiots. They are a corporation with shareholders, just like Nvidia and Intel, and they have a legally binding fiduciary responsibility to maximize returns for those shareholders, but the shady practices of lock-ins and lock-outs for any reason other than technical compatibility should be shunned from any vendor. They should be competing on their merits, not on their market manipulations.

    Being open can actually make AMD MORE competitive, as people are more likely to buy their products if they don’t fear future lock-ins and lock-outs.

    Just because you can do something doesn’t mean that you should.
