This may come as a major shocker, but it turns out that Apple had its Reality Distortion Field tuned a little too high when it claimed that the new Mac Studio with the M1 Ultra chip could compete with, or even beat, NVIDIA's flagship GPU in performance. The Verge has shared a review of the new machine showing how the M1 Ultra falls far behind the GeForce RTX 3090 in both general performance and games such as Shadow of the Tomb Raider, which ran 30 FPS faster on Ampere hardware. Will Apple next claim that the Mac Studio 2 with M2 Ultra is faster than the GeForce RTX 4090 Ti? Stay tuned.
APPLE MAC STUDIO REVIEW: FINALLY (The Verge)
Apple, in its keynote, claimed that the M1 Ultra would outperform Nvidia’s RTX 3090. I have no idea where Apple’s getting that from. We ran Geekbench Compute, which tests the power of a system’s GPU, on both the Mac Studio and a gaming PC with an RTX 3090, a Core i9-10900, and 64GB of RAM. And the Mac Studio got… destroyed. It got less than half the score that the RTX 3090 did on that test — not only is it not beating Nvidia’s chip, but it’s not even coming close.
On the Shadow of the Tomb Raider benchmark, the RTX was also a solid 30 frames per second faster. Now, this is Apple gaming, of course, so Tomb Raider was not a perfect or even particularly good experience: there was substantial, noticeable micro stutter at every resolution we tried. This is not at all a computer that anyone would buy for gaming. But it does emphasize that if you’re running a computing load that relies primarily on a heavy-duty GPU, the Mac Studio is probably not the best choice.
I really want someone to come out and redefine the field, even if it is someone LIKE Apple. What it will take is someone redefining how to achieve performance. Maybe that will be with new quantum processors or whatever. I know it's coming. I just want to see a few more new iterations in my lifetime. Gotta be able to upload my mind to the cloud. I wonder if I will be self-aware in that state... hmmm...
Meanwhile, I have to laugh at the good/bad list: "Expensive" followed by "What you buy is what you're stuck with" cracked me up.
Edit: All the while I'll look over at my MSI Suprim 3090 and be happy knowing that it's got years before it'll really need to be replaced for my needs.
Also, an ARM CPU that completely destroys any console is no small feat. We have yet to see any x86 APU laptop/desktop that has better performance than an Xbox Series X, particularly on the graphics front.
Having a chip that easily doubles the performance of Apple's top end from last year is something that neither Intel nor AMD has achieved in decades.
At this point I even feel bad for Nvidia for losing the ARM deal. Who knows what they would have come up with.
It's a game they've been playing very, very well for the last decade.
Reluctantly agree, though again, Apple is leveraging advantages here, including shorter release cycles vs. consoles. On even ground the comparison might not be that stark, and more like Apple vs. Android in phones, where there are tradeoffs more than outright advantages, many of them coming down to individual opinion.
This is entirely by choice, likely driven by a lack of need in the market. AMD could release such a product at any time. As could Intel, at this point.
But for all the investment in bringing a high-powered APU to market - which, for the record, I'd love to see! - most buyers either want something that's competent but efficient, or are willing to put up with a dGPU, whether laptops or desktops are the point of comparison.
Can't say I agree here - Apple just very spectacularly glued two M1 Max dies together to produce the M1 Ultra, and that's something Intel and AMD do every time they double core counts (same for AMD and Nvidia in terms of GPUs). Apple's solution does appear to make it more likely that workloads will actually see that doubling of raw performance, but again, there's some selection bias involved, and we can't really run the same games head-to-head on the M1 Ultra and an RTX 3090.
I was tossing a coin between appreciating them for how much they might have been able to push the industry forward, and feeling disdain for the inevitable corporate bullying that would likely have accompanied their 'pushing'.
I'm betting that their failure to acquire ARM won't slow them down too much. Their CEO has an ego to keep inflated, after all!
Then again, this is Apple, and there will be a lot of people who buy one just to brag about having one rather than actually needing one.
Actually? People are loving Apple's APU, even at the prices they're charging. Thing is, Apple's APU has gobs of memory bandwidth to share between the CPU and GPU, like AMD's console APUs do but unlike any desktop-class APU. Apple's GPUs are then able to put out dGPU-class performance numbers, whereas any APU using a standard desktop memory controller is going to be severely bandwidth-limited in comparison.
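To put rough numbers on that bandwidth gap, here's a quick back-of-envelope sketch in Python. These are peak theoretical figures only; the DDR5-6000 config is just a representative pick on my part, and real-world sustained bandwidth is lower across the board:

# Peak memory bandwidth = transfers/s x bus width; figures are published specs.
def peak_bandwidth_gbps(transfer_mts: float, bus_bits: int) -> float:
    """Theoretical peak bandwidth in GB/s for a memory interface."""
    return transfer_mts * 1e6 * (bus_bits / 8) / 1e9

systems = {
    "Desktop DDR5-6000 (dual channel, 128-bit)": (6000, 128),
    "Xbox Series X GDDR6 (320-bit)":             (14000, 320),
    "Apple M1 Ultra LPDDR5 (1024-bit)":          (6400, 1024),
}

for name, (rate, bus) in systems.items():
    print(f"{name:44s} ~{peak_bandwidth_gbps(rate, bus):6.1f} GB/s")

That works out to roughly 96 GB/s for the desktop, 560 GB/s for the console, and ~819 GB/s for the M1 Ultra (Apple quotes 800 GB/s), which is exactly the dGPU-class gap being described.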
Well, they're targeting content creators, from the consumer to the amateur to the professional level. For that 'niche' market, Apple has tuned its entire hardware and software stack exceptionally well. And when you think about it, from a broad computer-market perspective, that's the heaviest work most people do. Apple is definitely on point when it comes to shipping high-performance systems.
I mean, sure? When your workload is that niche, you're likely looking at whatever is best suited to it. I'd imagine Windows or Linux, leaning heavily on CUDA and OpenCL. Apple's walled garden with custom APIs is likely a pretty big turnoff, as would be the inability to upgrade GPUs / compute engines.
I'm sure that's prevalent, but the thing is, Apple is making the best laptops one can buy, and some of the best computers one can buy, assuming that desktop gaming isn't a priority.
I say that as I've been contemplating both a 14" MBP, because it'd run circles around my 15" XPS while being lighter, quieter, and having over twice the battery life, and the latest Mac Studio in a minimal configuration. Thinking about it now, I'd probably just go with a MacBook Pro, but having a 10Gbit interface on the Studio is tempting!
And the prices for these Mac Studio devices are not crazy for content-creation workstations.
If I were in this market I would be hard pressed to find a better value.
DIY... well, they are out for a multitude of reasons... but then we are not their target market either.
If you think there is no market for performance APUs, think again. Apple is just the tip. The rest of the ARM pack is right behind. Probably none of them will reach Apple's capabilities any time soon, but they could still deal Intel/AMD a huge blow, especially in the laptop market.
The biggest impediment is that the low volume would keep unit prices higher than just tossing in a dGPU, which would also be faster. AMD would have to design a distinct platform around it to give it legs, and then get OEMs to adopt it.
Cost / benefit isn't there, even if many of us would want to see a higher-performance APU.
Edit: Hell, add a few TB of nonvolatile storage to that Infinity Fabric while we're at it.
The die sizes of these new APUs from Apple are, I'm rather confident, HUGE. The question is: how big is too big? Do we want motherboards having to deliver 300, 500, 1000 watts of power to the CPU, to be handled by what... a dual-radiator loop?
With a nigh-unlimited number of CCXs, memory, and GPU processing cores of your choice, we could easily build something that would topple everything else. But the cost would be ASTRONOMICAL.
When you add cores to a package, you usually still have common memory and I/O access, a common cache level, and a few other things that help keep things moving along nicely when you have work that needs to cross computational units.
Yeah, there are two dies glued together in the same package. But they don't share common cache or registers or anything - they have to work over that interposer. And that interposer is the real news story about the M1 Ultra.
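For a sense of scale on that interposer, here's a small sketch comparing Apple's claimed UltraFusion figure against more familiar links. These are all peak-theoretical marketing numbers, so treat them as order-of-magnitude only:

# Die-to-die link vs. familiar interconnects; UltraFusion is Apple's claim.
interconnects_gbps = {
    "PCIe 4.0 x16 (per direction)": 32,     # ~31.5 GB/s theoretical
    "PCIe 5.0 x16 (per direction)": 64,
    "Apple UltraFusion (claimed)":  2500,   # 2.5 TB/s per Apple's keynote
}

baseline = interconnects_gbps["PCIe 4.0 x16 (per direction)"]
for name, bw in interconnects_gbps.items():
    print(f"{name:30s} {bw:5d} GB/s  ({bw / baseline:5.1f}x PCIe 4.0 x16)")

That's roughly 78x a PCIe 4.0 x16 slot, which is why the two dies can look like one chip to software in a way that two GPUs over PCIe never could.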
Compared to:
RTX 3090 - 628mm2, 28 Billion transistors
Core i9-12900K - 215mm2, transistor count not disclosed (believed to be approx. 10.5 Billion)
AMD Epyc 7742 (64-core) - 74mm2 per CCD, 416mm2 IOD, 1,008mm2 total silicon, ~39.5 Billion transistors
Largest die I could find: Cerebras Wafer Scale Engine - 46,225mm2, >2,000 Billion transistors
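Since the list invites it, a quick density comparison (millions of transistors per mm2). The GA102 and Zen 2 CCD figures are published; the M1 Max's 57 Billion transistors is Apple's number, while its ~432mm2 die size is a third-party estimate, so treat that row as approximate:

# Transistor density in millions of transistors per mm^2.
dies = {
    # name: (transistors in billions, die area in mm^2)
    "RTX 3090 (GA102, Samsung 8N)":  (28.3, 628),
    "Zen 2 CCD (TSMC N7)":           (3.9, 74),
    "Apple M1 Max (TSMC N5, est.)":  (57.0, 432),
}

for name, (btr, area) in dies.items():
    print(f"{name:31s} ~{btr * 1000 / area:6.1f} Mtr/mm^2")

That comes out to roughly 45, 53, and 132 Mtr/mm2 respectively - a lot of Apple's density edge is simply being first on TSMC's 5nm node.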
So, I would think, it is already optimized out the wazoo, within the constraints of the existing designs.
That said, the design targets for those products are a lot different. There's a very good chance Apple optimized their hardware around tasks like Adobe apps, Final Cut, etc., whereas Nvidia/AMD are optimizing for gaming and AI, and Intel is optimizing for Passmark.
So yeah, you are right: now that I've talked myself around in a circle, there would be room for the 3 Stooges to optimize further, but I would think it might come at the expense of some of the optimizations they are implementing for more popular or general use cases.
(I kid with the Passmark part there - mostly)
Adobe could create a bare OS with all the needed goodies for running their stuff: a browser, some Adobe app store, and perhaps some office software and such.
If they are smart, they would allow the system to be open enough for general-purpose use, which would give them a headache, but would guarantee much more sales potential (not saying they would get the sales, just more potential).
Of all 3, I think Nvidia would be most capable of doing this relatively quickly, but I think all 3 could do it in at least the equivalent development time of a console. All of Apple's advantages, plus faster everything else if needed, would most likely be the result.
All I'm saying is Apple ain't doing miracles; they are unquestionably doing an excellent job in their (for now) safe, relatively tiny slice of the market.