Intel Core i9-10900K CPU Review

Introduction

This article is our Intel Core i9-10900K review. Please also check out our full review of the Intel Core i5-10600K CPU and our Intel Z490 Chipset Information article.

You can always tell when Intel feels remotely threatened by a competitor. It tends to react in a knee-jerk fashion, spewing processor models onto the market whether they make a whole lot of sense or not. Intel even tends to launch CPU models whether it can actually supply them or not, but that’s another subject I’ll probably delve into at another time.

After getting beaten by AMD’s Ryzen 3000 series for the better part of a year, Intel has finally reacted and is officially launching its 10th Generation Core Desktop Processors. We’ll touch on the other models only slightly; this review covers the Intel Core i9-10900K desktop CPU only. We were sent two review CPUs: the Core i9-10900K and the Core i5-10600K. You can see Brent’s review of that CPU here. We will get to the other models over time. There simply isn’t enough time allocated before these launches to do the entire series in one go, assuming we’d ever get the entire lineup at once.

Tough Competition

For those of you who are familiar, Intel’s been pummeled by the Ryzen 3000 series for quite some time. Its then-mainstream flagship, the Core i9-9900K, was bested in virtually every test outside of the gaming arena. The general consensus was that Intel largely lost that battle despite still being able to claim that its 9900K and later 9900KS models were the world’s fastest gaming processors. Intel held this title solidly, but only by an aggregate margin of around 5-6% across the board. Some specific games saw larger gaps in performance favoring Intel, while others showed smaller ones. However, in general terms it was Intel’s last bastion of dominance.

In multi-threaded workloads, we saw the 9900K get absolutely destroyed in any workload that could make use of more than 8 threads. I say 8 threads because, beyond that, Intel had to depend on logical processors, whereas AMD’s 3900X and 3950X had more physical cores and even more threads on top of that. There were even times when the much lower-priced 3700X and 3800X weren’t far behind Intel, or were ahead of it, at a substantially lower cost. Intel reacted with price cuts, but it’s fair to say that enthusiasts largely favored AMD for price/performance or raw performance unless gaming was their only goal.

Even then, AMD’s platform has a much longer life span and an actual upgrade path. Intel’s LGA 1151 socket was known to be at the end of the line when CPUs like the 9900KS were launching. This is a problem shared by Intel’s HEDT flagship, the 10980XE: X299 is rather long in the tooth, as is LGA 2066. AMD’s AM4 is still going strong and represented a better investment, one surely worth a slight trade-off in gaming performance when we are talking about a difference of 6% or less, largely obscured by GPU performance anyway.


Comments

  1. So the memory on the Zen system was 3200 or 3600? I know the kit was 3600 but I am just double checking.

  2. It was set to DDR4 3200MHz speeds, which is our testing standard for everything unless otherwise noted. If you look at the specification table, I list the part number for the RAM and then the speed used. That’s how I do it for all of these.

  3. Oof, missed that. Was the platform unable to hit 3600?

  4. Yes, it can easily hit DDR4 3600MHz speeds and more. I’ve addressed the Ryzen 3000 / X570 memory speeds in previous CPU and motherboard review articles. Given the time allotted for getting the 10900K review done by the embargo date, I was not able to retest the 3900X and 9900K under overclocked conditions. Even if I had, memory overclocking is handled separately, as we try to keep that variable out of the benchmarks unless that’s what we are testing.

  5. Good review and well written. Nothing stood out as a glaring inconsistency.

    It will be interesting to see what happens when code that has been heavily optimized for a 10+ year old instruction set actually has to run on something new.

    This is what AMD is doing, and I think that is a large reason so many of the normal work and gaming examples were performing better on Intel (other than raw execution speed).

    I might be way off base in thinking that coders are using older optimizations that simply don’t exist on the newer AMD silicon.

  6. Intel has always pushed software companies to optimize for Intel silicon, going back at least as long as I’ve worked with computer hardware. There are all kinds of SDKs and programs for doing that. Intel even mentions this in the product brief we got (what little there was of it anyway). But this is one reason why I think Intel achieves so much despite the lack of cores and threads compared to AMD. Sure, clock speed and cache are part of that too, but I think optimization for Intel silicon comes into play in cases where we know something is multi-threaded, yet Intel still manages to pull a big win vs. AMD.

    It’s worth noting that Ghost Recon Breakpoint was optimized for AMD silicon, and it shows. The results between the 9900K and the 3900X are quite similar. The only reason the Core i9-10900K beats either of them comes down to clock speed and additional cache; the extra threads don’t really matter. If I recall correctly, Ghost Recon Breakpoint only uses 12 threads, or at least that’s all it shows in the in-game performance metrics. Something like that.

  7. I find the 400 fps difference in Doom quite huge for the little difference between the CPUs, but I guess the average tells another story and the mins are even stranger.

    Any chance of a quick retest when the new Doom patch hits next week or so to see if that did anything?

    Yes. I’d have looked more into the anomalous performance if I had the time. That said, it’s easily something I could have done differently. Those are FrameView captures of manual run-throughs. I could have done something with the camera, or done something slightly different that caused that in some of the runs. If you run into a wall and stare at it in most games, your FPS shoots up, or if you explode an enemy at point blank, it can drop substantially. That’s why I prefer canned benchmarks for these types of things, but not every game that people are interested in has built-in tools for that.
