Gaming Benchmarks Part 1

Now we come to the meat and potatoes of the Core i9-10900K. This is the follow-up to the world’s fastest gaming processor, and Intel claims it is the new king of that arena. Frankly, it has to be, as it gets pummeled pretty badly by the Ryzen 9 3900X, which has been around for the better part of a year now.

Generally, we are more GPU bound than CPU bound, but as you’ll see, the CPU still matters even in GPU-bound conditions. Many people also still game at 1080p, a somewhat CPU-bound resolution that highlights stronger CPUs.

3DMark

[Chart: Intel Core i9-10900K 3DMark results]

Intel CPUs tend to outscore their AMD counterparts in this test, so I am not terribly shocked by the outcome. We can also see that an all-core 5.1GHz overclock earns us some extra points here.

Destiny 2

[Chart: Intel Core i9-10900K Destiny 2 results]

I apologize for the ugly graphs; the logarithmic scales were necessary given the huge gap between the minimum and maximum frame rates here. As you can see, the 10900K achieves the best performance across the board when overclocked. Even so, our FPS can drop really low in areas with lots of enemies, explosions, and weapons fire. The 10900K also gains quite a bit over the 9900K thanks to its improved clock speed and added cache.

[Chart: Intel Core i9-10900K Destiny 2 results (4K)]

Again, at 4K we see solid gains going from the 9900K to the 10900K. It is interesting to note that the 9900K has the worst minimum frame rates here, a sharp contrast to almost a year ago when I first tested the 3900X in this game. Aside from the maximum frame rates, the 3900X is a better choice than the aging 9900K. However, the 10900K is the fastest where it counts, although I can’t explain why the 9900K still has higher maximums.

Join the Conversation

8 Comments

  1. So the memory on the Zen system was 3200 or 3600? I know the kit was 3600 but I am just double checking.
  2. It was set to DDR4 3200MHz speeds, which is our testing standard for everything unless otherwise noted. If you look at the specification table, I list the part number for the RAM and then the speed used. That’s how I do it for all of these.
  3. Oof, missed that. Was the platform unable to hit 3600?

  4. Yes, it can easily hit DDR4 3600MHz speeds and more. I’ve addressed the Ryzen 3000 / X570 memory speeds in previous CPU and motherboard review articles. Given the time allotted for getting the 10900K review done by the embargo date, I was not able to retest the 3900X and 9900K under overclocked conditions. Even if I had, memory overclocking is handled separately, as we try to keep that variable out of the benchmarks unless that’s what we are testing.

  5. Good review and well written. Nothing stood out as a glaring inconsistency.

    It will be interesting to see what happens when code that has been heavily optimized for a 10+ year old instruction set actually has to run on something new.

    This is what AMD is doing, and I think that is a large reason so many of the normal work and gaming examples were performing better on Intel (other than raw execution speed).

    I might be way off base in thinking that coders are using older optimizations that simply don’t exist on the newer AMD silicon.

  6. Intel has always pushed software companies to optimize for Intel silicon, going back at least as long as I’ve worked with computer hardware. There are all kinds of SDKs and programs for doing that, and Intel even mentions this in the product brief we got, what little there was of it anyway. This is one reason why I think Intel achieves so much despite the lack of cores and threads compared to AMD. Sure, clock speed and cache are part of that too, but I think optimization for Intel silicon comes into play in cases where we know something is multi-threaded, yet Intel still manages to pull a big win vs. AMD. (A minimal sketch of this kind of feature-based code dispatch appears after the comments.)

    It’s worth noting that Ghost Recon Breakpoint was optimized for AMD silicon, and it shows. The results between the 9900K and the 3900X are quite similar, and the only reason the Core i9-10900K beats either of them comes down to clock speed and additional cache. The extra threads don’t really matter; if I recall correctly, Ghost Recon Breakpoint only uses 12 threads, or at least that’s all it shows in the in-game performance metrics.

  7. I find the 400 fps difference in Doom quite huge given the little difference between the CPUs, but I guess the average tells another story, and the minimums are even stranger.

    Any chance of a quick retest when the new Doom patch hits next week or so, to see if that did anything?

  8. Yes. I’d have looked more into the anomalous performance if I had the time. That said, it’s easily something I could have done differently. Those are FrameView captures of manual run-throughs; I could have done something with the camera or done something slightly different that caused that in some of the runs. If you run into a wall and stare at it in most games, your FPS shoots up, and if you explode an enemy at point blank, it can drop substantially. That’s why I prefer canned benchmarks for these types of things, but not every game people are interested in has built-in tools for that. (See the frametime sketch below for how the min/average/max numbers fall out of a capture.)
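On the optimization question in comment 6: one common mechanism behind vendor-specific performance is runtime dispatch, where a program ships multiple code paths and picks one at startup based on the CPU features it detects. Below is a minimal, hypothetical sketch using the GCC/Clang builtin __builtin_cpu_supports; the function names and the AVX2/scalar split are my own illustrations, not anything taken from the games or SDKs discussed above.

```cpp
#include <cstdio>

// Hypothetical portable fallback path.
static void blend_frames_scalar() {
    std::puts("using scalar path");
}

// Hypothetical AVX2-optimized path (illustrative only).
static void blend_frames_avx2() {
    std::puts("using AVX2 path");
}

int main() {
    // GCC/Clang builtin that queries CPUID for a feature flag. Keying
    // off the feature flag treats Intel and AMD alike; dispatchers that
    // instead keyed off the "GenuineIntel" vendor string are how AMD
    // parts could historically end up on slower code paths.
    if (__builtin_cpu_supports("avx2")) {
        blend_frames_avx2();
    } else {
        blend_frames_scalar();
    }
    return 0;
}
```

Built this way, the same binary takes the fast path on any CPU that reports AVX2, which is how a well-behaved dispatcher treats both vendors.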
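On the Doom anomaly in comments 7 and 8: the minimum, average, and maximum FPS all fall out of the same per-frame timing log, so a single odd frame in a manual run-through can swing the minimum or maximum wildly while barely moving the average. Here is a minimal sketch of that math, assuming one frametime in milliseconds per line on stdin (for example, a column pulled out of a FrameView capture); the 99th-percentile cutoff for the 1% low is an assumption, not the site's exact methodology.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // One frametime in milliseconds per line on stdin.
    std::vector<double> ms;
    for (double v; std::cin >> v; ) {
        if (v > 0.0) ms.push_back(v);
    }
    if (ms.empty()) return 1;

    double total = 0.0;
    for (double v : ms) total += v;

    // Per-frame FPS is 1000 / frametime_ms, so the slowest frame sets
    // the minimum FPS and the fastest frame sets the maximum.
    std::sort(ms.begin(), ms.end());      // ascending frametimes
    double avg_fps = 1000.0 * ms.size() / total;
    double max_fps = 1000.0 / ms.front(); // fastest frame
    double min_fps = 1000.0 / ms.back();  // slowest frame

    // 1% low: FPS at the 99th-percentile frametime. One wall-stare or
    // point-blank explosion in a manual run can move this a lot.
    double p99 = ms[ms.size() * 99 / 100];

    std::cout << "min " << min_fps << "  avg " << avg_fps
              << "  max " << max_fps << "  1% low " << 1000.0 / p99 << "\n";
    return 0;
}
```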
