Gaming Benchmarks Part 1
Now we come to the meat and potatoes of the Core i9-10900K. This is the follow-up to the world’s fastest gaming processor, and Intel claims it is the new king of that arena. Frankly, it has to be, as it gets pummeled pretty badly by the Ryzen 9 3900X, which has been around for the better part of a year now.
Generally, we are more GPU bound than CPU bound, but as you’ll see, the CPU still matters even in GPU-bound conditions. Many people also still game at 1080p, which highlights stronger CPUs, as that resolution is somewhat CPU bound.
3DMark 10
Intel CPUs tend to outscore their AMD counterparts in this test, so I am not terribly shocked by the outcome. We can also see that an all-core 5.1GHz overclock earns us some extra points here.
Destiny 2
I apologize for the ugly graphs. The logarithmic scales were necessary given the huge gap between the minimum and maximum frame rates here. As you can see, the 10900K achieves the best performance across the board when overclocked. Even so, our FPS can drop very low in areas with lots of enemies, explosions, and weapons fire. Intel gains quite a bit over the 9900K thanks to the 10900K’s improved clock speed and added cache.
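For anyone curious about the graphing choice, here is a minimal matplotlib sketch of why a logarithmic Y axis helps when minimums and maximums sit an order of magnitude apart. The FPS numbers below are made up for illustration, not the review’s actual data:

```python
# Illustrative sketch: plotting min/avg/max FPS on a log-scale Y axis.
# All numbers here are hypothetical, not measured results.
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

cpus = ["9900K", "3900X", "10900K", "10900K OC"]  # hypothetical lineup
min_fps = [18, 24, 26, 29]
avg_fps = [142, 138, 155, 163]
max_fps = [380, 360, 410, 425]

fig, ax = plt.subplots()
x = range(len(cpus))
ax.bar([i - 0.25 for i in x], min_fps, width=0.25, label="Min")
ax.bar(list(x), avg_fps, width=0.25, label="Avg")
ax.bar([i + 0.25 for i in x], max_fps, width=0.25, label="Max")
# On a linear axis the minimum bars would be near-invisible slivers
# next to 400+ FPS maximums; a log scale keeps both readable.
ax.set_yscale("log")
ax.set_xticks(list(x))
ax.set_xticklabels(cpus)
ax.set_ylabel("FPS (log scale)")
ax.legend()
fig.savefig("fps_log.png")
```

The same data on a linear axis would compress the 18–29 FPS minimums into bars a few pixels tall, which is exactly the readability problem the log scale avoids.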
Again, at 4K we see solid gains going from the 9900K to the 10900K. It is interesting to note that the 9900K has the worst minimum frame rates here, a sharp contrast to almost a year ago when I first tested the 3900X in this game. Aside from the maximum frame rates, the 3900X is a better choice than the aging 9900K. However, the 10900K is the fastest where it counts, although I can’t explain why the 9900K still has higher maximums.
Oof, missed that. Was the platform unable to hit 3600?
Yes, it can easily hit DDR4 3600MHz speeds and more. I’ve addressed the Ryzen 3000 / X570 memory speeds in previous CPU and motherboard review articles. Given the time allotted for getting the 10900K review done by the embargo date, I was not able to retest the 3900X and 9900K under overclocked conditions. Even if I had, memory overclocking is handled separately as we try to keep that variable out of the benchmarks unless that’s what we are testing.
It will be interesting to see what happens when code that has been heavily optimized for a 10+ year old instruction set actually has to run on something new.
This is what AMD is doing, and I think that is a large reason so many of the normal work and gaming examples were performing better on Intel (other than raw execution speed).
I might be way off base in thinking that coders are using older optimizations that simply don’t exist on the newer AMD silicon.
It’s worth noting that Ghost Recon Breakpoint was optimized for AMD silicon, and it shows. The results between the 9900K and the 3900X are quite similar. The only reason the Core i9-10900K beats either of them comes down to clock speed and additional cache; the extra threads don’t really matter. If I recall correctly, Ghost Recon Breakpoint only uses 12 threads, or at least, that’s all it shows in the in-game performance metrics. Something like that.
Any chance of a quick retest when the new Doom patch hits next week or so, to see if that did anything?
Yes. I’d have looked more into the anomalous performance if I had the time. That said, it’s easily something I could have done differently. Those are FrameView captures of manual run-throughs; I could have done something with the camera, or done something slightly different, that caused that in some of the runs. If you run into a wall and stare at it in most games, your FPS shoots up; if you explode an enemy at point-blank range, it can drop substantially. That’s why I prefer canned benchmarks for these types of things, but not every game people are interested in has built-in tools for that.
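To illustrate why a single point-blank explosion can crater the minimums, here is a quick sketch of how per-frame times (FrameView logs milliseconds between frames) turn into min/avg/max FPS. The frame times below are made up for illustration, not pulled from these captures — note how two 40 ms hitches tank the minimum while barely moving the average:

```python
# Hypothetical sketch: converting per-frame times (ms) into FPS stats.
# The numbers are illustrative, not data from the review's captures.
frame_times_ms = [6.9, 7.1, 7.0, 6.8, 7.2, 41.5, 39.8, 7.0, 6.9, 7.1]

def fps_stats(times_ms):
    """Summarize a frame-time capture as (min, avg, max) FPS."""
    # Instantaneous FPS for each frame: 1000 ms / frame time in ms.
    fps = [1000.0 / t for t in times_ms]
    # True average FPS: total frames divided by total elapsed seconds,
    # not the mean of the per-frame FPS values.
    avg = len(times_ms) / (sum(times_ms) / 1000.0)
    return min(fps), avg, max(fps)

lo, avg, hi = fps_stats(frame_times_ms)
print(f"min {lo:.1f} / avg {avg:.1f} / max {hi:.1f} FPS")
```

Two ~40 ms frames out of ten drag the minimum down to roughly 24 FPS even though the run averages above 70 FPS, which is exactly the kind of swing a manual run-through can introduce and a canned benchmark avoids.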