Power Consumption

Generally, it is this author’s opinion that the power consumption of a desktop CPU matters relatively little given the role it plays. Battery life isn’t a concern, and standard desktop form factors allow for more substantial cooling methods.

That’s not to say that I don’t believe power matters. It does. Performance per watt for Intel’s Core i9 10980XE was absolutely awful compared to AMD’s Ryzen 9 3950X. While the 10980XE was often faster (especially when overclocked), it pulled nearly twice as much power while having only two more cores. This is an extreme example, but it does make a difference. In my opinion, you absolutely need custom liquid cooling to get the most out of any 10980XE build. Of course, all things being equal between two CPUs, it makes sense to get the one that uses less power.
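
For clarity, performance per watt is simply a benchmark score divided by the power drawn while producing it. The short Python sketch below illustrates the math with made-up placeholder numbers rather than anything we measured:

    # Minimal performance-per-watt comparison. The scores and wattages here are
    # hypothetical placeholders for illustration, not measured results.
    systems = {
        "CPU A": {"score": 9000, "load_watts": 380},
        "CPU B": {"score": 8800, "load_watts": 220},
    }

    for name, data in systems.items():
        points_per_watt = data["score"] / data["load_watts"]
        print(f"{name}: {points_per_watt:.1f} points per watt")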

As I stated earlier, Intel increased the TDP of the 10900K to 125W, up from the 9900K’s 95W. That figure only applies at stock clock speeds; once turbo frequencies come into play, power draw gets quite a bit larger.

Power Testing

Our power consumption methodology is quick and dirty. It involves using a Kill-A-Watt device with only the test machine connected to it, sans monitor and any external devices. Idle power is measured at the Windows desktop on a clean system doing nothing but running background tasks. Load testing is done by running Cinebench R20’s multi-threaded test, and the power draw is read from the Kill-A-Watt during the run.

These devices are not known for being especially accurate, so this is a ballpark measurement. Cinebench R20 is used largely because it doesn’t utilize the GPU, which keeps that component’s impact on the reading to a minimum. All the systems we’ve tested from day one have been tested in this manner.
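
The basic Kill-A-Watt has no data-logging interface, so the readings are taken by eye. A minimal Python sketch of how those manual wall readings could be recorded and compared (the prompts and workflow here are illustrative, not part of our actual tooling):

    # Record manually observed Kill-A-Watt wall readings and report the delta.
    # The meter is read by eye; this only captures and compares the numbers.
    def read_watts(prompt: str) -> float:
        return float(input(f"{prompt} (watts): "))

    idle = read_watts("Idle at the Windows desktop")
    load = read_watts("Cinebench R20 multi-threaded run")

    print(f"Idle draw: {idle:.0f} W")
    print(f"Load draw: {load:.0f} W")
    print(f"Load over idle: {load - idle:.0f} W (whole system, at the wall)")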

Intel Core i9-10900K Power Testing

While idling at stock speeds, the power consumption of the 10900K was surprisingly low, coming in at 85W. This is a fairly substantial improvement over its predecessor, which idled at 115W in this example.

Intel Core i9-10900K Power Testing Overclocked

However, things get ugly when you overclock the 10900K. Overclocking quickly pushes the system’s total power draw to 366W under load. Idle increases to 106W, which is still below that of the 9900K. Clearly, Intel has improved efficiency enough to accommodate two additional cores while still reducing idle consumption somewhat. That said, this is in no way a good result. The CPU uses far more total power under load than the Ryzen 9 3900X does, despite the latter having two more cores. In fact, while not shown here, these numbers are closer to those of a Ryzen 9 3950X, which has six more cores and twelve more threads than the 10900K.
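
For some perspective on those overclocked figures, subtracting the idle reading from the load reading gives a rough upper bound on what the CPU and its power delivery are pulling during the Cinebench run (rough because PSU efficiency losses are baked into wall measurements). A quick Python sketch using the numbers above:

    # Rough load-over-idle delta for the overclocked 10900K system, using the
    # wall readings quoted above. PSU losses are included, so this overstates
    # the actual CPU package power somewhat.
    idle_watts = 106   # overclocked, idle at the Windows desktop
    load_watts = 366   # overclocked, Cinebench R20 multi-threaded

    print(f"Load over idle: {load_watts - idle_watts} W at the wall")  # 260 W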

Temperature Testing

Temperature testing is similarly quick and dirty. Using Cinebench R20 to load the cores, we checked the package temperature via AIDA64. While I do have an overclocked result, it’s important to note that these aren’t final overclocking temperatures and aren’t indicative of long-term stability testing results.
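
AIDA64 can log its sensor readings to a CSV file, so the peak package temperature over a run can also be pulled out after the fact. A minimal Python sketch, assuming a log named aida64_log.csv with a "CPU Package" column (the file name and column header are assumptions, not AIDA64’s exact output):

    # Pull the peak CPU package temperature out of an AIDA64 sensor log.
    # File name and column header are assumptions; adjust them to match
    # however AIDA64 is configured to label its CSV output.
    import csv

    peak = float("-inf")
    with open("aida64_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            try:
                peak = max(peak, float(row["CPU Package"]))
            except (KeyError, ValueError):
                continue  # skip rows without a usable package reading

    print(f"Peak package temperature: {peak:.1f} C")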

Intel Core i9-10900K Temperature Testing

At stock speeds, the temperatures for the 10900K were extremely low. This shows Intel has definitely improved its thermal efficiency somehow. The older 9900K, in the same circumstances, has two fewer cores and runs considerably warmer. I had thought the reading might be in error until the 10900K was overclocked, at which point it reached much higher temperatures more in line with the other test systems.

Join the Conversation

8 Comments

  1. So the memory on the Zen system was 3200 or 3600? I know the kit was 3600 but I am just double checking.
  2. It was set to DDR4 3200MHz speeds which is our testing standard for everything unless otherwise noted. If you look at the specification table, I list the part number for the RAM and then the speed used. That’s how I do it for all of these.
  3. Oof, missed that. Was the platform unable to hit 3600?
  4. Yes, it can easily hit DDR4 3600MHz speeds and more. I’ve addressed the Ryzen 3000 / X570 memory speeds in previous CPU and motherboard review articles. Given the time allotted for getting the 10900K review done by the embargo date, I was not able to retest the 3900X and 9900K under overclocked conditions. Even if I had, memory overclocking is handled separately, as we try to keep that variable out of the benchmarks unless that’s what we are testing.

  5. Good review and well written. Nothing stood out as a glaring inconsistency.

    It will be interesting to see what happens when code that has been heavily optimized for a 10+ year old instruction set actually has to run on something new.

    This is what AMD is doing, and I think that is a large reason so many of the normal work and gaming examples were performing better on Intel (other than raw execution speed).

    I might be way off base in thinking that coders are using older optimizations that simply don’t exist on the newer AMD silicon.

  6. Intel has always pushed software companies to optimize for Intel silicon, going back at least as long as I’ve worked with computer hardware. There are all kinds of SDKs and programs for doing that. Intel even mentions this in the product brief we got, what little there was of it anyway. This is one reason why I think Intel achieves so much despite having fewer cores and threads than AMD. Sure, clock speed and cache are part of that too, but I think optimization for Intel silicon comes into play in cases where we know something is multi-threaded, yet Intel still manages to pull a big win vs. AMD.

    It’s worth noting that Ghost Recon Breakpoint was optimized for AMD silicon, and it shows. The results between the 9900K and the 3900X are quite similar. The only reason the Core i9 10900K beats either of them comes down to clock speed and additional cache; that, and the extra threads don’t really matter. If I recall correctly, Ghost Recon Breakpoint only sees 12 threads, or at least that’s all it shows in the in-game performance metrics. Something like that.

  7. I find the 400 fps difference in Doom quite huge given the little difference between the CPUs, but I guess the average tells another story, and the minimums are even stranger.

    Any chance of a quick retest when the new Doom patch hits next week or so, to see if that did anything?

  8. Yes. I’d have looked more into the anomalous performance if I had the time. That said, it’s easily something I could have done differently. Those are FrameView captures of manual run-throughs. I could have done something with the camera, or done something slightly different, that caused that in some of the runs. If you run into a wall and stare at it in most games, your FPS shoots up, or if you explode an enemy at point-blank range, it can drop substantially. That’s why I prefer canned benchmarks for these types of things, but not every game that people are interested in has built-in tools for that.
