Corsair Community

Delta between CPU and Block - Question/Problem



Hello!

 

I built a new system last month with an H150i Elite Capellix cooler and an Intel i9-10900K CPU. - I love this new system, but I have concerns about the cooling.

 

When the CPU cranks up, the difference between the CPU temp and the cooler's pump (coolant) temp looks like it's becoming too wide. When the system is at rest, those temps are within a few degrees of each other. When the CPU is at 100% (not OC'd), the CPU temp goes up to nearly the acceptable max for this processor.

 

Since the pump temp stays fairly cool at idle (~32C) and the disparity between the pump and the CPU gets wider (>40C) as CPU use increases, it suggests the problem could lie in one of two areas: #1, maybe the thermal paste is not conductive enough, or #2, maybe the H150i isn't capable of handling all the heat this CPU gives off.

 

I have an older machine running an i7 and an older H100, and I don't recall there being such a big difference between those two temps.

 

I am also using one of the most highly rated thermal pastes, according to one of the overclocking websites.

 

If anyone has a good answer/suggestion, please reply.

 

Thanks,


CPU and coolant temperature are not related in the way you think. The only time they will ever be equal is when the system is completely powered off and both become the temperature of the local environment.

 

1) In any given instant, CPU temperature is the result of the amount of voltage applied to the CPU and its physical properties. In that one instant, cooling type, design, and size do not matter. 1.30v to your 10900K @ 100% will produce 70C (or whatever temperature) on every cooler in existence. The cooling device comes into play as time is factored in and the heat must be transferred elsewhere. A "hot CPU" does not necessarily mean more heat into the cooler. The CPU's temperature is a function of its own properties and voltage. Heat into the cooler is a function of watts. A dual-core CPU at 100C will output far fewer watts and less heat than a 16-core CPU at 60C. This is why you cannot compare different CPUs on different coolers, or even different CPUs at all, as a measure of whether the cooler is working.
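As a rough back-of-the-envelope illustration (the wattage and temperature figures below are hypothetical examples, not measurements of any specific chip):

```python
# Heat into the loop depends on package watts, not on how hot the die reads.
# All figures below are hypothetical examples.

cpus = {
    # name: (die temperature in C, package power in W)
    "old dual-core": (100, 45),
    "16-core":       (60, 220),
}

for name, (die_temp_c, package_watts) in cpus.items():
    # Energy dumped into the cooler over one minute (joules = watts x seconds)
    kj_per_minute = package_watts * 60 / 1000
    print(f"{name}: {die_temp_c}C die at {package_watts}W "
          f"-> ~{kj_per_minute:.0f} kJ/min into the cooler")
```

The hotter chip in that example is the one putting far less energy into the cooler, which is the whole point.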

 

2) The coolant temperature is a comparative measure of how much waste heat has been conducted into the coolant stream. Its relationship to the CPU is that it serves as the baseline of the lowest possible CPU temperature. If your coolant is 35C, then the CPU cannot be below 35C. As such, when coolant goes +1C, CPU temp goes +1C. Same in the other direction: -1C and -1C. If you add X watts to Y amount of water, it will raise the temperature by Z degrees, less the amount of heat expelled by blowing it off through the radiator. You don't have to solve that calculation, just remember the 1:1 increase and decrease when trying to assess required fan speeds and what is or is not necessary.
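If you did want to put rough numbers on it, here is a minimal sketch. The loop water mass is an assumption (closed-loop coolers don't publish it) and the radiator figure is just an example; the shape of the calculation and the 1:1 tracking are what matter.

```python
# Back-of-the-envelope coolant warm-up. The 0.3 kg water mass is an assumed
# value for a closed loop, not a published spec; 4186 J/(kg*C) is the
# specific heat of water.

WATER_MASS_KG = 0.3
SPECIFIC_HEAT = 4186  # joules per kg per degree C

def coolant_rise_c(watts_in, watts_out_radiator, seconds):
    """Net coolant temperature rise over a period of time."""
    net_joules = (watts_in - watts_out_radiator) * seconds
    return net_joules / (WATER_MASS_KG * SPECIFIC_HEAT)

# Example: 200W into the loop, radiator shedding 150W at the current fan speed
rise = coolant_rise_c(watts_in=200, watts_out_radiator=150, seconds=60)
print(f"Coolant climbs ~{rise:.1f}C per minute until the radiator catches up")

# The 1:1 rule: the CPU rides on top of the coolant.
coolant_c, differential_c = 35, 40
print(f"CPU sits around {coolant_c + differential_c}C at that coolant temp")
```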

 

3) The difference between the coolant temperature baseline and the actual CPU temperature is often referred to as the CPU/coolant differential. Its value as a metric for comparing different CPUs and systems is a bit slippery. It gives you some idea of the physical conductivity of a particular CPU (a 10900K that is 65C @ 1.30v may have more overclock room than one that is 75C @ 1.30v). The range will vary quite a bit between different CPUs and from sample to sample, but generally speaking most people will see a differential between 30 and 50C above coolant. The dominant factor in this is voltage. Underclock your 10900K to 1.05v and watch the number drop to +25C. Stretch the voltage as high as you can and you wind up around +50C or slightly more. However, what the CPU/coolant differential is most useful for is understanding the highest coolant temperature you can allow before the CPU hits your limit.

 

I like to use the CPU-Z bench stress test for this, but any fixed load or Linpack-type test can work. Note the coolant temp, start the test. 2 seconds is all you need to determine the differential. So if my differential is +40C and my personal limit for CPU temp is 85C, then I know I must always keep my coolant below 45C. Again, voltage is the primary factor and the only one you can really change. You can play with different TIM compounds all day, but ±1-2C is the most you will ever see. That's not really what most people are looking for. Delidding is a more substantial improvement in conductivity and one that will lower the differential, but not something for everyone. However, the most effective step is always going to be leaning out any unnecessary voltage. Auto settings will always apply more than is necessary by a decent margin. That is their purpose - make even the worst CPU made boot up and not crash.
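In rough arithmetic terms (the 32C/72C readings below are made-up example values chosen to match the +40C differential above, not readings from your system):

```python
# Turn a measured CPU/coolant differential into a coolant ceiling.
# The readings below are example values, not measurements.

def coolant_ceiling_c(cpu_limit_c, cpu_under_load_c, coolant_c):
    differential = cpu_under_load_c - coolant_c   # e.g. 72 - 32 = +40C
    return cpu_limit_c - differential             # coolant temp you must stay under

# Coolant at 32C when the fixed load starts, CPU settles at 72C,
# personal CPU limit of 85C -> keep the coolant below 45C.
print(coolant_ceiling_c(cpu_limit_c=85, cpu_under_load_c=72, coolant_c=32))
```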

 

I can share some differential points and various voltage levels, but I need to know where you are at now.


c-attack,

 

Because I was seeing many of my video rendering applications cranking the GPU up to 100%, and watching the CPU temp and cores crawl uncomfortably up to 99C with the H150i seemingly unable to cope with it, I dropped all of my OC settings from 5.1 back to default.

 

Even with overclocking, I rarely ever change the voltages away from default unless absolutely necessary. - Funny thing is that the extended CPU stress tests running at 100% never got the temps that high. I guess that may be due to the MB's power management circuitry cranking the voltages up 'automatically.' (As a retired programmer, I appreciate the fact that there is really no such thing as automatic.)

 

I did finally (almost) remedy the video rendering problem by tweaking some of the GPU settings in most of them. Even so, one of them now uses 11% CPU and works the 3090 GPU much harder, but the CPU temps still go significantly higher than they should.

 

Unfortunately, dropping to optimized defaults did very little to affect the CPU temps. I would think that the default settings would do better than that.


I do not have a GA board to compare, but "optimized defaults" or standard settings rarely mean most efficient or least power hungry. On many boards, it is quite the opposite. My Asus Z490 will release all power limits in that state and suggests 1.56v for my 5.2x10 setting, instead of the 1.375v I actually need. If you are overclocking on auto voltage, you will see a high CPU temp and a large differential.
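To put a rough number on why auto voltage hurts so much: dynamic power scales roughly with the square of voltage at a fixed clock (a first-order approximation; leakage only widens the gap). Using my two voltages above as the example:

```python
# First-order estimate of the extra heat from auto voltage.
# Dynamic CPU power scales roughly with V^2 at a fixed clock; this
# ignores leakage current, which makes the real gap even larger.

auto_v, tuned_v = 1.56, 1.375
extra = (auto_v / tuned_v) ** 2 - 1
print(f"~{extra * 100:.0f}% more package power at the same clock")  # ~29%
```

That extra ~29% of heat goes straight into the CPU/coolant differential and into the loop.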

 

I am assuming your H110i is in the top of the 750D, as that is the only logical placement. However, that does put it in the GPU heat zone, and coolant temperature will rise as the ambient temperature in that local area does. It is fairly common for users with heavy, long-duration GPU loads to see their highest coolant temperatures in that state, and notably lower ones when running a pure CPU stress test. The problem is that making adjustments here is difficult, and a victory would be clawing back a couple of degrees. That is small change compared to the instant reduction you get by getting your voltage dialed in.


I am currently running a 3090 on air while waiting for the waterblock and this thing just heats up the loop and the whole case like crazy.

 

In heavy GPU rendering, you basically have a glorified CPU heater keeping your AIO warm.

Of course we can't directly compare temps, but if, like me, you have 45-50°C air blowing on the radiator, that's all the cooling headroom you're losing. Even a moderate load will send CPU and water temps pretty high because the radiator can't get rid of the heat.
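As a rough sketch of what that costs (the 5°C radiator approach and the +40°C differential below are assumed round numbers, not measurements from either of our systems):

```python
# The radiator can only pull the coolant down toward the temperature of the
# air passing through it. The offsets below are assumptions for illustration.

def cpu_floor_c(intake_air_c, rad_approach_c=5, differential_c=40):
    """Lowest CPU temperature the loop can reach under a steady load."""
    coolant_floor = intake_air_c + rad_approach_c   # coolant can't get below this
    return coolant_floor + differential_c           # CPU rides the differential above coolant

for intake in (25, 45):  # cool room air vs. GPU exhaust hitting the radiator
    print(f"{intake}C intake air -> CPU floor around {cpu_floor_c(intake)}C under load")
```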

 

It would be interesting to know what kind of water temps you get on the AIO while doing GPU rendering. Maybe tweaking airflow would work (like setting the AIO as intake, cranking the exhaust fans more when rendering, etc.).


Well, I am running an H150i Capellix cooler, and I do understand ambient room temperature, heat soak, and heat transfer through various materials.

 

My first hunch about my overheating is more or less what c-attack has described. I'll try a bit of that first.

 

If that doesn't cut it sufficiently, I may take a look at whether I trapped a bubble in the thermal compound when I applied it. - I used the (gasp) spread method recommended by the paste manufacturer, which seems to have gotten a reputation as a big no-no in the thermal paste application world. It appears the paste extracted heat from the CPU more efficiently when I first applied it than it does now. - It may be a bubble.

 

In any case, thank you both for your help! I'll reply if I discover anything significant. - I've also started a support ticket with the vendor whose software actually has the CPU overheating at 11% use. (I didn't even think that could be done.)

