iCue crashing EVGA x79 system/black screen.



I recently retired my x79 machine and went to an X570 system for myself.

 

Knowing the x79 system has a LOT of life left in it, I decided to build a (new to her) system for my girlfriend. I had been using a custom water loop in it, but this time around I was "lazy and cheap" and wanted to use AIO coolers. I installed an EK AIO in my system and used OpenRGB for the lighting, and all is working for me.

 

For the x79 system though...... Not so much.

This system has been rock solid for 9 years: overclocked to 4.5GHz on a 4930K, 32GB of Dominator Platinum, and it just worked. It's got a 1080 Ti in it (I upgraded to a 3090 Ti in my rig) and it's still a VERY good computer.

 

We got a Corsair iCue H115i Elite Capellix for the x79 Dark motherboard, and because I like to stress test things I noticed a problem..... Every time I ran Superposition or the stress test in 3DMark, the system would crash to a black screen. I had given her my old 1200 Watt Xigmatek PSU that I had in storage (I got a Corsair AX1200) and noticed its 12 volt rails were sagging a LOT under load, so we went to Best Buy and picked up a Corsair RM850x.

 

The new PSU didn't sag under load, but we kept getting black screen crashes. I reinstalled Windows 10 with a fresh download from Microsoft and installed the GPU drivers a different way each time (one install from Windows Update, another from EVGA directly, another cleaned up with Driver Fusion first), installed GeForce Experience and the newest driver through it (yes, I also tried without GFE), and then installed iCue each time. I would set the quiet fan profile and set the RGB to something basic like Rainbow.

 

Each test resulted in the same thing: a black screen crash.

At this point I started considering throwing hardware at it until my girlfriend calmed me down, and I started looking at what had actually changed: I was using a custom water loop before, and no Corsair software had ever been installed. So, one more fresh install of Windows later, leaving the Commander Core on whatever built-in profile it has to manage thermals (confirmed it ramps up and down with load/temperature) and NOT installing the iCue software, the system was stable. It ran a Superposition stress test for 24 hours just fine.

 

At this point, the system is "done." I wish we could use iCue, but something is seriously wrong with x79-based systems and iCue compatibility. I have set her up with OpenRGB because we're not excited to use SignalRGB with its 30-something a year subscription cost.

 

This has been a week of my life I wish I could get back, and now that I know what to search for online, I see that a LOT of others have this same issue on this platform (x79). I know it's old, but can this be looked at? X79 is a cheap way for a lot of people to build a machine. The only thing I could tell from Event Viewer was that the GPU driver was being crashed by iCue.

 

Sorry for the long post; the only way I could explain it was to just tell the story of what happened.


  • 1 month later...

Same problem here, for a long time.

But I think the problem isn't just between x79 systems and iCUE, but between iCUE + x79 + Windows 10.

A while ago I used an x79 system with iCUE, but with Windows 7, and everything worked fine.

 


  • Corsair Employees

Unfortunately this is an issue that is only applicable to very specific, older, and discontinued X79 platforms, which we do not officially support for use with iCUE.

https://help.corsair.com/hc/en-us/articles/360040957051-iCUE-compatibility-and-installation-requirements

 

 


41 minutes ago, Corsair Notepad said:

Unfortunately this is an issue that is only applicable to very specific, older, and discontinued X79 platforms, which we do not officially support for use with iCUE.

https://help.corsair.com/hc/en-us/articles/360040957051-iCUE-compatibility-and-installation-requirements

 

 

Here's a question though: why include the mounting hardware for this platform, and why not list "Minimum System Requirements" for iCue? "Recommended" means that's where the "sweet spot" of support starts. What about AMD platforms? That linked page would suggest you don't support AMD at all.

I'm legit just curious, not upset; as I said, the issue is resolved by using third-party software.


  • 4 weeks later...
On 11/9/2022 at 11:13 AM, AnnabellaRenee87 said:

Here's a question though: why include the mounting hardware for this platform, and why not list "Minimum System Requirements" for iCue? "Recommended" means that's where the "sweet spot" of support starts. What about AMD platforms? That linked page would suggest you don't support AMD at all.

I'm legit just curious, not upset; as I said, the issue is resolved by using third-party software.

What Corsair Notepad means to say is that Corsair is out of their depth and cannot fix the issue, as it lies in the black-box CPUID SDK that Corsair themselves have no control over, apart from providing the option to not use it at all.

 

Furthermore, the issue is not inherent to x79 platforms; it affects more recent Intel and AMD based systems as well.

 

 

Edited by squall leonhart

Done some digging.

 

This issue occurs because the HWMon/CPUID kernel driver by default enables bank switching on the embedded controller used on some mainboards. This is not limited to Asus; it is also found on some MSI and EVGA products.

 

This bank switching enables access to the PCH temp sensor on affected boards, as well as sub-zero sensors.

This EC is also utilised alongside the FIVR implementation on Haswell-E and Broadwell-E CPUs.

 

Bank switching is not a safe thing to perform: interaction from other sensor drivers can cause the EC to hang, causing interrupt service failures, and can even corrupt the FIVR and send excessive core voltage to the CPU. In the case of HWMon/iCUE/CAM, the result is corruption of PCIe registers that triggers the video card into crashing into protect mode.
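
Rough sketch of why that matters (this is just an illustration, not Corsair or CPUID code; the register numbers, values, and ec_* helpers are invented): the EC's sensor registers sit behind one shared bank-select register, so two monitoring drivers that each flip the bank without coordinating can interleave and read from the wrong bank.

```c
/*
 * Illustrative sketch only -- not Corsair/CPUID driver code. The register
 * numbers, values, and ec_* helpers are made up for the example. It models
 * an EC whose sensor registers sit behind a single shared bank-select
 * register, and shows how two monitoring tools that switch banks without
 * coordinating can end up reading the wrong bank's data.
 */
#include <stdio.h>
#include <stdint.h>

#define EC_BANK_SELECT 0x1F   /* hypothetical bank-select register         */
#define EC_SENSOR_REG  0x2A   /* hypothetical sensor register              */

static uint8_t current_bank;  /* bank-select state shared by every driver  */
static uint8_t banks[2][256]; /* bank 0: fan data, bank 1: PCH temperature */

/* Stand-ins for the port I/O a real driver would do against the EC. */
static void ec_write(uint8_t reg, uint8_t val)
{
    if (reg == EC_BANK_SELECT)
        current_bank = val & 1;
    else
        banks[current_bank][reg] = val;
}

static uint8_t ec_read(uint8_t reg)
{
    return banks[current_bank][reg];
}

int main(void)
{
    banks[0][EC_SENSOR_REG] = 0x30; /* pretend fan data lives in bank 0        */
    banks[1][EC_SENSOR_REG] = 55;   /* pretend PCH temp (55 C) lives in bank 1 */

    /* Driver A (e.g. a CPUID-based monitor) selects bank 1 to read PCH temp. */
    ec_write(EC_BANK_SELECT, 1);

    /* Driver B (another sensor tool) preempts it and selects bank 0.         */
    ec_write(EC_BANK_SELECT, 0);

    /* Driver A's read now returns fan data instead of the PCH temperature,   */
    /* and the EC is left in a state neither driver expected.                 */
    printf("driver A read %u, expected 55\n", (unsigned)ec_read(EC_SENSOR_REG));
    return 0;
}
```

On real hardware the interleaving comes from interrupts and scheduling rather than two back-to-back calls, but the failure mode is the same: whichever driver selected a bank last wins, and the other one reads or writes registers it never intended to touch.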

 

Changes to Windows and how it interacts with hardware are also a factor in whether or not the EC bank switching causes issues; the addition of gpuenergydrv.sys to Windows itself correlates with when the issue started occurring.

