Corsair Community

Can a LNP cause a PC to suddenly shut off?


gutcheck


Long time, guys.

 

So, in anticipation of my pending 5900 CPU, I "upgraded" my X570 Hero's BIOS to the latest version. I also moved my computer around, tried to clean up the back, etc. So I did two things at once, but one of them has caused my system to just power off randomly (during a Zoom meeting, etc.). It's a violent shutdown: total power loss, then a reboot with no warning or BSOD. I noticed that when this happens, the top set of fans on my rad and my CPU's RGB strip go off and do not turn back on with the rest of the system until iCUE starts after said reboot. Those two things were plugged into a USB splitter, which I have since yanked, and the incident did not happen again for days. But just now it happened again, and Windows reports a Kernel-Power 41 error: "The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly."
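For anyone else chasing the same thing, one way to pull the recent Kernel-Power 41 entries without clicking through Event Viewer is a quick sketch around Windows' built-in wevtutil tool (same data Event Viewer shows under Windows Logs > System):

```python
# Sketch: dump the five most recent Kernel-Power 41 events from the
# System log via Windows' built-in wevtutil tool.
import subprocess

xpath = ("*[System[Provider[@Name='Microsoft-Windows-Kernel-Power']"
         " and (EventID=41)]]")
out = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{xpath}",
     "/c:5",       # at most five events
     "/rd:true",   # newest first
     "/f:text"],   # human-readable output instead of XML
    capture_output=True, text=True, check=True,
).stdout
print(out or "no Kernel-Power 41 events found")
```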

 

Decided to yank the LNP, and it has now been a full 48 hours without incident. I've had LNPs die on me before, and I have a new one in the box, but I'm a little scared to put it in there.

 

This really has me worried, as I have a 3090 in there and would prefer it doesn't die a horrible death. I leak-checked everything and that seems OK...


It's possible and it does seem like the PSU went into a safety shutdown. However, isolating the root cause is probably going to be a major headache. This type of problem almost always is.

 

If the LNP has an internal problem, it should shut itself down, blink out of CUE, or disappear. That wouldn't normally make the PSU enact its safeties unless there was a short in the unit or in the SATA cable powering it. However, given the circumstances, the most probable cause is that something in the back end got slightly moved or adjusted and created the incident. That could be the LNP, or it could be something else. I think you have to keep going in the current state until you're sure that was it, or until it hard resets again. That certainly beats pulling the motherboard or PSU and trying a 'back-up'.

 

We have seen one or two people trigger the PSU safeties by overloading the 5V rail on a particular line. This doesn't seem too likely here, since they all had lower-end PSUs, but I don't know what you're running. If you were running 12 LL fans, 3 HDDs, and an AIO on one SATA line from the PSU, it might be something to look at.
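Back-of-the-envelope, that scenario looks something like the sketch below. Every per-device number is an illustrative guess, not an official spec; the point is how quickly the 5V side of one cable adds up:

```python
# Rough 5V budget for a single SATA peripheral cable. Every figure
# is an assumed, illustrative value, not a measured or official spec.
loads_5v = {
    "12x LL fan RGB (assume ~0.30A each, full white)": 12 * 0.30,
    "3x HDD logic board (assume ~0.40A each)": 3 * 0.40,
    "AIO pump head / controller (assume)": 0.50,
}
CABLE_5V_LIMIT = 4.5  # assumed sane ceiling for one cable's 5V wire, in amps

total = sum(loads_5v.values())
for item, amps in loads_5v.items():
    print(f"  {item}: {amps:.2f}A")
print(f"Total 5V draw: {total:.2f}A vs ~{CABLE_5V_LIMIT}A ->",
      "over budget" if total > CABLE_5V_LIMIT else "within budget")
```

On guesses like those, you are past the ceiling before the PSU's protections ever look unreasonable.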

 

Unless the PSU is the origin of all the trouble, the 3090 should be safe. There is no way for a current issue on the SATA line to pass back through the PSU to the GPU. The PSU is the breaker/fuse, and the current level should not be large enough to be a worry for a device that robust.

Thanks, man. Running an HX1000i, and yes, I am running a boatload off of one SATA plug: 12 QL fans all on one Commander Pro (which is also plugged into the mobo), a D5 pump, the case lights, and previously an LNP... basically every SATA plug was plugged into something.

 

Also, I had my PSU plugged in as well, so maybe Corsair's software is smart enough to tell the PSU to shut off if it detects an issue in any other Corsair hardware? At any rate, still going strong: 12 shutdowns in 5 days, then none since yanking the LNP yesterday AM... so weird.


Afraid the PSUs aren't smart enough to detect Corsair-specific hardware events. However, they certainly will do the normal current protections.

 

Were the 12 QL + Commander + D5 + LNP (?) all on the same SATA multi-connector extension from the PSU? Theoretically the numbers might work, but that's a theoretical 3.6A of motor current for the fans, 1.5-1.8A for the D5 (assuming an 18-23W model), and then the 5V load from the 12 QL, plus the LNP handling the case lights? Also 5V?

 

I have gotten myself into trouble with this before. I was below the specs by a good margin, but nevertheless the end result was clear: whump, click, darkness. Ever since, I try to keep a better eye on the balance between heavy 5V and 12V loads. I keep my pump and the Commander on different lines. If I have 10-12 RGB fans on one device, I keep everything else that uses 5V someplace else. 12 QL is a pretty decent 5V load when at 100% white. I would be curious to know what the Commander reads on 5V in that state.
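To put numbers on that balancing act, here's the same kind of sketch for the chain you described, split by rail. Again, every per-device figure is an assumption for illustration (fan motors at ~0.3A, the D5 at its 18-23W rating, a guess for the QL LEDs at full white):

```python
# Sketch: split the described chain into its 12V and 5V components.
# All per-device numbers are assumptions for illustration only.
FANS = 12
loads = {
    "12V": {
        "QL fan motors (~0.30A each)": FANS * 0.30,   # ~3.6A
        "D5 pump, worst case (23W / 12V)": 23 / 12,   # ~1.9A
    },
    "5V": {
        "QL RGB at full white (assume ~0.35A each)": FANS * 0.35,
        "LNP + strips (assume)": 0.60,
        "Commander Pro logic (assume)": 0.10,
    },
}
for rail, items in loads.items():
    total = sum(items.values())
    print(f"{rail} total: {total:.1f}A")
    for item, amps in items.items():
        print(f"  {item}: {amps:.2f}A")
```

On those guesses, the 5V side alone is flirting with what one cable comfortably carries, which is exactly why splitting the pump and Commander onto different lines helps.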

 

Of course, it's easy for me to say this, as I usually run dual-chamber cases. Life is a bit more tedious in a mid-tower. Still, that seems like the leading cause, which is definitely better than a genuine hardware issue. What was the LNP powering? Case lighting and/or strips?

I understood about half of that. I had everything but one Node Pro (for 6 fans) on the single rail, yes, so I thought I would be fine. The LNP was powering only the CPU RGB strip and my distro plate's RGB strip, but the connection between the RGB hookups was pretty loose. Maybe that happened when I moved it around. Still no reboots!

 

Looking at the CP (funny you say all white, because that is my default), the rails read 12V = 11.93V, 5V = 4.92V, and 3.3V = 3.25V.
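(If anyone wants to log those rails over time outside of iCUE, the third-party liquidctl project supports the Commander Pro. A minimal sketch, assuming the device is detected and iCUE is closed so the two aren't fighting over the USB interface; the exact status key names are an assumption and vary by driver version:)

```python
# Sketch: read the Commander Pro's rail voltages with the third-party
# liquidctl library (pip install liquidctl). Close iCUE first, since
# it may conflict with other software talking to the same device.
from liquidctl import find_liquidctl_devices

for dev in find_liquidctl_devices():
    if "Commander Pro" not in dev.description:
        continue
    with dev.connect():
        dev.initialize()
        for key, value, unit in dev.get_status():
            # rail entries look roughly like "+12V rail", "+5V rail",
            # "+3.3V rail"; exact names vary, so filter loosely
            if "rail" in key.lower():
                print(f"{key}: {value} {unit}")
```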


If you're holding 4.92V with 12 QL in white, you're in good shape.

 

I wouldn't think the current draw from the strips + distro would be particularly large. Maybe it was as simple as two things making contact that should not. Presumably you were running this exact set-up before, but the issue only arose after the tidy-up in back.

It has to have been exactly that. The RGB connectors on the EK products are EXTREMELY loose. I ordered some of these just in case that is what it was:

 

https://www.ekwb.com/news/ek-releases-a-perfect-cable-management-system-for-rgb-cables/


So, after having every LNP out of my PC, using a new SATA cable, and plugging it into a different port on the PSU, it happened again. Then I discovered my original BIOS version was actually on my box. I re-flashed my BIOS back to the original version, and all is well. All LNPs are back in, etc.

 

WTH is going on? I notice ASUS rather quickly released a "beta" BIOS for the X570 Hero only a couple of weeks later.

 

Kinda has me worried for the new CPUs tomorrow. Or do you think it could NEED a new Zen 3, and my 3900X was causing the issue?

 

I give up.

