MKDKI — Posted October 13, 2019 (edited October 15, 2019 by MKDKI)

I have a problem where my PC runs fine with one GPU installed, but as soon as I connect the second GPU it shuts off immediately after boot. The cards are 980 Tis that draw around 250 W max each, so the AX1500i should be nowhere near its power limit. I've:

- Swapped out all cables
- Tried all PCIe slots
- Tried three different SLI bridges
- Tried older graphics drivers
- Tried different BIOS versions
- Tried different Windows versions
- Cleaned all components and slots with compressed air

Eventually I got my multimeter out and checked the voltages on all the power supply connectors. For some reason I'm measuring 0 V on pins 4 and 6 instead of the expected +5 V; all other pins read the correct voltage. Any idea what this means and how (or whether) I can fix it? I'm at a loss.
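For context on the measurement above: assuming these are pins on the standard ATX 24-pin connector, pins 4 and 6 are indeed both nominally +5 V, and the ATX spec allows roughly ±5% tolerance on the main rails. A minimal sketch for flagging out-of-spec readings (the pin map and tolerance come from the ATX spec; the example readings are illustrative, not the poster's actual numbers):

```python
# Nominal voltages for a few ATX 24-pin connector pins (per the ATX12V spec).
ATX_24PIN_NOMINAL = {
    4: 5.0,    # +5V
    6: 5.0,    # +5V
    10: 12.0,  # +12V
    21: 5.0,   # +5V
}
TOLERANCE = 0.05  # ATX allows roughly +/-5% on the main rails

def out_of_spec(measured):
    """Return the pins whose measured voltage falls outside tolerance."""
    bad = []
    for pin, volts in measured.items():
        nominal = ATX_24PIN_NOMINAL[pin]
        if abs(volts - nominal) > TOLERANCE * nominal:
            bad.append(pin)
    return bad

# Illustrative readings resembling the symptom described above:
readings = {4: 0.0, 6: 0.0, 10: 12.1, 21: 5.02}
print(out_of_spec(readings))  # -> [4, 6]
```

A dead 5 V pin flagged this way points at either the PSU's 5 V output or a damaged contact in that connector, which is worth knowing before swapping more cables.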
Corsair Notepad (Corsair Employee) — Posted October 14, 2019

Have you tested the system with just the second GPU installed?
MKDKI (Author) — Posted October 15, 2019

Yeah, both cards run with no issues on their own. Each passed multiple 3DMark stress tests while overclocked without any hiccups. I've returned them to default clocks, tried a different motherboard, and removed all but one RAM stick (memtest found no errors), and I still run into the same problem every time both GPUs are installed.
c-attack — Posted October 15, 2019

If you are getting a shutdown at boot, it is not likely to be a TDP kind of issue; the GPUs never fully power up at that point. An immediate shutdown or hard restart usually indicates a protection action by the PSU, or possibly by the motherboard via the PSU. You normally get a BIOS warning screen if the motherboard OCP kicks in, but a PCIe issue is possible. However, since you have already run card 2 alone in PCI 3/4 (whatever slot), that seems unlikely. That leaves the PCIe cables from the PSU to the GPU. I can't remember whether that GPU is a 2- or 3-cable card, but make sure you aren't daisy-chaining, and try moving the cables for each card to opposite ends of the PSU outputs (1-2-3 and 6-7-8).
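The reasoning behind spreading the cables across the outputs can be sketched numerically. In multi-rail mode each group of outputs gets its own over-current protection limit; the 40 A figure below is a purely illustrative assumption (the real per-rail limit on this PSU is configurable in Corsair Link/iCUE), but it shows why two GPUs landing on one rail can trip OCP while a balanced split does not:

```python
# Rough per-rail current budget in multi-rail mode.
# OCP_LIMIT_A is a hypothetical illustrative value, NOT the AX1500i's
# actual limit (that is configurable in Corsair Link/iCUE).
OCP_LIMIT_A = 40.0
RAIL_VOLTAGE = 12.0  # GPU power is delivered on the +12 V rail

def rail_current(loads_w):
    """Total current in amps drawn on one rail by a list of loads in watts."""
    return sum(loads_w) / RAIL_VOLTAGE

# Both 250 W GPUs cabled to the same rail:
both = rail_current([250, 250])
print(f"{both:.1f} A on one rail, trips OCP: {both > OCP_LIMIT_A}")

# GPUs split across two rails:
split = rail_current([250])
print(f"{split:.1f} A per rail, trips OCP: {split > OCP_LIMIT_A}")
```

Under these assumed numbers, 500 W on one 12 V rail is about 41.7 A and exceeds the hypothetical limit, while a 250 W load per rail sits comfortably under it.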
MKDKI (Author) — Posted October 15, 2019

Alright, so my GPUs each require two 8-pin and one 6-pin PCIe cables.

First I tried connecting the GPUs with the original cables that came with the PSU: it shipped with 4x 600 mm and 2x 800 mm 6+2 PCIe cables. Length should be irrelevant, but I connected two 600 mm and one 800 mm cable to each card. Would it make a difference if I replaced the 6+2 with a dedicated 6-pin PCIe cable? I don't see why it would, since they're designed to work both ways.

Next I tried a custom kit from another brand. That kit only came with 4x 6+2-pin PCIe cables (600 mm) and 2x dual 6+2-pin PCIe cables (750 mm), meaning I had to use a dual cable for the final 6-pin on both cards. That isn't optimal, but should it pose a problem?

I'll try moving the cables to opposite ends first thing tomorrow; I haven't tried that yet. I guess the easiest way to confirm whether the problem really is the PSU would be to test my system with another PSU. I'll find someone to borrow one from next week if I still can't get it to boot.

Appreciate the replies, thank you for your time.
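As a sanity check on the power side of the setup described above: the PCI Express spec rates the x16 slot at 75 W, a 6-pin connector at 75 W, and an 8-pin connector at 150 W. A quick sketch of the per-card budget (the ~250 W card draw is the figure given earlier in the thread):

```python
# Power delivery limits per the PCI Express spec:
SLOT_W = 75        # x16 slot
SIX_PIN_W = 75     # 6-pin auxiliary connector
EIGHT_PIN_W = 150  # 8-pin auxiliary connector

def card_budget(eight_pins, six_pins):
    """Maximum in-spec power (watts) available to one card."""
    return SLOT_W + eight_pins * EIGHT_PIN_W + six_pins * SIX_PIN_W

budget = card_budget(eight_pins=2, six_pins=1)
print(budget)         # -> 450
print(budget >= 250)  # -> True, comfortably above a ~250 W card
```

With 450 W available per card against a ~250 W draw, total delivery capacity is not the bottleneck here, which is consistent with the suspicion that this is a protection (OCP) trip rather than an overload.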
c-attack — Posted October 15, 2019

Other thought: in iCUE or Link, go to the PSU settings and make sure you are in single-rail mode, not multi-rail. It's possible the PSU doesn't like the combo cables if they are on the same rail.
MKDKI (Author) — Posted October 15, 2019

I'm not sure which of the solutions did the trick, but everything seems to be running smoothly now. Fingers crossed nothing breaks after I install the remaining three RAM modules and peripherals, or during overclocking. I'm glad it was a user error and not a hardware fault.

I can't say I understand why I had to make these changes, as I used the same setup and cables with my old 970s, but I'm always happy to learn something new, even if the fix might've been obvious to others. I'm still confused about measuring 0 V on two of the pins that are supposed to supply 5 V. Ah well; as long as it's nothing that can damage my components, I guess it doesn't matter.

I must admit I feel rather stupid that the fix was this easy, but I'm glad everything is seemingly working as it should. I'll pick up some proper cables to replace my duals and tidy this mess up later.

Thank you very much, you've been really helpful. Consider this solved.
c-attack — Posted October 15, 2019

So is the PSU the new component? I suspect multi-rail mode was the problem; others have run into this in the past, unaware the feature is even there. I'm not sure about the 5 V issue, and I was hoping someone else had something to add there.
Vegan — Posted October 16, 2019

I don't understand why Corsair uses multiple rails; single-rail mode avoids the load-balancing problem entirely.