AP1916

Members · Posts: 47 · Reputation: 10 Good
  1. This just popped up for me yesterday out of nowhere. It's hard for me to chalk it up as a false positive when it caused all of my fans to run at max RPM with no way to slow them down. Forcing a firmware update resolved the issue for now, but I'm submitting a ticket nonetheless.
  2. What does this mean? I updated my computer in November and added some of the Link ecosystem, which is amazing. There are 4 components on one port and 3 on the other. Out of nowhere today, my fans all suddenly spiked to full speed and I received two alerts: port 1 sensor in overcurrent state, and port 2 sensor in overcurrent state. What does this mean, and why is it happening? Core temps were all fine; internal temp and GPU temp were totally fine. Disconnecting the hub connections and then powering off didn't work. I forced a firmware update for the hub, and it seems to have reset and is now working correctly. Should I be concerned? Are there known problems with these hubs going bad, or is this an issue with the latest iCUE software?
  3. Agreed. Took your advice and separated the front fans onto their own hub port. So now, instead of 7 on one side and zero on the other, it's 4 on one side and 3 on the other. No idea whether there was added stress from the max load, but I like this config better anyway.
  4. I just swapped from running 7 LL120s on a Commander Pro and RGB hub to using the iCUE Link hub. As it turns out, you don't need to connect anything to a Commander Pro or RGB hub; the iCUE Link hub handles all of that. My Commander Pro and RGB hubs are sitting on my desk now, and the new system runs fine without them. The Link hub just needs to be connected to the mobo via USB and to the CPU fan header. From there, you chain your devices together and plug the chains into the hub ports. One thing I found out yesterday: the cooler radiator/pump and the cooler fans are counted as separate devices, and you need to use an iCUE Link cable to connect the radiator to its own fans.
  5. Thanks for the response. I figured this out, and it is in fact related to the AIO cooler and radiator fans counting as separate devices. It's working correctly now, but I may switch some things around, because the current config has 7 devices (3 front fans, radiator, 2 radiator fans, and rear exhaust) all on one side of the hub. When you're chaining these components together, you need to connect the radiator to its own fans. I had it set up as hub > front fans > radiator fans > rear fan. What I needed was: hub > front fans > radiator > radiator fans > rear fan. This probably should have been obvious, since the whole time I was setting it up I was thinking, "that doesn't make sense, how does the cooler get power?" Lol
  6. First of all, how in the world do absolutely NONE of the components that are supposed to work with this system come with any sort of instructions AT ALL? I just got a 3-pack of QX120s, another QX120, and the iCUE Link H115i. I am convinced that the H115i DOES NOT WORK with iCUE Link. The 3-pack is on the front of my build, the cooler is at the top, and the other QX120 is an exhaust fan on the back. The H115i is plugged into the hub, with an iCUE Link cable from each end going to the other two linked components. The hub is connected to the PSU and the CPU fan header. NOTHING TURNS ON. AT ALL. When I switch this configuration and plug the hub into the front set of fans, they turn on, but the cooler and rear fan do not. Did I get a defective cooler?
  7. Update: I just individually tested each PSU pin with a multimeter. Everything is exactly as it should be per Seasonic's tables.
  8. Completely at a loss here. Cannot boot at all; the problem sprang up yesterday out of nowhere. No, I cannot download things to run checks, and no, I cannot update drivers, because again, I cannot boot at all. Sometimes the boot stops immediately and kicks out the basic WHEA error, sometimes it shows 0xc0000001, and sometimes it just freezes without showing a bluescreen at all. System recovery does not work. A fresh OS install does not work; both processes bluescreen when the system restarts. Here is what I've tried, and every single attempt has yielded exactly the same result:
     • Reset BIOS to defaults
     • Ran chkdsk (no errors)
     • Ran the memory diagnostic from the Win10 thumb drive (all passes)
     • Stripped to a barebones setup to pinpoint the issue, including removing the video card and plugging the display into the mobo's integrated graphics - bluescreen
     • Individually tested every RAM stick by putting a single stick in the A2 slot - bluescreen
     • Attached a different AIO cooler and disconnected the current one - bluescreen
     • Individually removed each M.2 SSD and booted with only one installed - bluescreen
     • Attempted to repair the currently installed OS - bluescreen
     • Attempted a fresh install of Win10 from the thumb drive on each of the M.2 SSDs - bluescreen
     • BIOS shows core temps completely fine; the mobo seems to run through its startup codes fine
     • No signs of PSU failure (clip tested, everything works)
     • Re-seated the CPU, cooler, video card, and RAM - bluescreen (doing this correctly detected a new CPU installation)
     Did my mobo crap out, or did my processor somehow die? I am at a loss. No individual component seems to be the culprit.
  9. Couldn't really find a better spot for this, since it's hardware-related. In the old forum you used to be able to see the complete specs of a user's build; there was a way to enter all of your specs, and they would show on your profile. Is that gone?
  10. So, I've seen this topic pop up a few times, some of those posts by me. I figured out how to fix this problem, so I figured I would offer the steps:
     1. Go into iCUE and set the channel with the fans to static white.
     2. Look at the fans; you should be able to see which ones show a color other than white. For me, fans 2 and 6 had red and green in them. What happens is that every fan from the defective one onward in the series exhibits the odd behavior, sort of like how one bad bulb in a strand of Christmas lights ruins the whole thing.
     3. Verify that this is the source of the issue by changing the fan sequence in the RGB hub. I put the 2nd fan last, and everything in the series worked besides the last fan. Then I put the 6th fan first, anticipating that all fans would exhibit the odd behavior, and they did.
     Solution: replace the two defective fans. All that said, you could probably just set the lighting to static white, identify the busted ones, and replace them without doing anything else.
  11. I had this exact same problem today: 6 fans on channel 1; the first 2 fans work OK, but fan 2 has an odd randomly colored LED in it, and fans 3 through 6 just go nuts. I would suggest changing to static white and looking for the first fan in your series with an LED that isn't static white, then putting that one at the end and seeing if all the others work. If they do, RMA the one with the busted LED.
  12. Today I updated the firmware through iCUE for my Corsair Vengeance RGB Pro. After the update, the channels for the Commander Pro are completely whacked: even with the lighting-effect channel selected for the case fans, it works correctly for a second, then half of the fan colors just start going crazy.
  13. My favorite is how they tell you to open a ticket, but then when you do it for this issue, they tell you they can't do anything.
  14. It's not a software issue, nor is it the Omron switch. It's the cheap plastic that pushes the Omron plunger down. Mine did it, and when I opened it up, this is what I found: a big indent in the plastic that pushes the plunger down. Yeah, that isn't supposed to be there. There's white, worn-off plastic all around the blue plunger.
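
The daisy-chain ordering rule from post 5 (the radiator has to sit upstream of its own fans in the Link chain) can be expressed as a simple dependency check. This is a purely illustrative Python sketch; the device names and the single `REQUIRES_UPSTREAM` rule are stand-ins of my own, not anything from Corsair's software:

```python
# Illustrative only: models the "radiator before its fans" chaining rule.
REQUIRES_UPSTREAM = {"radiator fans": "radiator"}  # device -> required upstream device

def chain_ok(chain):
    """Return True if every dependent device appears after the device
    it draws power/data through, and that device is present at all."""
    for device, upstream in REQUIRES_UPSTREAM.items():
        if device not in chain:
            continue  # device absent, nothing to check
        if upstream not in chain:
            return False  # dependency missing from the chain entirely
        if chain.index(upstream) > chain.index(device):
            return False  # dependency is downstream of the device
    return True

bad = ["hub", "front fans", "radiator fans", "rear fan"]
good = ["hub", "front fans", "radiator", "radiator fans", "rear fan"]
```

Here `chain_ok(bad)` is False because the radiator fans have no radiator upstream of them, matching the broken layout described in the post, while `chain_ok(good)` is True.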
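
The static-white test from posts 10 and 11 boils down to a first-failure search: since a defective fan corrupts itself and everything downstream, the first non-white fan in the chain is the first true defect. A rough Python simulation of that logic (the `observe` model and fan names are invented for illustration, not drawn from iCUE):

```python
def observe(chain, defective):
    """Simulate the static-white test: a defective fan shows odd
    colors and corrupts every fan after it in the chain."""
    bad_seen = False
    colors = []
    for fan in chain:
        bad_seen = bad_seen or fan in defective
        colors.append("glitchy" if bad_seen else "white")
    return colors

def find_defective(chain, defective):
    """Repeatedly pull out the first glitchy fan: it is the first
    true defect, since every fan before it rendered white."""
    chain = list(chain)
    found = []
    while True:
        colors = observe(chain, defective)
        if "glitchy" not in colors:
            return found
        found.append(chain.pop(colors.index("glitchy")))

fans = ["fan1", "fan2", "fan3", "fan4", "fan5", "fan6"]
```

With fans 2 and 6 bad, as in post 10, `find_defective(fans, {"fan2", "fan6"})` returns exactly those two in chain order, even though fans 3 through 5 also looked glitchy on the first pass.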