LeDoyen

Members · Posts: 3,808 · Days Won: 71

Everything posted by LeDoyen

  1. That one went over my head 🙂 I didn't see the reply. Hot temps on CPUs are mostly due to excessive voltage, not lack of flow. GPUs are a bit more sensitive to flow, but the threshold at which temps start to climb is usually quite close to a pump's minimum speed. Even in a 1000D decked out with radiators, a single D5 or DDC is way more than enough; even workstations with 5 or 6 GPUs don't need more than that. If a CPU overheats on a custom loop, it's the Vcore that needs a bit of tweaking to keep it in check, more than the water flow.
  2. I doubt the PSU is at fault there. That soft-shutdown feature has been a thing since the dawn of ATX, back in the Windows 95 era, and hasn't changed since. As long as the CPU keeps the PS_ON contact closed, the PSU will not shut down; it only does so when the CPU tells it to (a sketch of that logic is below). Some people have had luck on old boards by disabling Windows Fast Startup, or by enabling Fast Boot in the BIOS, but it's an annoying fault that can have a multitude of causes.
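     A minimal sketch of that signalling, assuming standard ATX behaviour (PS_ON# is an active-low line the motherboard holds low to keep the supply running; the function name is illustrative):

     ```python
     # Illustrative model of ATX soft-off: the PSU's main rails run while
     # the motherboard holds the active-low PS_ON# line low.
     def main_rails_on(ps_on_held_low: bool) -> bool:
         """The supply follows PS_ON# and nothing else."""
         return ps_on_held_low

     # Normal shutdown: OS -> ACPI -> chipset releases PS_ON# -> rails off.
     # The fault described above: the board never releases the line, so the
     # PSU (correctly) stays on -- swapping the PSU won't change that.
     print(main_rails_on(True))   # system running
     print(main_rails_on(False))  # soft-off
     ```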
  3. It's an X3D... they run hotter due to the V-Cache stacked on top. It's common to all X3D CPUs to be consistently warmer, and that throws us off when we're used to conventional CPU temperatures.
  4. What is the old PSU? Are you using the cables that came with the RMe?
  5. Just to be sure, the sticks are in slots A2 and B2?
  6. You got that correct: the CPU cable goes to the EPS12V connector on the motherboard. You would soon realize it if you forced a PCIe connector in there; it is not keyed the same, so you'd have to jam it in, and it would trigger the PSU's short-circuit protection since the pinouts are different between the two. CPU = EPS12V; PCIe = graphics card only.
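     For reference, a quick comparison of the two 8-pin layouts (standard pin counts, shown only to illustrate why mixing them up trips protection):

     ```python
     # 8-pin EPS12V vs 8-pin PCIe: same shell size, different pinouts.
     eps12v_8pin = {"+12V": 4, "GND": 4}  # feeds the CPU VRM
     pcie_8pin   = {"+12V": 3, "GND": 5}  # feeds the graphics card

     # Forcing one cable into the other socket lines +12V pins up with
     # grounds, which is exactly what short-circuit protection catches.
     for name, pins in (("EPS12V", eps12v_8pin), ("PCIe", pcie_8pin)):
         print(name, pins)
     ```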
  7. I am not sure the motherboard defaults to PWM. It's better to check in the BIOS that the PWM/DC switch on each channel is set to PWM. Otherwise, if you get replacements, they may get damaged too (if that's the issue, of course).
  8. Well, they shouldn't grind at all after 5 years. How are they controlled? PWM or DC control? Running maglev fans on DC control can damage them.
  9. Here is Corsair's cable compatibility chart for PSUs: https://www.corsair.com/us/en/s/legacy-psu-cable-compatibility Your AX1200i uses Type 3 or Type 4 cables for EPS12V (CPU cables). Type 3 is pretty old by now, so you'll probably have an easier time finding Type 4 CPU cables.
  10. The spec is 150W max per PCIe connector, remembering that the card also draws some power from the motherboard slot (often 50-60W of the 75W possible). So yes, loads of headroom with two cables; quick math below.
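      A back-of-the-envelope check using the figures above (the 60W slot draw is the typical value mentioned, not a measurement):

      ```python
      # Power budget for a 290 W card on two separate 8-pin PCIe cables.
      PCIE_8PIN_SPEC_W = 150  # spec maximum per 8-pin connector
      SLOT_TYPICAL_W = 60     # cards often pull 50-60 W of the slot's 75 W
      CARD_DRAW_W = 290       # stock draw of the card discussed above

      available = 2 * PCIE_8PIN_SPEC_W + SLOT_TYPICAL_W  # 360 W
      print(f"{available} W available vs {CARD_DRAW_W} W card "
            f"-> {available - CARD_DRAW_W} W headroom")
      ```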
  11. The card has a power draw of 290W by default, so without an overclock it would only need two cables. It's fine to use two cables with one pigtail, yes. I wonder what they were thinking 😛 My 3090 (370W) only uses two connectors.
  12. With this CPU, people often ran 3000MHz without too many issues; 3600 is pushing it quite a lot. DDR4 was still new when it came out, and it natively supports DDR4-2400, so you can see what a massive jump 3600 is (numbers below). You could try enabling XMP, then dropping the speed manually until it runs stable.
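      To put the jump in perspective, simple ratios against the CPU's native DDR4-2400 support:

      ```python
      # How far past the native DDR4-2400 spec each speed pushes the
      # memory controller, as a percentage.
      NATIVE = 2400
      for speed in (3000, 3600):
          over = 100 * (speed - NATIVE) / NATIVE
          print(f"DDR4-{speed}: {over:.0f}% over native")
      # DDR4-3000: 25% over native
      # DDR4-3600: 50% over native
      ```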
  13. Does the motherboard box say which BIOS version is installed? The original BIOS is F3, but the 14900K is supported starting with F4. Per the manual, memory initialization starts at POST code 2B, but you don't even get there yet since you hang at code 11, which is CPU initialization. I don't see anything wrong with your layout besides the GPU using a single pigtail cable instead of two discrete cables (but you're not there yet since it doesn't boot at all, and it's a moderate overload of the cable at worst). I would try flashing the BIOS to the latest version and see if the CPU kicks in (look in the manual for Q-Flash Plus; you can flash the BIOS without a working CPU on this motherboard).
  14. There's a post here that explains the steps: https://rog-forum.asus.com/t5/intel-700-600-series/z790-e-gaming-wifi-ii-pl1-pl2-power-settings/td-p/996000 Factory values for the 13900K are Long Duration: 125W, Short Duration: 253W https://ark.intel.com/content/www/us/en/ark/products/230496/intel-core-i9-13900k-processor-36m-cache-up-to-5-80-ghz.html If the cooler performs well, you can then increase the long duration power limit.
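      Summarized as a minimal reference (values from the Ark page linked above; BIOS setting names vary by board):

      ```python
      # Intel stock power limits for the i9-13900K (per the Ark page above).
      POWER_LIMITS_13900K_W = {
          "PL1 (long duration)": 125,   # sustained
          "PL2 (short duration)": 253,  # burst, within the Tau window
      }
      for name, watts in POWER_LIMITS_13900K_W.items():
          print(f"{name}: {watts} W")
      ```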
  15. It is probably the BIOS settings, which on Asus often have power limits totally unlocked and MultiCore Enhancement (overheating insurance) turned on. It's worth a shot having a look at what those are set to (should be 253W PL2 on a 13900K), and making sure MCE is set to "Off, enforce all limits". This CPU is very easy to cool if you don't let the motherboard cook it. Also, if you're not using a contact frame, it's a good idea to install one to replace the stock CPU retention mechanism; it prevents CPU bending and allows lower temps. Some see small gains; I personally reduced the temps on mine by a good 20°C under full load, but a 10-15°C reduction from that alone isn't rare.
  16. If I had to choose, it'd be the RMx. It's a good tier above, and doesn't have a bad reputation for a noisy fan 🙂
  17. A good practice is also to avoid bends as much as possible. For example, if you use a 90° fitting coming off the CPU water block, that's one bend fewer (though it keeps the tube closer to the motherboard). One bend is fairly simple: you cut to the right length. With two bends it quickly becomes much easier to get it wrong. With three or four, plan on a good amount of spare tube for do-overs 🙂 Another fairly common solution is to use female-to-female fittings where it makes sense. For example, making two tubes with one 90° bend each and joining them with an elbow fitting is simpler than making a single tube with three exact bends. That said, fittings usually have a bit of play, and the tube doesn't have to be perfectly aligned to seal. In short, if you can find a way to keep bends to a minimum, all the better. That might mean reorienting the top radiator, in case having its ports closer to the CPU makes the connections easier. Not necessarily, but it's worth considering different layouts to make the runs as simple as possible.
  18. It's only useful for those who need capacity more than speed. But for gaming, four sticks are misery, yes. We'll soon see 64GB sticks arrive though, which might help get capacity plus speed with only two sticks. At that point four slots no longer serve any purpose for most people, and are totally useless for a gaming PC.
  19. The holes are for the GPU cooler on the other side; the die is on the other side too, so there's nothing to cool on the back. And yes, you have to use a GPU block. CPU blocks are not meant to be mounted there: the hole spacing is off, and a cooler on the die alone would leave the VRM uncooled. Your graphics card is small 🙂 it just has a massive heatsink. If you want to watercool it, the air cooler will have to go regardless.
  20. Better to do it before putting the build together inside. It's obviously a one-way trip, so have a look at where the cards will screw in: by opening up the PCIe area, you'll be left with only a little tab with a threaded hole to hold the cards. Not sure it will be very rigid.
  21. You explain a privacy policy you obviously do not understand, so why? You only give a tinfoil-hat interpretation of it, with no hard evidence of what iCUE does. The very basics are to get one's facts straight, then argue about what Corsair is actually doing. The privacy policy, just like the terms of use, is very generic and consists of a lot of CYA lingo. It's not because I disagree with the way you think that I'm defending Corsair; again, that's a second layer of tinfoil. I got tired of them and sold everything I had from the brand except my PSU. I am here only to help the poor souls who got suckered in, like I was, by the brand's past reputation for quality, which it lost years ago. You know, actually helping with troubleshooting problems users have, not problems you believe users should have.
  22. Are you an iCUE dev yourself? Do you have any insight into the inner workings of iCUE to argue that telemetry has performance impacts? All you've said so far is a big pile of "ifs" and "maybes", with no experience or evidence, just personal interpretations of their privacy policy with no knowledge of how it's really implemented in the software. Of course the privacy subreddit has lots of users, but those are people interested in the matter who join it. The Corsair community forum is not a privacy-centric platform; it's just people talking about how to set up their systems and troubleshoot them. A lighter version of iCUE without all the unnecessary services (only device drivers and RGB settings) has been asked for for donkey's years, but Corsair is clearly not interested. So you know basically nothing about iCUE at this stage, at least no more than the people you think are arguing with you, which makes every single one of your interventions basically... useless? It's not about telemetry or whatever iCUE does; it's about why you're bothering to lose hours of your time posting walls of text that help nobody, since so far it's only a big opinion piece with no evidence of what iCUE ACTUALLY does. Corsair can very well provision in their privacy policy for things they may need to do in the future, or did in past software versions but don't do anymore. If I had time to lose, I would install iCUE (and have to buy a bunch of devices to make it work) and start packet sniffing to see what goes in and out, instead of rambling about a webpage that describes things with no idea how, or whether, they are implemented in any shape or form.
  23. Just concerned about how some people love to waste their time arguing at great length about things nobody cares about. I hope you understand we are a very thin minority who look at data harvesting and care to stop it at a local level? Most people share their entire lives on social media and only care about having nice shiny lights in their PC. And what makes iCUE trash is not the telemetry part; that has virtually no performance impact. The core of the software does, to varying degrees, plus the instability and crashes that come back every few updates. So no, not offended, just wondering why you'd come to white-knight on a user forum where Corsair reps and employees rarely intervene anyway.
  24. If you hate telemetry, or care about privacy in any way, just block the domains Corsair uses for it and you're done (a sketch of that approach is below). Or better yet, read the privacy policy before installing software; it's shown in every installer, where people usually click "accept" without looking. If the terms are not acceptable, well, return the products and get something else? Accepting an EULA, including the privacy terms, then complaining about them is a bit weird; they are presented to every user precisely so they can make that choice before installing the program. Don't get me wrong, iCUE is hot garbage, and the amount of telemetry they've implemented certainly didn't help make it better or more stable over the years, but creating an account to unearth an old post and rage at it is weird.
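      A minimal sketch of the domain-blocking approach. The hostnames are placeholders, not Corsair's actual endpoints (those would have to be identified with a packet sniffer first, as noted above); it appends null-routed entries to the Windows hosts file and needs to run as administrator:

      ```python
      # Sketch: null-route telemetry hostnames via the Windows hosts file.
      # The domains below are PLACEHOLDERS, not Corsair's real endpoints.
      from pathlib import Path

      HOSTS = Path(r"C:\Windows\System32\drivers\etc\hosts")
      BLOCKED = [
          "telemetry.vendor-example.com",  # hypothetical
          "analytics.vendor-example.com",  # hypothetical
      ]

      existing = HOSTS.read_text()
      new_lines = [f"0.0.0.0 {d}" for d in BLOCKED if d not in existing]
      if new_lines:
          with HOSTS.open("a") as f:
              f.write("\n# telemetry block (placeholders)\n")
              f.write("\n".join(new_lines) + "\n")
      ```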
  25. The old HXi are Type 3, but really, only the 24-pin motherboard cable differs from Type 4. You can get Type 4 cables for PCIe and SATA, since they are probably easier to source (notice even Type 4 PSUs use "Type 3" PATA/SATA cables).