Corsair Community

sebastiannielse

Members
  • Posts: 173
  • Joined
  • Days Won: 1

sebastiannielse last won the day on February 7 2016

sebastiannielse had the most liked content!

Reputation: 14 (Good)

Converted

  • Optical Drive #1: Asus BW-16D1HT Blu-ray writer with M-Disc support
  1. Sometimes it happens randomly: I hear a series of sounds, the same as when you unplug a USB memory stick and plug it back in. Then I see a message "Configuring: Corsair Integrated Bridge" in the taskbar, and later "Configured: Corsair Integrated Bridge is ready to use". (The same happens with every other USB device in the system, like card readers, webcam, keyboard and mouse.) After that, Corsair Link completely resets all profiles: all fan curves and all personalized settings go back to factory defaults. Why does it do this stupid thing? If it loses connection to the pump/fan controller for some reason, it should keep the settings. In the case where there are multiple devices of the same type, making it impossible to reassign them automatically, it should ask the user which settings belong to which device.
  2. The best way to implement this would be via command-line parameters, like:

Corsairlink.exe -setprofile MaxFans
Corsairlink.exe -setprofile SilentProfile
Corsairlink.exe -setprofile Performance

Corsair Link could then be made so that, if a "-setprofile" command-line parameter is specified, it looks for a running copy of Corsair Link and signals it to set that profile. If it's not running, it could simply report that Corsair Link is not running. The advantage of this is that you can use any macro software you prefer, or even have desktop icons that set different profiles.
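A minimal sketch of how such a "-setprofile" launcher could work. This is entirely hypothetical: Corsair Link has no such command line today, and the local control port and text protocol here are invented for illustration only.

```python
import argparse
import socket

CONTROL_PORT = 48123  # assumed local port a running Corsair Link instance would listen on

def send_profile(name):
    """Ask an already-running instance to switch profiles; False if none is running."""
    try:
        with socket.create_connection(("127.0.0.1", CONTROL_PORT), timeout=1) as s:
            s.sendall(("setprofile " + name + "\n").encode())
        return True
    except OSError:
        return False

def main(argv):
    parser = argparse.ArgumentParser(prog="Corsairlink.exe")
    parser.add_argument("-setprofile", dest="profile", help="profile name to activate")
    args = parser.parse_args(argv)
    if args.profile is not None:
        if not send_profile(args.profile):
            print("Corsair Link is not running.")
            return 1
    return 0
```

Because the switch only signals an existing instance, any macro tool or desktop shortcut that can launch an exe with arguments gets profile switching for free.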
  3. I really don't think Pascal will be such an enormous leap forward. They always say "our next line of cards will be 10x faster", but in reality the card ends up only about 2x faster than the previous generation. It comes down to physical limits. There are two alternatives when it comes to chip making:

1: Make the lithography smaller. Here you will reach a limit where it becomes more and more difficult to shrink further: once the feature size approaches the wavelength of the light used (currently they use UV light to reach the smallest possible wavelength), it is simply not possible to increase the detail any further. EUV light can reach 10nm, and that is simply the smallest chip you can make. The GTX 980 Ti, for example, is 28nm; if you halve that to 14nm you get 4x the transistors, but you might lose 100-200% to heat, giving a 2x-3x performance increase. Note that there are limits on how close you can get to that floor; Pascal will be 16nm, so it will already be VERY near the limit.

2: Make the chip larger while keeping the feature size. But then the whole chip runs a little slower, since the electrical signals need time to travel from the far end of the chip to the near end.

I'm really not that impressed by Pascal. I think it will be somewhat faster than the 980 Ti, but not groundbreaking; more like the difference between a GTX 970 and a GTX 980 Ti.
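The die-shrink arithmetic above can be checked directly: the ideal areal scaling between two process nodes (ignoring the heat and yield losses just mentioned) is simply the square of the feature-size ratio.

```python
def density_gain(old_nm, new_nm):
    """Ideal transistor-density scaling when shrinking from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

print(density_gain(28, 14))  # 28nm -> 14nm: 4x the transistors in the same area
print(density_gain(28, 16))  # 28nm -> 16nm (Pascal): about 3.06x
```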
  4. mattlach: Since that screw does not screw into the HG10 bracket, it does not necessarily need to be identical to the screws shipped with the bracket. I can say that a standard M3 screw and nut fit in that hole on the card. I had to do the same on my card, since I had the exact same "issue", so to speak. I actually used a CD drive screw and a random M3 nut found in my toolbox to screw that loop and the card together.
  5. Hyncharas: No.

Q1 = Jan Feb Mar
Q2 = Apr May Jun
Q3 = Jul Aug Sep
Q4 = Oct Nov Dec

But I guess Lapdog and Bulldog got severely delayed due to the N980 problems; I guess we won't see either before Q2 2016. I suspect it's the same people working on Lapdog/Bulldog as worked on the N980, so if the N980 gets delayed further, Lapdog/Bulldog will be delayed further as well, since they are in the queue.
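The month-to-quarter mapping above is mechanical enough to express as a one-liner (a trivial sketch, nothing Corsair-specific):

```python
def quarter(month):
    """Calendar quarter (1-4) for a month number (1-12)."""
    return (month - 1) // 3 + 1

print([quarter(m) for m in range(1, 13)])  # [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]
```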
  6. >>though it would be good to get some actual measurements. For that, the temp probes that are shipped with the Corsair Commander Mini are excellent.
  7. The H80i has a thicker radiator, so its cooling surface is very close to that of the H100i. But if you have the space for a 240mm radiator, why not use it? So your 120mm radiator is able to keep the temps below 40C at full load with Unigine?
  8. mattlach: The idea behind a benchmark is to measure how a card performs in the "worst case", because that is what matters: you can't know whether you'll be unlucky enough to get a dozen high-poly enemies spawned in a specific part of the game. Many games use randomness to decide such things, so to get consistent results, and to avoid a card getting a good score just because you were lucky (with regard to graphics load) in a game, you need to use benchmarks. Even if you used a bot to play a game, you would still get different loads from play to play; one example is GTA V, which randomly spawns civilian cars. So this is more evidence that an H100i GTX is a great thing to put on a video card, especially if you get the fan settings just right and bring the temps down to ~40.
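The point about random spawns can be illustrated with a toy model (the numbers are entirely made up): a benchmark effectively pins the random seed so every run reproduces the same load, while normal gameplay does not.

```python
import random

def frame_load(seed):
    """Toy per-frame load: base cost plus cost of randomly spawned high-poly objects."""
    rng = random.Random(seed)
    spawns = rng.randint(0, 12)   # e.g. GTA V spawning a random number of civilian cars
    return 10.0 + 1.5 * spawns    # arbitrary milliseconds-per-frame figure

# "benchmark": fixed seed, so repeated runs are identical and comparable across cards
bench_run_1 = frame_load(seed=42)
bench_run_2 = frame_load(seed=42)

# "gameplay": the seed differs every session, so the measured load varies run to run
play_runs = [frame_load(seed=s) for s in range(20)]
```

Two cards measured on the fixed-seed path see exactly the same work, so any score difference is the card; on the gameplay path, part of the difference is just luck.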
  9. I think 60C is very bad, even if you get 80C with the stock cooler. At 80C you are literally TOASTING the silicon into a bit of coal over time... (Come on! Look at the rubbish NVIDIA stock cooler: its surface area is so small you can literally laugh at it. NVIDIA really needs to hire a new designer for their cooling.)

70C is like "You are at least TRYING to cool the card, but come on!"
60C is more like "Yeah, a little bit of cooling, but not great!"
50C is like "Nice cooling, dude, but it can be improved."
40C is like "Finally! Now the card will survive a very long time."
<35C under load is like "You are a cooling champion!"

After tinkering a little with the settings in Corsair Link, with fixed fan speeds instead of temperature-controlled fans, I have managed to achieve a maximum temp of 40C in Unigine. I should experiment a little more: connect the pull fans of the H100i GTX to the Corsair Commander Mini instead, and make the pull fans run a bit faster than the push fans, to create a lower pressure inside the radiator and lower the temperature further.
  10. mattlach: That review seems to use only gaming to measure the temperature, not benchmarking software. Since a game does not put full load on the card at all times, the temps will of course be lower. Their ambient also seems to be lower; mine is closer to 23C. I did a second test to see if I could get the temps down, and I actually did. By running Unigine for an hour I did a "burn-in" of the card so the TIM got to settle properly (as you know, Arctic Silver 5 needs a longer session at higher temps to settle properly), and now the temps during Unigine stay around 35-42. The HG10 N980 is VERY effective at cooling the VRM and VRAM too. My card does not have any VRM/VRAM probe, but I can feel that the bracket itself gets hot to the touch when the card is under load. That is a strong indication that solutions without VRM/VRAM cooling, such as those "fits-all-cards" products that mount to any NVIDIA or AMD card with just air blowing at the VRM, no heatsink, and no VRAM cooling at all, will shorten the life of your card.
  11. The PCB must be reference, but the cooler doesn't need to be. I would still recommend getting a card with a reference cooler if you are going to remove the stock cooler anyway, because then you can be sure it's a reference card with a reference cooler and all components in exactly the right places.
  12. 71.5% is mine. It was a completely reference card, with reference cooler and everything, before I fitted the N980.
  13. Mattlach: Ambient is 22.7C. And ASIC... I don't know how to check that? How do I check that on Gigabyte cards? I looked in the NVIDIA Control Panel and NVIDIA Experience but couldn't find anything.

Also, I'm NOT using the stock SP120Ls that are delivered with the H100i GTX. Rather, I'm using purchased SP120 HP PWM editions; those fans are better. Another thing to note is that there is quite a difference between the H100i and the H100i GTX, since the radiator, hose diameter and pump are all improved on the GTX. Thus I suspect your H90 will probably be maybe 7-10C off.

About the mounting: it's more that there are not enough mounting points on the card to give enough counter-pressure. The PCB is simply "too soft" relative to the spacing between the mounting points. A solution, if the current design should be kept, i.e. if you are not going to sacrifice CoolIT support:

1: Supply a frame (a metal piece) with a round hole (slightly larger than an Asetek cooler head) and M3 studs, that can be placed in the square hole. The frame would overlap the square hole a little, giving 4 extra mounting points for the bracket, but it could then only be used with Asetek coolers. This frame could be fairly large and even have holes so it fits over the 8 studs; then it would be super-stable. The square hole would of course need to be a bit larger.

2: Move the whole cooler assembly about 0.5-1.5mm closer to the VRMs, and extend the threaded pipe on one of the corners by 2mm. One of the supplied standoffs should then have a very long screw section, so it screws through the bracket and comes out on the back of the card, where a provided nut is fitted. I don't think moving the cooler assembly this little would affect cooling. You would also need to supply 5 thumbscrews: one with a very long screw section, used for one corner when mounting a square cooler, and 4 normal thumbscrews.

This is because one of the Intel screw corners sits over one of the PCB mounting holes, but slightly off. If the bracket design were changed so this corner aligns with the PCB hole, a long standoff and a long thumbscrew could go through the threaded hole in the bracket and then through the card, with a nut screwed on behind. Implementing these design changes would give 5 extra mounting points for Asetek owners and 1 extra mounting point for CoolIT owners. The cooler would still be mounted to the bracket, but since more mounting points are provided for the bracket, the card wouldn't bend or flex when attaching the cooler.
  14. Makalaure: Agreed. However, if Corsair implements a design change, it does not necessarily mean the pump should be mounted directly to the card. The important thing is that the holes around the GPU are used, so that a working cooler->bracket->card solution can be designed. The design change could just as well mean that the hole in the bracket for the GPU chip is made round instead of square, with studs and 4 M3 screws added so the bracket is also affixed to the 4 corner holes around the GPU chip. (That would, however, make the bracket incompatible with the square CoolIT coolers.)
  15. Yeah, I took the screenshot immediately after exiting Valley. During Valley, the top temp was 45 and the lowest was 35. At first I thought Corsair Link averaged the temps over a longer time period (a couple of seconds) while Valley shows instantaneous temps, but actually it seems the card cools very quickly. I didn't think it could cool that quickly after the load is removed.