Corsair Community
Showing results for tags 'maximizing performance'.

  1. If you’re planning to build a new PC around a Zen 3-powered processor, or to upgrade your existing PC, CORSAIR has everything you need, ranging from high-performance coolers and high-speed memory kits to power supplies that’ll keep your system running at peak performance.
AMD’s Zen 3 desktop processors will continue the AM4 legacy, meaning that our entire lineup of CPU liquid coolers is compatible out of the box! And with new additions like the ELITE CAPELLIX series liquid coolers, you’ll get a high-performance pump, an all-new dedicated fan/pump and lighting controller called the Commander CORE, and powerful magnetic levitation fans for optimal cooling to keep your next Ryzen processor’s turbo clocks at maximum. If you want to keep your Zen under water, our Hydro X Series XC7 CPU block is a fantastic choice that’s compatible with the AM4 socket, and the rest of our Hydro X Series lineup includes everything you need to build a beautiful, high-performance liquid cooling loop.
Ryzen processors thrive with high-speed memory thanks to AMD’s Infinity Fabric, and we’ve got plenty to offer with a wide range of styles, speeds, and capacities to choose from. For the most bang for your buck, we recommend kits rated for 3200MHz–3600MHz. And if your next PC has an AMD B550 or X570 motherboard, you’ll want to take full advantage of PCI-Express 4.0’s raw bandwidth with a blazing-fast M.2 NVMe SSD like the MP600, which can hit sequential read speeds of up to 4,950MB/s and sequential write speeds of up to 4,250MB/s!
Our lineup of power supplies is widely recognized among enthusiast PC builders for its performance and reliability. Our new CX-F Series of RGB power supplies offers a great combination of affordability and performance, topped with a splash of color. If you want something with a few more features and higher efficiency, look at our award-winning RM, HX, and AX series power supplies, which feature a Zero RPM fan mode for quiet operation and fully modular cables to make cable management a breeze.
To keep up to date with the Zen 3 discussion or to get tips and tricks for your next build, join our community over on Reddit, Discord, or our User Forums!
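The 3200MHz–3600MHz recommendation tracks how Ryzen's Infinity Fabric clocks against memory. A rough sketch of the relationship (the helper function and the ~1800MHz FCLK ceiling typical of these chips are our illustration, not a CORSAIR tool):

```python
def fabric_clock_mhz(ddr_rate_mts, fclk_limit_mhz=1800):
    """The memory clock is half the DDR transfer rate; Infinity Fabric
    runs 1:1 with it up to the FCLK ceiling (commonly around 1800MHz),
    after which a 2:1 divider kicks in and latency suffers."""
    memclk = ddr_rate_mts / 2
    if memclk <= fclk_limit_mhz:
        return memclk, "1:1 (coupled)"
    return memclk / 2, "2:1 (decoupled)"

for kit in (3200, 3600, 4000):
    print(kit, fabric_clock_mhz(kit))
```

DDR4-3600 is the last speed grade that keeps the fabric coupled 1:1 under this assumption, which is why the sweet spot tops out there.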
  2. With the new NVIDIA RTX 3080 and 3090 graphics cards finally on the market, it’s important to keep your system as cool as possible with sufficient airflow. Whether you’re building a completely new system from scratch or upgrading your current one, you might want to double-check that your PC case can provide enough airflow. Here are a few of our top recommendations.
The new 4000D is a great choice if you prefer a clean, minimalist aesthetic (take a look at the AIRFLOW version if you want even better cooling). If you have a slightly bigger budget and want to “bling out your rig,” the 4000X gives you everything the 4000D has to offer, along with a tempered glass front panel and extra accessories: three RGB fans with AirGuide anti-vortex technology and a Lighting Node CORE to control the lighting. The 4000D, 4000D AIRFLOW, and 4000X are available in black and white.
The 275R Airflow is a budget-friendly mid-tower case with proper airflow, as the name suggests, thanks to the ventilated cutout design on its front panel. With 370mm of clearance for the graphics card, it can fit even the monster RTX 3090, with room for up to six 120mm cooling fans. Another great mid-tower case is the 465X RGB, especially since it comes with three of our highly popular LL120 RGB fans. Instead of a mesh grill or cutout design for the front panel, it features solid tempered glass positioned with enough space to allow good airflow without smothering the front intake fans.
If budget is not a problem and you’re willing to go all out on aesthetics without compromising on performance, the Obsidian 500D SE mid-tower and Obsidian 1000D super-tower are fantastic choices for air and water cooling alike. The 1000D is an absolute beast: it can hold up to 18 cooling fans, dual PC systems, radiators up to 480mm, and an advanced custom cooling loop like our Hydro X Series if that’s what you’re going for.
In conclusion, we’ve prioritized cases that offer a good amount of ventilation on the front panel, since that’s the driving force for pulling in lots of cool air. That said, if you’re planning on mounting a front-panel radiator in a push, pull, or push-pull configuration, it’s always best to double-check the clearance to make sure you still have enough room to accommodate your new graphics card. If you need help with your next build or upgrade, check out our community on Reddit, Discord, or our User Forums!
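That clearance check reduces to simple arithmetic. A hedged sketch (the function is hypothetical; the 370mm figure is the 275R Airflow's GPU clearance from above, 313mm is NVIDIA's published RTX 3090 Founders Edition length, and the radiator/fan thicknesses are typical front-mount values):

```python
def gpu_fits(case_clearance_mm, gpu_length_mm, radiator_mm=0, fans_mm=0):
    """True if the card clears the front of the case once any
    front-mounted radiator and fan stack is subtracted."""
    return gpu_length_mm <= case_clearance_mm - radiator_mm - fans_mm

# 275R Airflow (370mm clearance) vs. an RTX 3090 Founders Edition (313mm):
print(gpu_fits(370, 313))          # bare front panel -> True
print(gpu_fits(370, 313, 30, 25))  # 30mm radiator + 25mm fans -> True, barely
```

With a front radiator in push, the margin shrinks to 2mm in this example, which is exactly why measuring before you buy matters.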
  3. NVIDIA’s GeForce RTX 30-Series cards are their latest high-performance GPUs, featuring a host of new technologies and a completely new architecture (codenamed “Ampere”) that promises impressive improvements in gaming and rendering performance, particularly in ray-tracing-capable apps.
How does this affect my PSU?
New graphics cards mean new, better things for gaming. But this time around there’s another interesting change. In addition to being exceptionally cool-looking and supporting all-new RTX-focused features, the RTX 30-Series cards raise power requirements significantly over the 20-Series, with the RTX 3070, 3080, and 3090 rated for 220W, 320W, and 350W of GPU power respectively. Recommended PSU ratings have also increased this generation: for the RTX 3090, we recommend a PSU rated for at least 850 watts.
To accommodate the new power load without simply adding more 8-pin PCIe connectors, a new, smaller 12-pin power connector has been introduced on all three of NVIDIA’s brand-new RTX 30-Series Founders Edition graphics cards, the GeForce RTX 3070, 3080, and 3090.
Standard PCIe power connectors (screenshot from NVIDIA). New 12-pin power connector (screenshot from NVIDIA).
While a small adapter is included with each GPU for your existing power supply cables, that adapter converts two dedicated PCIe power cables to the new 12-pin connector per NVIDIA’s recommended configuration. Adapters can be clunky, ugly, and add resistance, so we’ve taken the extra step and made a much more elegant cable for all our modular PSUs. Our thoroughly tested solution is designed for those who have an eye for cable management and don’t want adapters cluttering up their clean builds. It connects two PCIe/CPU PSU ports directly to the new 12-pin connector. Fully compatible with all Type 3 and Type 4 CORSAIR modular power supplies, our new cable provides a clean, direct connection without the added resistance of a messy-looking adapter.
What are the benefits?
The new 12-pin connector is roughly the same size as a single industry-standard 8-pin PCIe connector but can provide significantly more power in the same space. This allowed NVIDIA to allocate the space previously used for power connectors to other things, like improved cooling. And since our solution is a complete cable rather than a pigtail-style adapter, you can get up and running without compromising your pristine cable management.
When/Where can I get it?
Soon! We’re hard at work making these cables right now, and our goal is to have them available as soon as you can buy your new RTX 30-Series card. If you have a modular CORSAIR PSU and want to be notified when the cable is available, click HERE to sign up and we’ll send you an email as soon as they’re ready.
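The reason NVIDIA specifies two dedicated PCIe cables feeding the 12-pin comes down to per-connector power budgets. A quick sketch using the conventional ratings (150W per 8-pin PCIe cable plus 75W from the slot; the helper is our illustration):

```python
# Conventional PCIe power budget figures: 150W per 8-pin cable, 75W
# delivered through the motherboard slot itself.
PCIE_8PIN_W = 150
SLOT_W = 75

def available_power_w(n_8pin_cables):
    """Total conventional power budget for a card fed by n cables."""
    return n_8pin_cables * PCIE_8PIN_W + SLOT_W

# Check each 30-Series card's rated GPU power against two cables,
# NVIDIA's recommended configuration for the 12-pin adapter:
for card, watts in (("RTX 3070", 220), ("RTX 3080", 320), ("RTX 3090", 350)):
    budget = available_power_w(2)
    print(f"{card}: {watts}W rated vs {budget}W budget -> ok={watts <= budget}")
```

Two cables plus the slot cover even the 3090's 350W rating with headroom, while a single cable would not, hence the two-cable requirement.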
  4. Is your system ready for Microsoft Flight Simulator 2020? If it isn’t, or you’re not quite sure, these tips can help. We’ve also put together a brief overview of our component lineup, ranging from SSDs to store large game titles, to coolers that keep your CPU at peak performance, to memory kits at different speeds and capacities to keep up with increasingly demanding virtual experiences. Let’s take a quick look at the game’s minimum and recommended system requirements:
MINIMUM (requires a 64-bit processor and operating system):
    • OS: Windows 10
    • Processor: Intel i5-4460 | AMD Ryzen 3 1200
    • Memory: 8 GB RAM
    • Graphics: NVIDIA GTX 770 | AMD Radeon RX 570
    • DirectX: Version 11
    • Storage: 150 GB available space
RECOMMENDED (requires a 64-bit processor and operating system):
    • OS: Windows 10
    • Processor: Intel i5-8400 | AMD Ryzen 5 1500X
    • Memory: 16 GB RAM
    • Graphics: NVIDIA GTX 970 | AMD Radeon RX 590
    • DirectX: Version 11
    • Storage: 150 GB available space
To sum up: Flight Simulator 2020 doesn’t exactly require the latest and greatest graphics card, which makes sense for a simulator title that’s heavy on CPU-based calculations (though it’s a good idea to have a card with a good amount of VRAM onboard, especially if you want to game at a higher resolution). That said, Microsoft does require at least a quad-core processor, and the storage space required for all the game assets is considerable.
Performance
Flight Simulator 2020 is an experience clearly built with next-generation hardware (and current-generation PCs) in mind. Sim titles tend to rely heavily on raw CPU resources, where core count and core clock are king. Flight Simulator 2020 will make use of all the threads it can get: we saw considerable usage across all available threads on our test system’s 1920X, though a few threads did more of the work, indicating that a higher all-core clock speed could yield higher in-game performance.
Storage
Adding an SSD to your system is a relatively simple upgrade, and with the latest Flight Simulator title requiring 150 GB of space for installation, consider picking up one of our SSDs in the 480GB–2TB range to help with load times (especially if you’re still relying on a mechanical hard drive).
Cooling
To keep your processor from thermal throttling and to maintain all-core boost clocks or manual overclocks, consider one of our all-in-one liquid coolers like the H150i, or jump into a fully custom liquid cooling system with our Hydro X Series of custom cooling components (which can also help pin your GPU at its boost clocks for more GPU-focused titles).
Memory
Microsoft recommends 16GB of RAM for Flight Simulator, which our friends over at Guru3D have confirmed through their own testing (Microsoft Flight Simulator 2020 performance with 16GB/32GB/64GB DRAM, courtesy of Guru3D.com: https://www.guru3d.com/articles_pages/microsoft_flight_simulator_(2020)_pc_graphics_performance_benchmark_review,5.html). If you intend to stream your flights, consider upgrading to a 32GB kit depending on your streaming/recording setup (or build a dedicated streaming machine with a capture card). Total available RAM isn’t the whole picture, though: CPU-bound titles such as Flight Simulator can see a noticeable performance bump from higher RAM speeds, especially on a Ryzen processor (pro tip: aim for 3200MHz–3600MHz if you’re on Ryzen 2000 or newer).
Final Thoughts
Games are more demanding than ever, and Flight Simulator 2020 is no exception, raising the bar for performance in nearly every metric. Whether you’re a novice builder needing a quick upgrade or building a completely new system to power your sim rig, our products are ready to help you get the most performance out of your system, and our awesome community on Reddit, Discord, and our own User Forums can help you get the info you need!
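If you want to script the 150 GB storage check before downloading, Python's standard library can compare free space against the requirement (the helper name is ours):

```python
import shutil

def can_install(path=".", required_gb=150):
    """Compare free space on the drive containing `path` against the
    Flight Simulator 2020 install requirement (150 GB by default)."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb, free_gb >= required_gb

free, ok = can_install()
print(f"{free:.1f} GB free; enough for Flight Simulator: {ok}")
```

`shutil.disk_usage` reports the whole volume, so point `path` at the drive you actually plan to install to.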
  5. Last year, we launched CORSAIR ONE to much critical acclaim, earning several industry awards for its compact, quiet, powerful design and innovative convection-assisted liquid cooling solution for both its CPU and GPU. Today, we’re launching the CORSAIR ONE ELITE and PRO+, powered by the latest Intel 8th-generation Core processors with a few additional tweaks under the hood.
SIX CORES: STREAM+GAME+CREATE
The biggest difference between the CORSAIR ONE ELITE and PRO+ models and the previous CORSAIR ONE PRO is a platform upgrade from the Intel Z270 chipset and a 4-core, 8-thread i7-7700K processor to Intel’s new Z370 chipset and the 6-core, 12-thread gaming and multitasking beast, the Core i7-8700K. With a clock speed of up to 4.7GHz (up from the i7-7700K’s 4.5GHz), the i7-8700K in the new CORSAIR ONE ELITE and CORSAIR ONE PRO+ lets you push out as many frames as their NVIDIA GeForce GTX 1080 Ti video cards can create, for ridiculously high-fps gaming or smooth ultra-high-definition 4K experiences. It doesn’t stop at gaming, though: the CORSAIR ONE ELITE and PRO+ are multitasking champions, delivering smoother experiences for tasks such as rendering massive 3D/video projects, streaming, and even day-to-day work like launching office software and web browsing with an excessive number of tabs open.
The CORSAIR ONE ELITE and CORSAIR ONE PRO+ are housed in the same compact, sleek aluminum chassis and use our patented convection-assisted cooling solution for both the processor and the video card. They feature up to 32GB of VENGEANCE LPX DDR4-2666 memory (the PRO+ model comes with 16GB) and a 480GB M.2 NVMe SSD supplemented by a 2TB 2.5” SATA HDD for mass storage. Both models are upgradeable, with easy access to the memory and an unpopulated 2.5” drive slot for expanding storage as the system grows with you.
Here’s a quick breakdown of the specifications of the CORSAIR ONE PRO+ (CS-9000013) and CORSAIR ONE ELITE (CS-9000014):
CPU: Intel Core i7-8700K, liquid-cooled (6 cores, 12 threads, up to 4.7GHz)
Chipset: Intel Z370
Memory: 16GB (2x8GB) CORSAIR VENGEANCE LPX DDR4-2666 (PRO+) | 32GB (2x16GB) CORSAIR VENGEANCE LPX DDR4-2666 (ELITE)
Graphics: NVIDIA GeForce GTX 1080 Ti 11GB GDDR5X, liquid-cooled
Storage: 480GB M.2 NVMe SSD + 2TB 2.5” 7mm SATA HDD
Power: CORSAIR SF500 500W SFX, 80 PLUS Gold @ 50C
Network: 802.11ac 2x2 Wi-Fi, Bluetooth 4.2, Gigabit Ethernet
Ports (front): USB 3.1 Gen 1, HDMI 2.0a
Ports (rear): PS/2, 2x USB 2.0, 2x USB 3.1 Gen 1, USB 3.1 Gen 2 Type-A, USB 3.1 Gen 2 Type-C, Ethernet, audio, 2x DisplayPort 1.2, HDMI 2.0
Software: Windows 10 Home 64-bit, CORSAIR LINK, PC Doctor
Size: 200 x 176 x 380 mm (7.9” x 7” x 15”), 7.4 kg (16.3 lbs.)
Warranty: 2 years
MSRP: $2,799 (PRO+) | $2,999 (ELITE)
The CORSAIR ONE ELITE and CORSAIR ONE PRO+ are both available from our webstore. For more information about CORSAIR ONE, join our community at the CORSAIR USER FORUMS.
  6. Cooling and lighting control for a modern system can be convoluted and messy: one program controls your lighting, another controls your fans, and maybe a third monitors your temperatures. Our CORSAIR LINK software and the new Commander PRO combine these features into a single device-and-software combo. The Commander PRO is the key to controlling almost every aspect of your build’s cooling and lighting. You can monitor temperatures with the included thermistors; control PWM and DC fans and RGB LED strips; and even connect other CORSAIR LINK USB devices, such as our intelligent power supplies and Hydro Series coolers, through the integrated USB 2.0 hub.
Connectivity
After unboxing the Commander PRO, you’ll notice that it’s surprisingly compact, measuring 133mm x 69mm x 15.5mm and connecting through a single internal USB 2.0 cable and a SATA power cable. A quick look at the Commander PRO’s onboard connections reveals:
    • 2x RGB LED channel ports
    • 4x Thermal sensor headers
    • 6x 4-pin fan headers
    • 2x USB 2.0 headers
Inside the Box
The Commander PRO includes the following:
    • 2x RGB LED hub cables
    • 4x Thermal sensors
    • 5x Fan extension cables
    • 2x Pieces of mounting tape
Physical Installation
Installing the Commander PRO is simple: find a flat surface inside your case and stick it in place with the included double-sided mounting tape. Plan ahead and pick a location reachable by all the fan and LED cables (the included fan extension cables provide added flexibility). Once you have everything connected, you’ll see something like this screenshot when you launch CORSAIR LINK.
Lighting
Just like the Lighting Node PRO, the Commander PRO unlocks a plethora of lighting effects that you can sync across your compatible LINK RGB devices, or you can control each device individually for wild effects. All the classic modes are there, and as of CORSAIR LINK 4.7 (which you can download here) you’ll also have access to:
    • Sequential
    • Marquee
    • Strobing
    • Visor
After you’ve given your build some personality, you’ll want to dive right into temperature sensors and fan control. This is where the Commander PRO really shines.
Fans and Temps
Several preset fan curves can be chosen to adjust fan speeds automatically. You can also choose fixed RPM, fixed percentage, or a custom curve mode, so your fans run exactly where you want them. For example, if you want a quiet system and your system temperature stays within acceptable limits, you can have the Commander PRO turn off all your system fans with a custom fan curve. You can assign a curve to multiple fans with the “Copy to” buttons: if you want all your intake fans to spin up dynamically based on your GPU temperature, group the configuration to a thermal sensor and, using the drop-down menu to the right, copy it to the appropriate fans. There’s no limit to the customization you can do. It’s your build; cool it how you want to.
Extended Connectivity
In addition to all the devices you can plug directly into the Commander PRO, there’s an integrated USB 2.0 hub, so you can connect other USB devices that would otherwise take up a USB header on your motherboard. This is especially handy if you want one of our intelligent power supplies or a Hydro Series liquid cooler on a single USB header.
Mapping Out Your System
Once you have all your devices connected and configured, you can create a map of your system by going to the “Configure” tab in CORSAIR LINK. You can pick an empty view of your chassis (with a selection of CORSAIR cases to choose from) or upload your own image of your case, then drag and drop items from the sidebar on the left to their appropriate positions on the case image. You can see I went ahead and put my fans, temperature sensors, and lights where they should be in my Vengeance C70.
The Commander PRO combines powerful fan controls and advanced lighting modes into a single device powered by CORSAIR LINK. It’s available now, and if you have any questions, feel free to ask us in the CORSAIR Forums.
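A custom fan curve like the quiet profile described above boils down to linear interpolation between temperature/duty points. This sketch is our own illustration of the idea, not CORSAIR LINK's actual implementation:

```python
def fan_duty(temp_c, curve):
    """Linearly interpolate a fan duty (%) from (temp, duty) points,
    clamping below the first point and above the last -- the same idea
    behind a custom curve in a fan controller."""
    curve = sorted(curve)
    if temp_c <= curve[0][0]:
        return curve[0][1]
    if temp_c >= curve[-1][0]:
        return curve[-1][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

# A quiet profile: fans fully off until 40C, ramping to 100% at 80C.
quiet = [(40, 0), (60, 40), (80, 100)]
print(fan_duty(30, quiet), fan_duty(50, quiet), fan_duty(80, quiet))
```

The zero-duty segment below 40C is exactly the "turn off all your system fans until needed" behavior described above.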
  7. Performance DRAM such as our Vengeance and Dominator Platinum series can be found at rated speeds of up to 4333MHz. However, you might notice that when you first install your RAM and boot into your system’s BIOS, the RAM runs at its standard speed (2133MHz or 2400MHz in the case of DDR4 memory). Why does memory initially run at this slower speed?
*DDR4 memory running at its stock 2133MHz speed (shown as 1066.7MHz in CPU-Z).
To answer this question, consider the many possible combinations of motherboards, processors, and memory. A given memory kit can be installed on numerous processor/motherboard combinations, only some of which can actually handle the settings needed for the modules to run at their rated speed. To avoid a bad combination resulting in an unbootable system, memory is set to run at a standard speed out of the box, which keeps the modules within spec and works universally with all motherboards that support that type of memory.
Intel XMP (Extreme Memory Profile) is a predefined high-performance profile that’s been tested to work with that particular module or set of modules. To enable XMP, you must install your high-performance memory on a motherboard that supports XMP in some form (usually an Intel Z- or X-series chipset) and enable XMP within your motherboard’s overclocking utility.
*DDR4 memory running at 3000MHz with XMP (shown as 1498.5MHz in CPU-Z).
If your motherboard supports overclocking but doesn’t offer the ability to read a module’s XMP, as is the case with most AMD motherboards, the label on the modules will list the rated speed, CAS timings, and voltage. These settings can be applied manually within the overclocking utility in your motherboard’s BIOS to reach the module’s rated speed; however, adjustments may be needed on non-Intel platforms.
High speed memory can provide significant gains in various workloads from gaming to content creation. With the help of XMP, unlocking more performance can be as simple as turning it on in your system BIOS.
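The CPU-Z figures quoted above follow directly from DDR (double data rate) arithmetic: the advertised speed is twice the I/O clock that monitoring tools report. A quick illustration:

```python
def effective_rate(cpuz_mhz):
    """DDR transfers data on both clock edges, so the advertised speed
    is twice the I/O clock that tools like CPU-Z report."""
    return cpuz_mhz * 2

print(round(effective_rate(1066.7)))  # stock JEDEC speed -> 2133
print(round(effective_rate(1498.5)))  # with the 3000MHz XMP profile -> 2997
```

So a reading of 1498.5MHz in CPU-Z is the expected result for a kit running its 3000MHz XMP profile, not a sign that XMP failed to apply.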
  8. This build log is going to be a bit on the personal side. The fact is, at its core, Corsair is a cadre of geeks with shared interests trying to make cool stuff. A lot of companies want to project being “cool” or “rock stars,” but the reality here is that our products are conceived and designed by a bunch of people who are just trying to produce something they’d use. Why am I laboring over the notion that Corsair is ultimately a fairly human organization? Because, well, human things happen to us. At the end of August, I had a very good friend die in a motorcycle accident. He was in his early thirties, driving home from work as a district supervisor for DHS out of Oakland, California. Hit a bad patch of asphalt, lost control of his motorcycle, went under a semi, and that’s all she wrote. Odds are you don’t know him, but given the number of people I saw at his memorial service, I wouldn’t be surprised if one or two of you did. His name was Benjamin Moreno. Ben was a fairly serious gamer. We got into Mass Effect 3 multiplayer together, then graduated to MechWarrior Online with some of our friends. He and his wife were into Star Wars: The Old Republic and Elder Scrolls Online, and near the end had spent considerable time playing Dota 2 and Heroes of the Storm. He got me to give Dragon Age II another chance (and was right on the money). He was also a big part of my choice to join Corsair. Outside of that, he was – regardless of your politics – an exceptional cop. Tough-minded, fair, and directly responsible for saving many lives. Before that, he was in the Air Force. Through his life, he had friends who he’d set on the right path when they’d strayed, and was generous with his time and attention. There are an awful lot of people who would be far worse off today if it hadn’t been for him. Unfortunately, Ben left behind a widow, Risa, and a very young daughter, too young to really comprehend that her father’s not coming home. 
His family lives on the outskirts of the Bay Area, which unfortunately played a role in his passing due to the long commute. Gaming was and is a very large part of how they stayed in contact with friends. He and I often talked about someday building him a ritzy custom loop system when circumstances and finances permitted. Since Risa is an avid gamer and plays a healthy amount of Dota 2, building her a proper custom loop gaming machine seemed like the right thing to do. It didn’t have to be as fancy as his would have been, but it should have plenty of horsepower for gaming, photo editing, and coding. You’ll find the custom loop is excessive for this build, but I haven’t built a custom loop for performance reasons in a long time. The fact is that it looks cool, not just to fellow geeks, but to just about everyone. With that said, here’s the component breakdown for the “Blight” Memorial Build, named after his handle:
Corsair Carbide Air 240: His old gaming PC was built in an Air 540, so it seemed appropriate to go with its more compact cousin for the new one. This would also be an opportunity to show a custom loop operating inside this substantially smaller chassis.
Intel Core i7-5775C: We had a couple of spare Broadwell chips from internal testing. These are both remarkably powerful and remarkably efficient, and while it’s not the latest and greatest available, the i7-5775C is mighty close. Four cores, eight threads, that massive L4 cache, second in IPC only to Skylake, and a 65W TDP. The odds of being CPU-limited with this chip are very low.
ASRock Z97E-ITX/ac Mini-ITX: We did our internal Broadwell testing on this platform and found it rock solid with good overclocking potential. Given the cramped quarters of the Air 240, a smaller motherboard seemed necessary.
Corsair Dominator Platinum 2x8GB DDR3-2400 C10 with Lightbars: In my testing, I’ve found 2400MHz to be the perfect speed for DDR3 on Haswell and, to a lesser extent, Broadwell. 16GB of DRAM provides plenty of memory for almost any task.
EVGA GeForce GTX 970: It didn’t make sense to put some monster graphics card in the build, but we definitely needed one that would be plenty powerful for gaming for the foreseeable future. NVIDIA’s GeForce GTX 970 was that card, and we went with an EVGA model because of EVGA’s tendency to adhere to NVIDIA’s reference design (improving waterblock compatibility).
Corsair Force LS 960GB SSD: The Force LS was our budget line up until our TLC-based Force LE drives, but make no mistake: these drives, and the 960GB one in particular, are plenty fast. We’re at the point where nearly a terabyte of solid-state storage is no longer outrageous, and the 960GB Force LS is a highly capable drive.
Corsair HX750i 80 PLUS Platinum Power Supply: The HXi series isn’t quite as popular these days with the more affordable RMi and RMx series floating around at 80 PLUS Gold efficiency, but the HX750i was chosen for its compatibility with our Type 3 sleeved cables, its higher efficiency, and its ability to run fanless at the loads this system was likely to produce.
Corsair Link Commander Mini: A powerful system need not be loud. The Commander Mini lets me spin the violet SP120 LED fans in the system at minimum speed and control the RGB lighting strips placed on the inside of the side panel, surrounding the window.
XSPC 240mm Radiator: For this build we’re looking at a rated maximum combined TDP for the CPU and graphics card of just 210 watts. Since even an H100i GTX can cool a 350W overclocked i7-5960X without too much difficulty, I felt a single 240mm radiator in the front would be fine for these highly power-efficient components.
EKWB FC970 GTX Waterblock: The PCB of the GTX 970 is so small, and the EKWB block really shows that off. The clear acrylic surface lets the end user see the coolant running through the graphics card, which is very cool. Because the block is so much shorter than the stock cooler, it affords us room in the case to optimally place the pump/reservoir combo.
XSPC Raystorm CPU Block with Violet LEDs: Since this build was intended to be showy rather than a crushing performer, I opted for XSPC’s Raystorm water block with violet LEDs to give the CPU the right glow.
EKWB D5 Vario XRES 100 Pump and Reservoir: I’ve had great experiences with the D5 Vario pump in my own liquid-cooled build, and this combo seemed like the perfect choice for an attractive, efficient system.
In addition to the parts used in this build, we also included a Corsair Vengeance K70 RGB keyboard, a Sabre RGB Optical mouse, and our new VOID RGB headset in black. With all the components installed, the “Blight” build looks like a fun-size version of a more beastly Air 540 liquid-cooled build, which achieves exactly the intended purpose. Because of the highly efficient components, the fans never have to spin up, and everything stays running cool and fast. The violet coloring (which I confess can look pink in some light) was chosen for its significance to both Risa and Ben, as it’s their favorite color.
It undoubtedly seems at least a little unusual to build a computer as a memorial for the passing of a dear friend, but gaming is fast becoming an integral part of our culture. I can think of no better tribute to a community gamer than to keep his wife connected with their friends and loved ones.
  9. IntroductionWith Intel’s 6th Generation Core processors – code-named Skylake – now out in the wild, we have an opportunity to directly compare DDR3 technology against DDR4. On paper, DDR4 is certainly more exciting: DDR3L offerings for Skylake stop at 2133MHz, while DDR4’s clock ceiling just keeps rising. The question now isn’t just whether or not DDR4 offers an appreciable improvement over DDR3L for Skylake users, but whether it’s price-performance competitive. Comparing Price Points The modern memory minimum for enthusiasts and gamers is really 16GB. 8GB is fine for most games, but there are newer games that have issues with that low amount, and more are expected to emerge. Dual 8GB DIMMs tend to be the best for price and for performance, so let’s compare how expensive our Vengeance DDR3L kits are to our Vengeance LPX DDR4 kits. DDR3L Price Point DDR4 16GB 1600 MHz $85 $90 16GB 1866 MHz $95 16GB 2133, 2400, 2666 MHz $100 16GB 2133 MHz $120 16GB 3000 MHz DDR3L is still our low price leader, but DDR4 is already extremely price competitive, and when we look at price-performance you’re going to see why most vendors are treating Skylake’s DDR3L support essentially as “legacy.” Also, remember that DDR3L is functionally identical to DDR3, it simply runs at a lower voltage. Testing Configuration We used the following system configuration to test performance: CPU: Intel Core i7-6700K @ 4.6GHz Motherboard: DDR3L: ASUS Z170-P D3 DDR4: ASUS Z170-DELUXE DRAM: DDR3L: 4x8GB Corsair Vengeance Pro DDR3L-2133 DDR4: 4x8GB Corsair Dominator Platinum DDR4-2800 Graphics Card: NVIDIA GeForce GTX 980 Storage: 480GB Corsair Force GT SSD CPU Cooler: Corsair Hydro Series H110i GT Power Supply: Corsair AX1200i Enclosure: Corsair Carbide Series Air 540 We then used these kits to test scaling down to 1600MHz C9 on the DDR3L and down to JEDEC (2133MHz C15) for DDR4. 
Overall Performance First, we’ll look at synthetic memory bandwidth tests just to get a feel for how the technology compares on a one-to-one. Raw memory read/write bandwidth is ever so slightly lower on DDR4 than it is on DDR3 at the same speed, but try to remember that DDR3’s entry level is actually 1600MHz. The real question and concern most users have when it comes to DDR4 is the higher latency, but as it turns out, this isn’t a very significant issue. DDR3-1600 has higher latency than any DDR4 on the market, while DDR3-2133’s latency is only marginally lower than DDR4-2400. Right away I’ll say that in practical game testing – including testing with the integrated graphics – Skylake just doesn’t seem to benefit substantially from faster memory. This may change with DirectX 12, but modern games seem to be more capacity intensive than speed intensive. However, for any kind of multimedia work, memory speed becomes much more relevant. Unless you’re running DDR3-2133, DDR4 is going to be consistently faster across the board, although its advantage does wane fairly early on. The difference isn’t staggering, but it’s measurable. The same trend occurs with Adobe Premiere CC and Adobe Media Encoder, though more pronounced. DDR3-1600 is just too slow for Skylake and leaves significant performance on the table. Finally, our baseline for the price-performance metric. PCMark 8’s Adobe Suite is consistently faster on DDR4 and continues to scale up gradually with each speed grade. The crux is that there’s a very modest performance delta between DDR3-2133 and DDR4-2133, but it’s negligible and easily remedied by just going up a single speed grade on DDR4. Price-to-Performance Now that DDR4 has hit essentially mainstream pricing, DDR3L’s price advantage has become negligible. At the time of this writing, 16GB of DDR4-2666 can be had for the same price as 16GB of DDR3L-1866, and the same amount of DDR4-3000 can actually be had for less than the same amount of DDR3L-2133. 
While higher speed memory has historically tended to be a worse value than lower speed memory, it’s worth noting that DDR4 gives you more performance-per-dollar than any DDR3 speed grade except 1600MHz. And the flipside of DDR3-1600 is, as you saw earlier, a notable performance hit.

Conclusion

We’ll be continuing to test DDR4 against modern games as they come out, but regardless of performance scaling in games, DDR4 ends up being faster in virtually any other task than DDR3 and is a better price performer than the DDR3L needed to run Skylake. Users building new rigs with Skylake CPUs should really only be considering DDR4 and the associated boards.
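The performance-per-dollar metric above can be sketched as a simple ratio. The prices approximate the comparison earlier in the post; the PCMark-style scores below are hypothetical placeholders, not our measured results.

```python
# Sketch of a performance-per-dollar comparison. Prices approximate the
# article's table; the benchmark scores are hypothetical placeholders.
kits = {
    # name: (price_usd, benchmark_score)
    "DDR3L-1600": (85, 100.0),   # placeholder score
    "DDR3L-2133": (120, 104.0),  # placeholder score
    "DDR4-2666":  (95, 104.5),   # placeholder score
    "DDR4-3000":  (100, 106.0),  # placeholder score
}

def perf_per_dollar(price_usd, score):
    """Benchmark points earned per dollar spent on the kit."""
    return score / price_usd

# Rank the kits from best to worst value.
for name, (price, score) in sorted(kits.items(), key=lambda kv: -perf_per_dollar(*kv[1])):
    print(f"{name}: {perf_per_dollar(price, score):.3f} points per dollar")
```

With placeholder scores this close together, the cheaper kits dominate the value ranking, which is exactly why the entry speed grades of each technology end up being the interesting comparison.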
  10. NVIDIA’s second generation Maxwell architecture has turned out to be beyond formidable. Their engineers have a habit of leaving gas in the tank for partners, enthusiasts, and overclockers; the card is fast out of the box, and if you’re willing to play with it a bit, it can be even faster. Speaking of partners, we partnered with MSI and licensed our HG10 technology to them to bring you the Hydro GFX, one of the fastest and most overclockable GeForce GTX 980 Ti cards on the planet. The Hydro GFX comes out of the gate with boundary-pushing factory overclocks, and impressively, there’s still substantially more performance to be had for users willing to push the limits of their card.

               Reference GTX 980 Ti        Hydro GFX                   Difference
GPU            NVIDIA GeForce GTX 980 Ti   NVIDIA GeForce GTX 980 Ti   -
CUDA Cores     2816                        2816                        -
Default Clock  1000 MHz                    1190 MHz                    +19%
Boost Clock    1075 MHz                    1291 MHz                    +20%
Memory         6GB GDDR5                   6GB GDDR5                   -
Memory Clock   7000 MHz                    7096 MHz                    +2%
Memory Bus     384-bit                     384-bit                     -
Rated TDP      250W                        260W                        +4%
Maximum TDP    275W (10%)                  280W (7%)                   +2%

As you can see, the Hydro GFX carries a massive core clock speed increase over the stock GTX 980 Ti that catapults it well past even TITAN X performance. The increased TDP headroom helps to cover the higher clocks, but liquid cooling does even more. While a reference 980 Ti will hit 83C and then thermally throttle, reducing boost clocks to keep thermals in check, a Hydro GFX never hits that throttle point; it’s solely power limited. And because the Hydro GFX uses a closed loop liquid cooler on the GPU, the blower fan and baseplate – licensed from us – only have to cool the power circuitry and video memory. Bottom line: everything runs cooler, everything runs more efficiently, and power limits are hit less frequently as a result. We ran the 980 Ti and the Hydro GFX through three benchmark runs in Unigine Valley at 4K resolution. 
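The “Difference” column in the spec table is just the ratio of factory clock to reference clock, which a couple of lines of arithmetic confirm:

```python
# Quick arithmetic behind the "Difference" column in the spec table:
# percentage uplift of the Hydro GFX clocks over the reference card.
def uplift_pct(reference_mhz, factory_mhz):
    """Percent increase of the factory clock over the reference clock."""
    return (factory_mhz / reference_mhz - 1) * 100

print(f"Default clock: +{uplift_pct(1000, 1190):.0f}%")  # +19%
print(f"Boost clock:   +{uplift_pct(1075, 1291):.0f}%")  # +20%
print(f"Memory clock:  +{uplift_pct(7000, 7096):.1f}%")  # +1.4%, rounded up to +2% in the table
```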
You can see the reference 980 Ti plateau at about 83C, after which clocks drop a couple of bins and never hit the same peaks again. Meanwhile, the Hydro GFX’s core temp doesn’t really plateau until about 50C, and its clocks stay within the same tight band for most of the run.

The Hydro GFX still has some headroom beyond that, though. The sample we used for testing was able to do offsets of +100 on the GPU and +500 on the GDDR5 with the TDP set at 107%. The resulting peak clock was a very healthy 1467 MHz on the GPU and 8.1 GHz on the GDDR5.

For our testing, we used the following system:

• CPU: Intel Core i7-6700K @ 4.6 GHz
• CPU Cooler: Corsair Hydro Series H110i GT
• DRAM: Corsair Vengeance LPX 4x4GB DDR4-3600MHz C18
• Motherboard: ASUS Z170-DELUXE
• Storage: Corsair Force GT 480GB
• Power Supply: Corsair AX1200i
• Enclosure: Corsair Carbide Series Air 540
• Operating System: Windows 10 64-bit

So how does the Hydro GFX work out in practice? A stock Hydro GFX can get you as much as 15% more performance over a reference 980 Ti depending on the title – well ahead of a TITAN X – and our overclocked settings bumped that up to more than 20%. We ran our tests at three resolutions: 1920x1080 (1080p), 2560x1440 (1440p), and 3840x2160 (4K).

We tested our least graphically intensive game, GRiD: Autosport, as a baseline with 4xMSAA and all settings maxed out. Each card produced fluid framerates even at 4K, so the Hydro GFX’s victory is mostly academic here. At lower resolutions, overclocking doesn’t earn us a whole lot and we’re really fairly platform limited, but once you hit 4K and the workload shifts to the GPU, the cards start to spread out.

We tested Grand Theft Auto V with almost all settings maxed, except for FXAA only (no MSAA) and grass set to Very High instead of Ultra. Because of how variable GTA’s framerate is, every additional average frame can really matter. A stock 980 Ti sits at 41.7 fps at 4K – not terrible and certainly playable, but a little choppy. 
Meanwhile, the stock Hydro GFX can bring that up to 44.8 fps, and overclocking will get you an even bigger gain: 48.4 fps. Finally, Shadow of Mordor was tested with all of its settings maxed, and it showed some of the biggest gains from going to the Hydro GFX. This is a heavily GPU-limited game, and at 4K, the Hydro GFX can run it as much as 22% faster with our overclock. We’ll be clear: the Hydro GFX is among the fastest graphics cards you can buy, and overclocking makes it faster still. That, and even with a heavy overclock and TDP jump, the integrated Hydro Series cooler can keep the GPU running well below 60C. If you simply must have the highest performance you can get in a single-GPU package, Hydro GFX is certainly a way to go. To learn more or pick up one of your own, you can visit our product page here.
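The GTA V numbers above translate into percentage gains over the stock reference card like so:

```python
# The GTA V 4K figures from the post, restated as percentage gains
# over the stock reference GTX 980 Ti.
ref_fps = 41.7        # reference GTX 980 Ti
hydro_fps = 44.8      # stock Hydro GFX
hydro_oc_fps = 48.4   # overclocked Hydro GFX

def gain_pct(baseline, result):
    """Percent improvement of `result` over `baseline`."""
    return (result / baseline - 1) * 100

print(f"Stock Hydro GFX: +{gain_pct(ref_fps, hydro_fps):.1f}%")
print(f"Overclocked:     +{gain_pct(ref_fps, hydro_oc_fps):.1f}%")
```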
  11. We’ve been collecting data on memory bandwidth for some time now – of course we have – but one of the big questions hanging over Skylake is what its DDR4 support really brings to the table. It’s also worth comparing four generations of memory controllers – two dual-channel and two quad-channel – and seeing what the weaknesses and strengths of each are. With all that in mind, we compared Intel’s Ivy Bridge-E (quad-channel DDR3), Haswell (dual-channel DDR3), Haswell-E (quad-channel DDR4), and Skylake (dual-channel DDR4) at a variety of speed grades in synthetic testing in AIDA64 to isolate raw memory bandwidth. You may have heard by now that Skylake has a very robust memory controller, and as you’ll see, that’s turned out to be true.

The following CAS latencies were used for each speed grade:

MEMORY CLOCK   DDR3 CAS LATENCY   DDR4 CAS LATENCY
1600 MHz       10                 -
1866 MHz       11                 -
2133 MHz       11                 15
2400 MHz       11                 15
2666 MHz       11                 15
2800 MHz       12                 16
3000 MHz       -                  16
3200 MHz       -                  16
3333 MHz       -                  16
3466 MHz       -                  18
3600 MHz       -                  18

One crucial thing to point out with DDR4 is that it has an oddball “CAS latency hole.” You’ll notice we jumped directly from C16 to C18; C17 isn’t officially supported. The result is a substantial jump in CAS latency moving up to 3466MHz that needs to be ameliorated, amusingly enough, by driving the memory at even higher clocks.

Read Speed

The blue bars represent our DDR3 configurations, while the red bars represent our DDR4 configurations. This should hopefully lay to rest some concerns about DDR4’s higher latencies negatively impacting performance when compared to DDR3. There were situations where DDR2 could be faster than DDR3 during that transition, but DDR4 is a different animal: it offers consistently higher read bandwidth at the same clock. Note also that Haswell’s memory controller has a hard time going past 2400MHz, which really has been the performance sweet spot for DDR3. 
Yet there’s no point where the wheels start to shake on Skylake’s controller; it continues scaling, even up to and beyond 3600MHz. Finally, one more trend you’ll see: DDR4-3000 on Skylake produces more raw memory bandwidth than Ivy Bridge-E’s default DDR3-1600. We now have a mainstream, dual-channel platform capable of generating nearly as much memory bandwidth as last generation’s quad-channel one.

Write Speed

Interestingly, memory write operations have consistently been a minor sore spot. Haswell-E’s memory write performance capped at ~48000 MB/s and basically stayed there regardless of speed. That’s mighty fast, but Skylake is actually able to exceed it at 3200MHz and beyond. Skylake also easily eclipses Haswell and Ivy Bridge-E.

Copy Speed

The memory copy operations look basically the same as the read operations. Haswell has the same drop at 2666MHz, and the DDR4-equipped platforms are consistently faster even at the same speed. Skylake’s exceptional ability to scale up in clock speed allows it to make up bandwidth and, at a high enough speed, puts it within striking distance of Haswell-E.

Latency

This is arguably what DDR4 skeptics are going to gravitate toward despite the immense raw bandwidth of the technology. DDR4 latency is a bit higher than DDR3’s, but not catastrophically so. What you need to focus on is essentially mapping the curve of DDR3 against the curve of DDR4. DDR3 more or less starts at 1600MHz for mainstream platforms, while DDR4 doesn’t go below 2133MHz. So at the entry level for each platform, latency is more or less the same, while bandwidth is significantly better on DDR4.

Conclusions

First, while Skylake’s instructions-per-clock gains are a little underwhelming, its memory controller is something else entirely. We’ll need to see how it handles DDR3L – and we’ll be testing that in greater detail soon enough – but it has none of the scaling hiccups of its predecessors. 
Skylake’s memory controller is incredibly robust, and overall, Skylake seems to be more efficient with memory in general. Second, DDR4 just doesn’t have the latency issues the transition from DDR2 to DDR3 did. In fact, it’s only when you’re making the C16 to C18 jump that overall latency starts to creep up, and that’s solved almost immediately by just going to the next speed grade. Ultimately, DDR4 draws less power, runs cooler, and delivers more bandwidth-per-clock than the venerable DDR3, and it has the scaling headroom in both capacity and raw bandwidth that DDR3 lacked. In other words, it’s a worthy successor.
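Two back-of-the-envelope formulas make the latency and bandwidth trends above concrete. First-word latency in nanoseconds is the CAS count divided by the memory clock (half the MT/s transfer rate); theoretical peak bandwidth is transfers per second times 8 bytes per 64-bit channel times the channel count. Measured AIDA64 numbers land below these theoretical peaks, which is why dual-channel DDR4-3000 can beat quad-channel DDR3-1600 in practice despite nearly identical ceilings.

```python
# First-word latency: CAS cycles at the memory clock (half the MT/s rate).
def latency_ns(transfer_rate_mts, cas):
    return cas / (transfer_rate_mts / 2) * 1000

# Theoretical peak bandwidth: MT/s x 8 bytes per 64-bit channel x channels.
def peak_bandwidth_mbs(transfer_rate_mts, channels):
    return transfer_rate_mts * 8 * channels

# The "CAS latency hole": 3466 C18 is a small step back from 3200 C16,
# and 3600 C18 recovers it.
for rate, cas in [(1600, 10), (2133, 15), (3200, 16), (3466, 18), (3600, 18)]:
    print(f"DDR-{rate} C{cas}: {latency_ns(rate, cas):.1f} ns")

# Dual-channel Skylake vs. quad-channel Ivy Bridge-E, on paper:
print(peak_bandwidth_mbs(3000, 2))  # DDR4-3000, dual channel: 48000 MB/s
print(peak_bandwidth_mbs(1600, 4))  # DDR3-1600, quad channel: 51200 MB/s
```

Note how DDR3-1600 C10 (12.5 ns) and DDR4-2133 C15 (~14.1 ns) sit in the same neighborhood: the entry grades of each technology really do have comparable true latency.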
  12. Intel’s long-awaited Skylake processors are finally arriving in the channel. Details about the new architecture have been distributed a bit unevenly, with reviews going up well before information on the architecture itself. You can play with the Intel Core i5-6600K and Core i7-6700K now, but what are you getting into? As it turns out, Skylake is a little bit different from its predecessors.

What Changed

One of the major differences between Skylake and the preceding Haswell/Broadwell generation is the removal of the Fully Integrated Voltage Regulator. Voltage regulation is off-die and back on the motherboard again, giving motherboard vendors another way to differentiate their products. The upshot is that it should reduce CPU temperatures.

A minor change is a full range of BClk adjustment. The BClk has been decoupled from the peripherals and the PCI Express bus, allowing you to fine tune your overclock. It’s not strictly essential – the multiplier has been getting the job done for years now – but it’s useful for extracting a higher overclock from your CPU and DRAM.

Another major difference we’re finding is high VID. With the move to a smaller, 14nm manufacturing process, we expected lower voltages and tighter ranges, but that’s not the case. Our i7-6700K at stock settings alternates between 1.296V and 1.312V, voltages that would push last generation’s processors close to their limits on air cooling. The Haswell-E-based i7-5960X wound up being a solid overclocker because it tended to start at a low VID – around 1.1V – but the architecture could handle going up pretty high if you were willing to pay for the wattage. With the i7-6700K, we’re left without a whole lot of headroom.

Finally, while the i7-6700K at least appears to run fairly cool (or at least our sample does), per core and package CPU temperatures don’t appear to be as accurate this generation, with our reported per core temperatures idling a few degrees below ambient. 
That said, a good cooler will apparently work wonders. Even if you assume the reading to be a beefy 10C lower than reality, the i7-6700K still runs at a reasonably safe temperature.

Test Platform

CPU: Intel Core i7-6700K (4GHz, turbo to 4.2GHz, 14nm, 8MB L3 Cache)
Motherboard: ASUS Z170 Deluxe (BIOS 0504)
Graphics Card: NVIDIA GeForce GTX 980 (Reference)
DRAM: 4x4GB Corsair Vengeance LPX DDR4-3600 CAS 18 1.35V
CPU Cooler: Corsair Hydro Series H110i GT
Storage: 480GB Corsair Force GT SSD
PSU: Corsair AX1200i 1200W 80 Plus Platinum Digital PSU
Chassis: Corsair Carbide Air 540

All testing was done in a thermally-controlled laboratory with a steady ambient temperature of ~19C.

Power Consumption

We used OCCT 4.4.1 to stability test our overclocks, but note that the temperature monitoring in OCCT hasn’t caught up with Skylake yet. As a result, our power consumption and thermal monitoring was logged through Corsair Link. With all that said, despite a bump to 1.375V on the CPU core, power consumption in OCCT only increased by about 30W; our fully loaded system drew 169W under OCCT load. Despite fairly incremental jumps in voltage to hit 4.6GHz, even 1.45V wasn’t enough to get 4.7GHz stable, and I wasn’t willing to risk the chip past that even though it seemed to have the thermal headroom.

Incidentally, these results map fairly well against the good Dr. Ian Cutress’s experience over at AnandTech. Looking at his results, though, it seems like Intel may be setting unnecessarily high VIDs on the i7-6700K: he was able to solidly undervolt his chips and still overclock, while my own overclocks didn’t require much massaging of the VCore until ~4.5GHz.

Performance

In a weird way, overclocking the i7-6700K is a lot like polishing a cannonball. Intel has made great strides in making Skylake an overclocker-friendly architecture, but performance was already high with Devil’s Canyon, and Skylake just drives it higher. 
We can still get some good performance scaling going from stock to 4.6GHz, but that’s only 500MHz over stock Skylake’s quad-core turbo speed, or about 12.2% faster. Interestingly, bordering on inexplicably, Adobe’s suite gets some good love out of hitting higher speeds, and we’ve found that it also gets some extra juice out of high speed memory.

Conclusion

Intel’s new Core i7-6700K is screaming fast, yet somehow still a little underwhelming. Unlike the past couple of launches (Broadwell, Haswell, Ivy Bridge), we’re able to see top end performance increase generationally, but the crazy high clocks we crave – 4.8GHz and better – still seem to be out of reach. On the bright side, it’s still fairly efficient, and our high end coolers are absolutely up to the task of taming this beast.

Users on Devil’s Canyon systems are probably perfectly fine unless they want the enhanced connectivity of the Z170 chipset, while users with Haswell-E systems are going to be perfectly content. The interesting part of Skylake is how much of a jump from Devil’s Canyon it’s not. In most benchmarks, the i7-6700K is only slightly faster than the i7-4790K. Because it’s not a major architectural leap in performance, and because the Z170 chipset doesn’t offer appreciably more important features than X99, Haswell-E continues to be a viable enthusiast alternative. Skylake and its faster memory subsystem show the most benefit in content creation tasks, though that’s also where Haswell-E’s quad-channel memory bus and extra cores will still go much further.
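The 12.2% figure above follows directly from the clocks: 500MHz over a 4.1GHz quad-core turbo (the all-core turbo implied by the article’s own numbers).

```python
# The scaling figure above: a 4.6GHz overclock against the i7-6700K's
# stock quad-core turbo, which the article's "500MHz / 12.2%" numbers
# put at 4.1GHz.
stock_quad_core_turbo_mhz = 4100
overclock_mhz = 4600

gain = (overclock_mhz / stock_quad_core_turbo_mhz - 1) * 100
print(f"+{overclock_mhz - stock_quad_core_turbo_mhz} MHz, or +{gain:.1f}%")  # +500 MHz, or +12.2%
```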
  13. Intel’s 6th Generation Core processors and platform, known to most of the enthusiast community as “Skylake,” are en route. These new processors bring a new microarchitecture and are manufactured on Intel’s cutting edge 14nm process, but they also need new chipsets and new memory. We’re ready over here at Corsair, but are you?

The 6th Generation Core Memory Controller

These new processors support both DDR4 and DDR3L natively, but there are caveats. While you’re likely to see motherboards come out that support both standards, DDR3L development has been deprecated. Most vendors are focusing on DDR4, which lets the new processors stretch their legs.

Our existing 4-up DDR4 kits should run on the new platform without issue; they may actually even run a little better for the overclockers in the house, since these processors use a dual-channel memory controller instead of the quad-channel one found in the Core i7-5960X and related chips. That said, the XMP 2.0 profiles for those kits were designed for Haswell-E and may necessitate entering timings into the BIOS manually; existing 4-up Vengeance LPX kits should otherwise have no trouble running on the new platform.

Users who want to bring their existing performance DDR3 to Skylake are going to have a much tougher time, though. Because the new memory controller only supports DDR3L at 1.35V and 1.5V (XMP) speeds, the 1.65V required to get DDR3 to hit high speeds rules those kits out. Some vendors are working on making their DDR3L-based Intel 100 Series boards compatible with existing 1.5V DDR3 kits, but expect these to be in the minority.

Finally, only the new K-suffix chips will support DDR4 speeds beyond 2133 MHz. In order to run memory at higher than 2133 MHz on DDR4 or 1600 MHz on DDR3L, you’ll need both a K-suffix chip and a motherboard with a Z170 chipset. What does all of this mean for you? 
Ultimately, if you want to jump to the new platform, it’s going to necessitate a new processor, new motherboard, and new memory.

LGA 1151: Keeping Cooler Compatibility

Intel’s new chips may necessitate a new socket, but Intel has done right by the enthusiast community by sticking with the same mounting system as its previous mainstream platforms. That means any cooler that was compatible with LGA 1150, LGA 1155, or LGA 1156 will be compatible with the new LGA 1151. The shiny new Hydro Series H110i GTX mounts to the new processors using exactly the same hardware it needed on the old ones. Users with existing Hydro Series coolers have nothing to worry about; all of our coolers retain their compatibility with the new socket. So if you’re planning on upgrading, you can keep your cooler.

Haswell-Ready Power Supplies and Sleep States

Way back when the Intel Core i7-4770K and its kin first launched, there was some concern over power supply compatibility. Specifically, those processors added low power sleep states that could cause trouble with some power supplies. The new processors inherit these sleep states and the complications therein. Our new RMi series power supplies are just the ticket for delivering clean, stable power to the new chips while supporting all of their features. The overwhelming majority of our power supplies support these sleep states without issue; however, users with our entry-level CX (600W and below) and VS series power supplies will need to disable the sleep states in the BIOS.

Conclusion

Intel’s new platform brings some big changes to the market. A faster architecture is always appreciated, but with the 6th Generation Core processors and 100 Series chipsets, Intel is bringing a new memory standard – DDR4 – into the mainstream. We’ve already been playing with the new platform internally and we’re very optimistic about it: it’s fast, stable, and powerful, and the new memory controller brings healthy overclocking headroom. 
Performance users will be pleased, and we’ll be sharing more information soon, so stay tuned.
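The memory support rules described above can be condensed into a single check. This is an illustrative paraphrase of the post, not an official Intel compatibility tool.

```python
# A sketch of the Skylake memory support rules described in the post.
# Paraphrased for illustration; not an official compatibility checker.
def kit_runs_at_rated_speed(is_ddr4, speed_mhz, voltage, k_sku, z170):
    """True if a kit can hit its rated speed on the new platform."""
    if not is_ddr4 and voltage > 1.5:
        return False  # 1.65V DDR3 is ruled out entirely
    stock_limit = 2133 if is_ddr4 else 1600
    if speed_mhz > stock_limit and not (k_sku and z170):
        return False  # higher speeds need a K-suffix chip and a Z170 board
    return True

print(kit_runs_at_rated_speed(True, 3000, 1.35, k_sku=True, z170=True))    # True
print(kit_runs_at_rated_speed(False, 2133, 1.65, k_sku=True, z170=True))   # False
print(kit_runs_at_rated_speed(True, 2666, 1.20, k_sku=False, z170=False))  # False
```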
  14. It goes without saying that NVIDIA’s Maxwell architecture is incredibly efficient and extremely powerful, and that this performance is realized most effectively in the GeForce GTX TITAN X and the newly released GeForce GTX 980 Ti. We’ve already overclocked and tested the GM200-based TITAN X and the GM204-based GTX 980, and now it’s time to take a look at what happens when we strap an HG10-N980 and Hydro Series H75 cooler to a reference 980 Ti.

People who remember the GeForce GTX 780 are going to have a bit of déjà vu with the 980 Ti. The 780 was a cut down GeForce GTX TITAN that was able to meet and beat the fully enabled chip when overclocked, and the same is true of the 980 Ti and TITAN X. The GM200 powering the TITAN X is a monstrous chip, with 3072 CUDA cores, 96 ROPs, and 12GB of GDDR5 on a 384-bit memory bus, benefiting from the substantial improvements in compression that GM204 and the GeForce GTX 980 brought with it.

Source: AnandTech

Historically, though, cut-down variants like the 980 Ti can have at least as much going for them as the fully enabled parts, if not more. Modern graphics cards are intensely heavy with shader hardware; AMD’s recently released Fury and Fury X have freakishly high shader counts and correspondingly high shader performance. The same is true of the GM200, so when the 980 Ti gets two of the GM200’s 24 shader clusters fused off, it’s an extremely safe cut to make. Going down from 12GB of GDDR5 to 6GB is an equally smart move. Maybe the unkindest cut is the 16 texturing units that get disabled with the two shader clusters, but that’s substantially ameliorated by the 980 Ti’s higher stock clock speed. The flipside of a cut-down chip is that it can use less power and typically will generate less heat. 
This was part of what made the GTX 780 such a great overclocker; under water it had a very hard time actually hitting its TDP limit. My pair actually hit the limits of the voltage and silicon before the TDP. What makes the 980 Ti such a compelling overclocker is that it’s easier to get 6GB of GDDR5 to run at a high speed than it is to get 12GB (especially when cooling with an HG10-N980), and it’s easier to get 2,816 CUDA cores to run at a higher speed than 3,072. I found in our testing that, unlike the 780 but like the GM204-based 980, you’re largely limited by the 980 Ti’s TDP limit, and that makes liquid cooling it that much more important. We’ve established that more efficient cooling and the resulting lower temperatures mean the hardware requires less power to operate.

The testbed used will be featured in an upcoming build log, but the core components you need to be aware of for these test results are:

CPU: Intel Core i7-5960X @ 4.4GHz, cooled with a Hydro Series H100i GTX
DRAM: 16GB (2x8GB) Dominator Platinum DDR4-2666
Motherboard: ASRock X99E-ITX
OS: Windows 10 Insider Preview Build 10165

The GeForce GTX 980 Ti we’re using is a reference model from Gigabyte, and we’re using a prototype HG10-N980 to cool it along with an H75 cooler.

                Stock     Overclocked
Peak GPU Clock  1215MHz   ~1500MHz
GDDR5 Clock     7GHz      7.9GHz

When overclocking, I raised the TDP limit to 110% and the voltage by the maximum allowed 87mV. You can see these are pretty massive overclocks, and I’m not convinced the 980 Ti can’t go higher still. The GPU core clock is up 23.5% while the memory gets another 12.9%. It’s important to note that the TDP limit has consistently been the bottleneck on our overclock, though; we peak at 1.5GHz but clocks typically hover around one or two bins down.

So what does our beastly overclock earn us? We tested four demanding games at two resolutions: 3440x1440 for comparison purposes in a future blog, and 4K as the target for single GPU performance. 
Games had all of their settings but anti-aliasing maxed out, excepting Grand Theft Auto V, which we trimmed back slightly. At 3440x1440, the performance differences are huge. The 980 Ti essentially jumps another class. Grand Theft Auto V’s average sails past 60fps, while Shadow of Mordor becomes much more playable. When we move to 4K, we find the extra performance even more vital, and every game gets a healthy performance jump. BioShock: Infinite breaks 60fps and Shadow of Mordor easily breaks 40fps. If we break down the performance difference between a stock 980 Ti and one heavily overclocked under an HG10-N980, it’s pretty staggering. As I mentioned before, this is more or less the kind of performance difference that characterizes the gulfs between classes of cards. And it’s worth mentioning that during the entire duration of the testing, the 980 Ti’s GPU temperature never exceeded 61C. The fact is, NVIDIA’s GeForce 980 Ti is a beast with a tremendous amount of gas left in the tank. More than either TITAN X or the GTX 980, the 980 Ti seems to be the ideal mate for our upcoming HG10-N980 bracket.
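The 23.5% and 12.9% overclock figures quoted above fall straight out of the stock and overclocked clocks:

```python
# Where the 23.5% and 12.9% figures in the overclocking table come from.
def oc_gain_pct(stock, overclocked):
    """Percent increase of the overclocked value over stock."""
    return (overclocked / stock - 1) * 100

print(f"GPU core: +{oc_gain_pct(1215, 1500):.1f}%")  # 1215MHz -> ~1500MHz: +23.5%
print(f"GDDR5:    +{oc_gain_pct(7.0, 7.9):.1f}%")    # 7GHz -> 7.9GHz: +12.9%
```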
  15. Massive, monolithic air coolers – and even some smaller, more affordable ones – are still extremely popular in the marketplace. While system integrators have largely moved away from shipping air coolers in their high performance systems – the weight of a massive air cooler can damage the motherboard in shipping – enthusiasts still keep the torch burning. And there are absolutely reasons to go with air cooling: a good entry-level air cooler is typically about $20 less expensive than an entry-level liquid cooler, and has the potential to be quieter to boot. Where air coolers run into trouble is the sheer mass of a high performance model, coupled with a potentially fraught installation process. Air coolers with tall heatsinks can also cause clearance issues with DRAM or even other components.

I tested our current premium line of liquid coolers – the Hydro Series H80i GT, H100i GTX, and H110i GT – against three of the most popular air coolers on the market. This testbed was used:

CPU: Intel Core i7-5960X at stock and overclocked to 4.3GHz with 1.35V on the core
Motherboard: Gigabyte X99-SOC Champion
DRAM: 4x4GB DDR4-2666 Corsair Vengeance LPX
Chassis and Cooling: Corsair Graphite Series 760T with 2x SP140L and 1x AF120 as intakes; an additional SP140L added as an exhaust with air coolers and top-mounted radiators; top cover panel removed for all testing
Graphics Card: NVIDIA GeForce GTX 980 (Reference)
Power Supply: Corsair AX1200i
Fan Control: Corsair Link Commander Mini
Storage: 128GB Force LX SSD

“Competitor 1” retails for ~$90 and is absolutely massive. While installation wasn’t too difficult, it wound up essentially resting against the backplate of the GeForce GTX 980 in our test system, and removing either component wound up being an exercise in frustration. It’s loaded with heatpipes and uses two 150mm fans.

“Competitor 2” is typically found for between $30 and $35 and is extremely popular among budget users. 
It’s not especially large and only has a single 120mm fan on it, but the design is efficient, and prior to coming to Corsair it was my personal go-to.

“Competitor 3” has an MSRP of $89.99 and is somewhat rarefied on American shores but very popular in Europe. It features a pair of 120mm fans, but installation was so involved that I had to actually remove the testbed motherboard from its chassis – and the Graphite Series 760T isn’t exactly cramped.

And the basic specifications for the H80i GT, H100i GTX, and H110i GT:

          Hydro Series H80i GT    Hydro Series H100i GTX   Hydro Series H110i GT
Radiator  120mm x 49mm            240mm x 25mm             280mm x 25mm
Fans      2x 120mm SP120L PWM     2x 120mm SP120L PWM      2x 140mm SP140L PWM
MSRP      $99.99                  $119.99                  $129.99

During testing, the system was idled for 15 minutes to reach a stable temperature, then stress tested for 15 minutes with OCCT. The average of the eight peak core temperatures was recorded. Ambient temperature remained roughly 19C in the lab.

I started testing with the CPU cooler fans set to run at 100% and the processor at stock speeds. The Intel Core i7-5960X is rated to dissipate 140W at stock speed, which is basically where the Intel Core i7-4790K stops. That said, the 5960X also has better thermal interface material and lower heat density; this is why reviews of coolers on an i7-4770K or i7-4790K are often less reliable, as those chips don’t adequately stress the coolers and are at the mercy of their own heat transfer issues long before a cooler’s real potential is revealed.

Right away you can see that our coolers offer a minimum of 2.8C better performance at this low heat load. You’ll see the H80i GT and H100i GTX trading blows, too; this is normal. These two coolers actually have roughly the same surface area, but the H100i GTX spreads it out, requiring less static pressure from the fans. 
This isn’t relevant when you’re running the fans full bore, but when you run them at lower speeds, the H100i GTX separates from the H80i GT. Running the fans at low speeds only costs you about 3C on the Hydro coolers, but the air coolers all run hotter and take a harder hit. Part of the explanation is the active cooling in a Hydro cooler versus a conventional air cooler: the air cooler relies on heat pipes to transfer heat into the fin array, while a Hydro cooler has a pump that actively moves coolant through the radiator and waterblock.

The fact remains that almost all of these coolers are still overqualified for a 140W processor. The i7-5960X is notorious for drawing massive amounts of power and generating tremendous heat when overclocked, so we’ll do just that and ramp power consumption up another ~110W.

With fans maxed out, the Hydro coolers retain their lead while two of the air coolers start to seriously buckle, running more than 10C hotter than the H100i GTX. We’re at the point where we’re reaching the limits of what the i7-5960X can transfer to the cooling device, and certainly the limits of two of the air coolers. Even if we slow the fans down, the Hydro coolers are still able to provide silent but efficient performance for the i7-5960X at a stable 4.3GHz, while Competitors 2 and 3 are actually speed throttling. Air cooling’s best and brightest can get within striking distance of our H80i GT, but typically trails by between 2C and 3C.

The answer is pretty clear: Hydro Series liquid cooling starts where air cooling gives up, offering quiet and efficient performance where the competition stops. And end users concerned about potential leaks need not worry; all of our coolers are leak tested before they leave the factory, and if one of our coolers does leak inside your system during its five year warranty period, we’ll warrant against damage to your components on a case by case basis. 
Corsair Hydro series liquid cooling offers a whole lot of upside and very little in the way of drawbacks. If you haven’t checked it out yet, you can visit our cooling page here.
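One rough way to compare coolers across the two heat loads in this test is effective thermal resistance: degrees of rise over ambient per watt dissipated, where lower is better. The ambient temperature and wattages come from the article; the core temperatures below are hypothetical placeholders, not our logged results.

```python
# Effective thermal resistance: temperature rise over ambient per watt.
# Ambient (19C) and heat loads (~140W stock, ~250W overclocked) come
# from the article; the core temperatures are hypothetical placeholders.
def thermal_resistance_c_per_w(core_c, ambient_c, load_w):
    """Degrees C of rise over ambient per watt dissipated (lower is better)."""
    return (core_c - ambient_c) / load_w

ambient = 19.0
print(f"{thermal_resistance_c_per_w(52.0, ambient, 140):.3f} C/W")  # hypothetical stock run
print(f"{thermal_resistance_c_per_w(78.0, ambient, 250):.3f} C/W")  # hypothetical overclocked run
```

Normalizing by wattage like this is what lets you compare a 140W stock result against a ~250W overclocked one on equal footing.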
  16. NVIDIA’s Maxwell architecture is a wonderfully impressive piece of engineering for efficiency geeks, and it reaches near-apotheosis with the GM200-powered GeForce GTX TITAN X. This is an architecture that has very clearly been tailored and tuned to maximize gaming performance per watt, and while it loses some steam on the compute side to AMD’s more flexible but also more power hungry GCN architecture, it’s very hard not to be at least a little impressed by the monstrous TITAN X. Better still, across the board, NVIDIA’s Maxwell cards have plenty of gas left in the tank at release – headroom a hypothetical HG10 would certainly help take advantage of. Can’t imagine why anyone would want to produce something like that. But since this mythical sea creature doesn’t exist, we have to look at how a GTX 980 and TITAN X overclock under a reference cooler or under some kind of heretofore unannounced watercooling apparatus.

I’ve had a decent amount of experience overclocking the GTX 980 (under reference cooling and under water) and the TITAN X (again, under reference cooling and under water), and there are some modest differences – especially with the TITAN X – compared to the Kepler generation cards. Last generation’s Kepler cards could have their overclocks tested fairly reliably with just 3DMark Fire Strike Extreme: if your VRAM clock was unstable, the sparks in Graphics Test 2 would flicker, and if your VRAM and/or GPU clock were unstable, the NVIDIA driver would crash. That’s not happening with the 980s or TITAN X. To be sure, completing a run of 3DMark Fire Strike Extreme or Ultra still typically means you’re about 75% certain of a stable overclock. But the TITAN X especially can seem deceptively stable while having problems, and that’s why you expand your stability testing a little more – and pay attention to the testing. Likewise, my GTX 980s under water will rocket through 3DMark Fire Strike Extreme and then artifact or crash in something else. 
If you’re overclocking either card, it’s smart to find your top GPU overclock first, then your top VRAM overclock, and then combine the two. You may have to notch one down; given how efficient NVIDIA’s memory compression is on Maxwell, the VRAM overclock is the safer one to reduce. As a stability testing procedure, I recommend these steps: First, 3DMark Fire Strike Extreme will catch really unstable overclocks; assuming it doesn’t crash, artifacting will be most prominent in Graphics Test 2. Next, BioShock Infinite has an automated benchmark that’s been very handy. Run the benchmark at the highest settings you can, and pay attention to the sun shafts coming through the glass at the very beginning; if your overclock is unstable, these will artifact. Tomb Raider also has an automated benchmark. Run it at the highest settings you can, and make sure TressFX is enabled. The shadows cast by Lara’s hair will flicker in an SLI system; that’s normal. But if your overclock is unstable, black triangle artifacts will materialize out of Lara’s hair. Finally, Far Cry 4. Seriously, just play the game for a minute or so. If your overclock is unstable, it’ll crash in a heartbeat. Using these testing methods, I’ve been able to run these cards at high speeds pretty reliably and without issue. So what can you typically expect as far as overclocks go? What kind of performance increases are we looking at? First things first: either water cooling or simply running your cooler at its highest speed will allow your card to maintain higher clocks for longer periods of time. Cooling the power circuitry efficiently also results in the card drawing less power in general, because the VRMs don’t heat up as much and thus don’t have to work as hard; for you, this means you’re less likely to hit the TDP wall, and a high overclock is easier to sustain. 
On a stock-clocked GeForce GTX 980, you can probably get your Core up to +200 before instability sets in, which results in peak clocks just north of 1450MHz. Because the 980 has a narrower memory bus and less VRAM overall than the TITAN X, you can also get some additional mileage out of the VRAM. Ballpark +300 on the Memory (for 7.6GHz GDDR5), though most of the 980s I’ve played with have been able to go up to +500 (for an even 8GHz GDDR5). The higher you push your core clock, the more benefit you’ll get out of pushing the memory clock; combined, I’ve been able to get 15%-20% higher performance out of the 980. The TITAN X is about 50% more card in general than the 980, so it has a bit less headroom – but headroom it still has. +200 Core still seems to be the way to go, resulting in a peak clock of ~1420MHz. But I wouldn’t touch the memory. If you remember Kepler, 7GHz of GDDR5 on a 384-bit memory bus was next to impossible to saturate. Factor in Maxwell’s vastly improved memory compression, and the performance gains from overclocking the memory become very minimal. If you want that last frame or two per second, you can try it, but the difference is imperceptible. Overclocking the TITAN X can get you between 7% and 15% more performance. NVIDIA’s Maxwell-based cards are performance monsters, and once again NVIDIA has left plenty of gas in the tank for us to play with. We lose a lot of Maxwell’s trademark efficiency, but not all of it, and in exchange we gain a very respectable amount of performance. Of course, if you put it under water, suddenly your temperatures look like this over the course of two runs of Unigine Valley: But why would anyone enable something like that?
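To make the arithmetic behind those offset numbers concrete, here’s a quick sketch in Python. It assumes the tuning tool applies the “+N” memory offset to the doubled 3505MHz clock the way tools like MSI Afterburner display it, with GDDR5’s effective data rate being twice that figure; the stock peak-boost value below is a typical observed number, not a specification:

```python
# Sketch of the GTX 980 offset arithmetic quoted above.
# Assumptions: the "+N" memory offset applies to the displayed 3505MHz
# clock, and the effective GDDR5 data rate is double that figure.
BASE_MEM_MHZ = 3505          # stock GTX 980: ~7.0GHz effective GDDR5
STOCK_PEAK_BOOST_MHZ = 1253  # typical observed peak boost; varies per card

def effective_gddr5_ghz(offset_mhz: int) -> float:
    """Effective GDDR5 data rate (GHz) for a given memory offset."""
    return (BASE_MEM_MHZ + offset_mhz) * 2 / 1000

print(effective_gddr5_ghz(300))    # ballpark 7.6GHz
print(effective_gddr5_ghz(500))    # an even 8GHz, give or take
print(STOCK_PEAK_BOOST_MHZ + 200)  # core +200: just north of 1450MHz
```

The exact numbers vary card to card, but the shape of the math is why a +500 memory offset gets quoted as “an even 8GHz.”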
  17. At Corsair, we make all kinds of stuff, but at our core, at our heart, we’ve been a memory company since the beginning. So when someone comes up with what appears to be a fantastic solution for putting the enormous amounts of memory modern machines can support to good use, we’re interested. With that in mind, I took DIMMDrive for a spin. It’s been garnering very positive reviews on Steam, and the $29.99 buy-in isn’t too unreasonable. I tried it on two different testbeds:

Testbed #1: Intel Core i7-4790K @ 4.5GHz / 2x8GB Vengeance Pro DDR3-2400 / ASUS Z97-WS / GeForce GTX 980 / 240GB Force LS SSD / Hydro Series H110i GT / HX750i / Obsidian 450D
Testbed #2: Intel Core i7-5960X @ 4.4GHz / 8x8GB Dominator Platinum DDR4-2400 / ASUS X99-Deluxe / 2x GeForce GTX 980 / 512GB Force LX SSD + 4x 480GB Neutron GTX SSD in RAID 0 / Custom liquid cooling loop / AX860i / Obsidian 750D

What DIMMDrive does is provide a smart front-end between Steam, its games, and an old-school RAM drive. You load it up, toggle which games you want loaded into the drive, and then toggle DIMMDrive on. And therein lies your first problem: you’ve just front-loaded your loading times. The games you’re loading have to copy – in their entirety – to the RAM drive, and that copy continues to be gated by the speed of your storage. The second issue is the footprint of the modern triple-A title. While DIMMDrive offers some small allowance for this by letting you choose which individual files in a game you want copied to the drive, the solution is a clunky one. 
But look at the storage requirements for these modern games:

Battlefield: Hardline – 60GB
Battlefield 4 – 30GB
Far Cry 4 – 30GB
Counter-Strike: Global Offensive – 8GB
Elder Scrolls V: Skyrim (assuming no mods) – 6GB
Watch_Dogs – 25GB
The Witcher III: Wild Hunt – 40GB
Grand Theft Auto V – 65GB
Dota 2 – 8GB
World of Warcraft – 35GB

Even for users playing less intensive games, you’re still looking at a minimum of 16GB of system memory just to have enough to handle the game’s footprint. And how does it work in practice? I tried using it with a few games that seemed like they might benefit from faster access times: Sid Meier’s Civilization V has basically no loading time during the game, but takes an eon to load initially. Wolfenstein: The New Order uses id Tech 5’s texture streaming and thus by its nature desperately needs all the bandwidth it can get. Even old-school Left 4 Dead 2 tends to take a while to load. The biggest problem was that whether I loaded these games off of my RAIDed SSDs or just the one, the longest load time was always, by far, simply copying the game into memory once DIMMDrive was enabled. Switching from a mechanical hard disk to a single SSD improves virtually every aspect of the computing experience and brings game load times in line, but going beyond that to the RAID or the DIMMDrive just doesn’t feel any faster. The most noticeable aspect of DIMMDrive was how long it took to load a game into RAM in the first place. Beyond that, Wolfenstein: The New Order would simply crash when I tried to run it from DIMMDrive, so I wasn’t able to see if DIMMDrive could at least improve the texture pop-in any. So why doesn’t DIMMDrive make a home-run impact on gaming and loading times? Quad-channel DDR4-2400 is, at least synthetically, capable of being almost 100x faster on read than a good SSD. 
But the answer is more complex than that, because when a game loads, it isn’t simply copied from storage into system memory. Many modern games already use system memory intelligently to smooth out load times in the first place. From there, data needs to be copied from either system memory or system storage into the graphics card’s video memory, and that transfer is going to be gated by the PCI Express interface, among other things. A PCI Express 3.0 x16 slot is capable of transferring a ballpark 16GB/s. A quad-channel memory bus will outstrip that in a heartbeat, and even a more mundane dual-channel DDR3-1600 configuration is still capable of ~25GB/s. Even then, though, actually copying and moving data between system memory, system storage, and across the PCI Express bus is only a part of what a game does when it’s loading. There are countless other operations to consider: compiling shaders, connection speed and latency for online games, and so on. My ultimate point is that by the time you take all of these other operations into account, the amount of time DIMMDrive might save you could be a few seconds at best – and it may actually cost you more time than it saves, given what’s required to copy the entire game into system memory in the first place. If you’re on mechanical storage, DIMMDrive could definitely demonstrate an improvement, but it would still require a substantial investment in DRAM. Ultimately, getting value out of DIMMDrive – assuming you’re on a platform that supports enough memory to make it viable for larger games – requires greater expense and more complexity than simply buying a high capacity SSD. While I’d love to sell you our enormous memory kits, and I continue to recommend 16GB of system memory as a baseline for those who can afford it, the more sensible option continues to be solid state storage.
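The raw numbers behind that comparison can be sketched with some back-of-the-envelope math. All figures are theoretical peaks, not measurements; the ~985MB/s-per-lane PCIe 3.0 figure accounts for 128b/130b encoding, and the SSD figure assumes a good SATA drive:

```python
# Theoretical peak bandwidth of the links a game's data crosses while loading.
def dram_bandwidth_gbs(mt_per_s: int, channels: int) -> float:
    """DRAM bandwidth in GB/s: transfer rate x channels x 8-byte channel width."""
    return mt_per_s * channels * 8 / 1000

quad_ddr4_2400 = dram_bandwidth_gbs(2400, 4)  # 76.8 GB/s
dual_ddr3_1600 = dram_bandwidth_gbs(1600, 2)  # 25.6 GB/s
pcie3_x16 = 985 * 16 / 1000                   # ~15.8 GB/s after encoding
sata_ssd = 0.55                               # ~550 MB/s, a good SATA SSD

# The RAM drive's synthetic read advantage over an SSD is enormous...
print(quad_ddr4_2400 / sata_ssd)              # well over 100x on paper
# ...but anything headed to the GPU is still squeezed through PCI Express:
print(min(quad_ddr4_2400, pcie3_x16))
```

This is why the synthetic "100x faster" figure never survives contact with a real load screen: the narrowest link in the chain, not the widest, sets the pace.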
  18. While 4K displays are obviously making huge waves for gaming on the PC, we’re also in an ecosystem where the possibilities for how we even display our games are better than they’ve ever been. 4K gaming as it stands still requires a tremendous amount of horsepower – two GeForce GTX 980s or an AMD Radeon R9 295X2 are basically required – but there are alternatives worth investigating. We have tremendous flexibility as far as aspect ratio, panel quality, and even add-ons like G-SYNC and FreeSync. If you want, you can even just stretch your game across multiple displays. I decided to investigate performance and the overall gaming experience across several different setups. To me, the four big options right now in terms of aspect ratio alone are 16:9 (1080p or 2560x1440), multi-monitor surround, 21:9 ultra-wide, and 4K (technically 16:9, but a different animal due to its high pixel density). First, experientially, it’s basically a draw between 21:9 and 4K. My 21:9 experience is colored tremendously by the fact that we used a Samsung S34E790C, a 34” curved bear of a monitor that nonetheless offers almost all of the benefits of triple-monitor gaming with none of the drawbacks. My bottom line is that if you want something immersive, one of the curved 34” displays running at 3440x1440 is going to give you the right mix of detail, picture quality, and overall field of view. Internally, we have people split between these two standards. A large 4K display offers something 21:9 displays currently can’t: substantially higher pixel density, resulting in a much more detailed image. If raw detail is what you want and you’d rather stick with the conventional 16:9 aspect (and avoid any potential compatibility issues with wider aspects), then you should be shopping for a 4K panel. There is, however, another reason why 21:9 is picking up a bit of a following over 4K: 21:9 displays simply require less horsepower than 4K does. 
If you look at the number of pixels that have to be rendered each frame, a garden-variety 2560x1080 display only requires about 33% more than a standard 1080p display does. Even the “deluxe” 34” 21:9 panel is still only about 2.4x the rendering load of 1080p, a far cry from the demands of 4K. So how does that translate to the gaming experience? Just a single GeForce GTX 980 is enough to keep framerates well over 40fps up to the 34” panel’s resolution of 3440x1440; things are exceptionally comfortable at 2560x1080, where you’d likely be fine with even a single GTX 970 or R9 290. So if you’re not looking to break the bank on graphics hardware, you could arguably invest more in your display. Moving to a pair of GTX 980s, you’re now looking at essentially what I’d consider the bare minimum for comfortable gaming at 4K. Any other display setup runs beautifully on these two, but they should, since this is essentially the most performance you can buy before you start going into the truly treacherous waters of triple- and quadruple-card configurations. BioShock Infinite is always going to be an oddball with low minimum framerates, but everything else is buttery smooth once you drop below 4K. From a practical perspective, a non-TN 4K panel is still going to cost you north of a grand, at which point you’re in the ballpark of these supersized 21:9 displays. Personally, I find the Samsung monitor infinitely more appealing than a TN-based 4K or any of the larger 4K displays, as it offers more immersion without going into the weeds the way triple-monitor surround can. Which direction would you go?
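The pixel counts behind those percentages are easy to verify with a quick sketch:

```python
# Rendering load of each display option, in raw pixels per frame,
# relative to a standard 1920x1080 panel.
RESOLUTIONS = {
    "1080p (16:9)": (1920, 1080),
    "1440p (16:9)": (2560, 1440),
    "2560x1080 (21:9)": (2560, 1080),
    "3440x1440 (21:9)": (3440, 1440),
    "4K UHD (16:9)": (3840, 2160),
}

BASE = 1920 * 1080
for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w * h / BASE:.2f}x the pixels of 1080p")
```

2560x1080 lands at 1.33x, 3440x1440 at roughly 2.39x, and 4K at a full 4.00x – which is exactly why the big 21:9 panels sit in such a comfortable middle ground for GPU load.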
  19. When we first took a look at the Intel Core i7-5960X octa-core processor based on Intel’s Haswell-E architecture, we found a processor that seemed to be almost entirely limited by voltage and cooling. The i7-5960X has a staggeringly large 356mm² die, nearly twice the size of the die inside Intel’s Devil’s Canyon i7-4790K. It’s also soldered to the heatspreader instead of relying on thermal paste. What all of this comes down to is lower heat density and better heat transfer: Haswell-E was made for liquid cooling. We first tested the i7-5960X under a Hydro Series H100i with fans set to push-pull and were consistently capped at 4.4GHz. Three chips hit 4.4GHz at 1.35V; the fourth needed only 1.3V but was still unable to do 4.5GHz. Load temperatures under OCCT were in the mid-80s, and the H100i was running at full bore. But recently we upgraded the Yamamura 750D build to feature the i7-5960X. The consensus has historically been that a custom liquid cooling loop will always outperform an all-in-one liquid cooler. The EK Supremacy EVO waterblock we used costs almost as much as a Hydro Series H80i on its own, let alone the D5 Vario pump and all-copper radiators. Yamamura has a ridiculous amount of heat capacity. Does going under custom cooling afford us any more headroom? Can we hit that 4.5GHz point? Actually, no. The i7-5960X is already pushing 300W on its own at 4.4GHz, requiring a massive boost in voltage just to hit that point. Despite a massively powerful pump, a cumulative 120mm x 840mm of radiator surface area, and an expensive, high-performance waterblock, the voltage required to hit 4.5GHz pushes the i7-5960X essentially past its breaking point. We’ve seen this on other processors, too: there’s a point where the processor simply generates heat faster than it can be transferred to the waterblock. 
When you’re dealing with something like a GPU that has no lid, lower heat density, and nearly direct contact with the waterblock, you can get a tremendous performance boost by putting it under water. There’s very little impeding heat transfer from the die to the cooler. But CPUs are different. Heat density is higher, and direct contact isn’t possible (unless you de-lid a Haswell/Devil’s Canyon). It’s respectable that the i7-5960X can dissipate ~300W before it runs into a wall, but it does hit that wall. The result is thermal performance under a custom loop that parallels the thermal performance you might see under a closed Hydro Series H110i GT. Custom loops like Yamamura are beautiful, but with high performance closed loop coolers and brackets like HG10 on the market, they’re increasingly difficult to justify. Does putting an i7-5960X under water make sense? Absolutely. But under a custom loop? That’s a much harder sell.
  20. Recently, the actual computer part of the Obsidian Series 750D “Yamamura” custom water-cooled system began having issues with random shutdowns and reboots, as detailed in this earlier blog. Ordinarily those types of problems are a frustration, but when your system looks like this… …the increased difficulty of swapping any parts out, potentially requiring you to actually drain the loop entirely, may even make you question why you built your system up like this in the first place. However, as any die-hard builder knows, part failure always has a silver lining: an excuse to upgrade. And that’s what I did, giving me a chance to rectify a few pain points in the original build, things I felt like I could’ve done or specced better. The system was already close to unimpeachable, but we can certainly do more. Swapping the CPU, Motherboard, and DRAM

Before: Intel Core i7-4790K (4GHz, 4 cores / 8 threads, 84W TDP) / ASUS Z97-WS / 4x8GB Dominator Platinum DDR3-2400 10-12-12-32 @ 1.65V
After: Intel Core i7-5960X (3GHz, 8 cores / 16 threads, 140W TDP) / ASUS X99-DELUXE / 8x8GB Dominator Platinum DDR4-2400 14-16-16-31 @ 1.2V

The only way to “upgrade” past Intel’s monstrous Core i7-4790K (overclocked to 4.7GHz in our build) is to change your platform entirely, so that’s what I did. While the i7-4790K tops out at between 120W and 130W when overclocked, the i7-5960X starts there and pulls considerably more when overclocking is applied. But that’s fine: Yamamura enjoys a custom liquid cooling system with massive heat capacity. Changing the platform means swapping to the even more capable ASUS X99-DELUXE motherboard as well as jumping from DDR3 to DDR4. Latency does increase, but so do capacity and overall bandwidth. It’s a net gain, and our DDR4-2400 kit even includes an extra XMP profile that pushes the voltage to 1.35V and speed to 2666MHz. Incidentally, due to the spacing of the video cards, we actually lose a little bit of bandwidth to the pair of GeForce GTX 980s. 
The slot arrangement results in the bottom GTX 980 only getting PCIe 3.0 x8 instead of the full sixteen lanes, but thankfully this produces virtually no measurable decrease in performance. Upgrading the Storage A lot of people didn’t care for the way the LG Blu-ray burner broke up the front of Yamamura, and I can see why. At the same time, I also found myself needing a little more storage for a documentary I’m editing in my off hours. Thankfully, there’s a way to serve both masters, and it comes from SilverStone. SilverStone produces a 5.25” drive bay adapter that can fit a slimline, slot-loading optical drive and four 2.5” drives. By purchasing a slimline, slot-loading Blu-ray burner and installing a spare 512GB Force LX SSD we had in house, I was able to clean up the front of the case and increase storage. Fingerprints notwithstanding, it's a lot cleaner than it was before. Improving the Cooling and the Bling While the original build called for a Dominator Airflow Platinum memory fan, we weren’t able to find clearance for one owing to the ASUS Z97-WS’s layout. Happily, the ASUS X99-DELUXE doesn’t have this problem, and that meant we could add two Dominator Airflow Platinums. Because they’re PWM controlled, they’re a perfect match for our old Corsair Link Cooling Node, and because they use the same RGB LED connector as our other lighting kits, a single Corsair Link Lighting Node is able to control them. The end result isn’t just increased bling: even at minimum speeds, the airflow from the fans helps keep the DDR4 cool (with individual DIMMs peaking at just 38C), while also shaving at least 10C off of the power circuitry surrounding the memory slots. Getting fresh airflow into the motherboard’s VRMs never hurts. Yamamura 1.5 I was immeasurably thankful that I didn’t have to drain the loop to make these upgrades, thus reaffirming my belief in flexible tubing. 
Hard acrylic tubing is frequently argued to be the way to go in modern builds, and plenty of people say it looks nicer, but it’s far less serviceable. I use this computer on the daily, and I am possessed by a relentless appetite for tweaking the hardware. Given just how bloody fast the Yamamura is now (and stable, mercifully), I don’t foresee making any major changes to the system until Skylake and Big Maxwell at the earliest, at which point there may be a newer, more exciting chassis to move into…
  21. Building an expansive, gorgeous custom liquid cooling loop in your PC has its perks. For one, it looks awesome. It also gives you the opportunity to maximize and perfect the cooling capacity of your enclosure. That, in turn, gives you the opportunity to maximize and perfect the performance of your system. And honestly, again, it looks awesome. You can show it to people who don’t even know anything about computers and get their eyes to bug out. Of course, this is all predicated on the idea that the system works. That the motherboard, graphics cards, memory – that everything is functioning properly. And for a little while, my monstrous Obsidian Series 750D build, “Yamamura,” was working perfectly. For a little while. Then the random shutdowns and reboots came. And the POST loops. Killing the overclock on the i7-4790K seemed to largely solve the problem, but it’s hard to feel proud of your monster of a computer when the CPU is running at stock under a custom loop. And this is where the custom loop becomes a problem. When troubleshooting this… …there’s only so much you can test before things get…inconvenient. The DDR3 was known good and wasn’t being pushed beyond XMP, so it was ruled out early. My boss handles power supplies, so I opted to blame that last. The graphics cards are part of the loop and can’t be removed without draining the whole thing, so that necessitated basically hoping the cards weren’t the problem. So the first thing I did was swap out the CPU for a known good one: an i7-4770K that had barely been overclocked. Swapping out the CPU was frankly very easy; you just remove the waterblock from the CPU socket, swap the chip, and reseat the block. Unfortunately, that didn’t solve the problem. Since I’m used to seeing POST loops as a motherboard problem, and since the board I was using had been having USB initialization issues pretty much since the get-go, it seemed like the board was the culprit. Uh oh. 
As it turns out, swapping out the motherboard was easier than I’d expected, and I took the opportunity to switch from Haswell to Haswell-E and give the loop a chance to really stretch its legs. Due to the long, flexible tubing and arrangement of the loop, I was able to “fold” the CPU block and graphics cards over the pump and reservoir and free up the motherboard. An alternative (and arguably smarter) route would’ve been to install spill-proof quick-release connectors around the video cards, as I had in my previous system. This would’ve isolated the graphics cards in the loop, allowing me to remove them entirely, and even replace them without draining the loop. But folding works in a pinch. Some cabling behind the motherboard tray had to be snipped and rerouted, and the 8-pin CPU line needed some extra give, but I was able to swap in the new board, CPU, and DDR4 memory. It’s not perfect. Because of the spacing of the graphics cards, one is running at PCIe x8 instead of x16, but thankfully that’s a pretty negligible difference. And imagine my delight when the system booted up! It was working perfectly fine, everything was going great, and then…it shut down again. Now if you look at that photo above, you’ll see the PSU cables are crammed very tightly between the AX860i and the bottom radiator. Unfortunately, that AX860i was the only component left that we could replace without draining the loop. …and so it was replaced. And sure enough, swapping in another AX860i actually did correct the random shutdowns and reboots. It’s hard to say what went wrong, but even the best power supplies can have bad days, especially when they were randomly picked up from the tech marketing lab and likely exposed to all kinds of hilarious and awful circumstances. Of course, with all of these changes to the system come new opportunities to upgrade, test, and improve performance…
  22. This is the fifth and final part of our build log for the Obsidian Series 750D “Yamamura.” The previous four chapters: Part Selection, Assembly, Overclocking, and Optimization. There are essentially four reasons to build a custom liquid cooled system:

The pleasure of constructing something with your hands.
The unique aesthetic of a liquid cooled system.
The potential for improved performance as a result of the larger heat capacity.
The ability to quiet or silence an extremely high performance system.

On this front, how did the Obsidian Series 750D “Yamamura” build do? The pleasure of constructing something with your hands. Yamamura proved to be a more difficult build than I expected. While the 750D is uniquely well suited to high performance liquid cooled builds, cramming a third radiator into the bottom of the case resulted in clearance problems for deeper power supplies as well as forcing the pump/reservoir to be mounted to the motherboard tray instead of the bottom. The 750D has very healthy dimensions, but we're still trying to cram a lot into it. Thankfully, the AX860i power supply turned out to be an all-star. The reduced depth coupled with high capacity and best-in-class performance allows a power supply with only 160mm of depth to handle the demanding job of powering multiple high performance overclocked components. That, and we get to keep the third radiator. As a result of having to cut two fans and the Dominator Airflow Platinum, though, I wound up being able to go down to just one Corsair Link Commander Mini unit. This is fortunate, as the NZXT USB 2.0 header splitter simply didn’t play well with the USB controller on the ASUS Z97-WS (note that the USB controller on my board seems to have issues resolving hubs in general). The unique aesthetic of a liquid cooled system. This is hands down the most beautiful liquid cooled rig I have ever built. 
The 750D’s large side window allows you to really see and appreciate the glowing XSPC waterblocks, Dominator Platinum memory kit with lightbars, blue sleeved cabling, SP120 LED fans, and the XSPC Photon 170 reservoir. My girlfriend had worked with me on my last build and was skeptical that this one would look better, but Yamamura is a gorgeous beast and excellent showpiece. The Corsair Link lighting kit set to white allows all of the blue components to really pop. I have found over and over again that even people who aren’t die-hard DIY enthusiasts can still be impressed by a beautiful, well-built system with a custom loop. The potential for improved performance as a result of the larger heat capacity. While I wasn’t able to reach the mythical 4.8GHz on my Intel Core i7-4790K, nor was I able to get higher overclocks on my GeForce GTX 980s even with modded BIOSes, the waterblocks on the 980s do their job with aplomb. They may hit the same overclocks that they did on air, but those overclocks are much more stable now. XSPC's Razor GTX 980 waterblock does a stellar job of keeping every heat generating component incredibly cool. I feel better being limited by the silicon more than by the heat, and I now have two GeForce GTX 980s that spend their lives pushing 8GHz on the memory and 1.5GHz on the GPU. I’m looking forward to putting them through their paces at 4K. The ability to quiet or silence an extremely high performance system. Until we produce the greatest silent case the world has ever seen, one that effectively marries best-in-class airflow with smart acoustic design, the best way to make a quiet system is by controlling airflow. Having twelve fans and a pump decoupled from the chassis allows me to run the Yamamura extremely quietly. No high end build is complete without the Corsair Link Commander Mini. 
By employing a Corsair Link Commander Mini, I can run all of the fans at minimum speed until absolutely necessary, and this much heat capacity takes a very long time to reach a “steady state.” The result is that Yamamura is barely audible when running and certainly in no way obtrusive. Conclusion I actually have one regret as far as the Yamamura goes, and it’s a semi-silly one: I wish I had gone with Haswell-E instead of Devil’s Canyon. It arguably would’ve pushed the AX860i to its limits, but even a 4.7GHz i7-4790K feels oddly underpowered and modest in a build like this. An i7-5960X or even an i7-5930K, when overclocked, can start to really tap into the extra cooling potential of more elaborate cooling systems, while the i7-4790K can reasonably be handled by something as modest as a Hydro Series H75. Somehow I'll get by. With all that said, though, the system is still bracingly fast and handles just about anything I throw at it. I can’t complain too much. Except about my power bill.
  23. This is the fourth in a series of blogs about the Yamamura Obsidian Series 750D build. The first details component selection and can be found here; the second details the assembly and can be found here; and the third details overclocking and system performance and can be found here. While one big reason to assemble a custom liquid cooling loop is the ability to massively increase your system’s capacity for dissipating component heat, and thus improve overclocking headroom (or at least clock stability with NVIDIA GPUs), another, less talked about benefit is the potential to run your system far more quietly. This is perhaps one of the biggest reasons to “overdo it” on cooling capacity: Yamamura would have more than adequate cooling with just the single 360mm radiator and three fans. Instead, the massive cooling capacity of the Yamamura’s loop and the use of push-pull fans on two of the three radiators let us substantially reduce the amount of noise the system produces. The D5 Vario pump, potentially one of the loudest components in the system, is largely decoupled from the chassis and produces virtually no noise at its midrange setting. To keep noise levels under control, the twelve SP120 LED radiator fans are connected to a single Commander Mini (note that this does exceed spec on the fan headers and comes dangerously close to exceeding the overall unit spec) using fan header splitters. The loop is also arranged with heat-generating components essentially isolated between radiators. The heat generated by the Intel Core i7-4790K immediately feeds into the 360mm radiator in the top of the enclosure, where it’s dissipated before the coolant flows into the two GeForce GTX 980s. After that, the coolant flows through both 240mm radiators in sequence before heading back into the reservoir. This radiator-component arrangement lets me isolate fan groups to compensate for the heat generated. 
Fans are controlled in pairs, and I used Corsair Link to program very permissive fan curves: these fans all run at their minimum speed (~750RPM) until the CPU exceeds 75C or the GPUs exceed 70C. Meanwhile, the almost ornamental SP140 in the rear exhaust of the Yamamura has had its voltage reduced to just 5V: enough to spin and move some air, but not enough to generate any real noise. How does it work out in practice? Incredibly well, actually. Yamamura’s massive cooling capacity and push-pull on two of the three radiators means the fans frankly never have to spin up. It’s only when doing extreme stress testing on the CPU that any of the fans spin up; otherwise, everything runs at its lowest speed and component temperatures remain very reasonable. I have to be seriously stressing the GPUs to get them to break 55C; the CPU jumps around a little more due to the high overclock, but spends most of its time under 60C. This is why I maintain that the Commander Mini’s true mission in life is to handle fan control for a custom loop. A PWM-controlled pump like Swiftech’s MCP35X could be operated by the Commander Mini and kept at its lowest speed until heat in the loop reaches a certain point, at which time it can kick into a higher gear. In the next and final chapter of the Yamamura build log, we’ll do a postmortem and look at what worked – and what didn’t.
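The permissive fan-curve behavior described above can be sketched as a simple function. This is a minimal illustration, not Corsair Link’s actual logic (curves there are configured in its GUI), and the ramp above the thresholds is my own assumption:

```python
# Minimal sketch: radiator fans idle at their floor (~750RPM) until the
# CPU passes 75C or the GPUs pass 70C, then ramp toward full speed.
MIN_RPM, MAX_RPM = 750, 2350  # SP120 floor and an assumed ceiling

def fan_rpm(cpu_temp_c: float, gpu_temp_c: float) -> int:
    overshoot = max(cpu_temp_c - 75, gpu_temp_c - 70, 0)
    if overshoot == 0:
        return MIN_RPM  # the loop's heat capacity soaks up everything else
    # Assumed ramp: linear to full speed over the 20C past the threshold.
    return min(MAX_RPM, int(MIN_RPM + (MAX_RPM - MIN_RPM) * overshoot / 20))

print(fan_rpm(60, 55))  # everyday load: fans stay at minimum speed
print(fan_rpm(80, 55))  # extreme CPU stress: fans finally spin up
```

The point of the high thresholds is that with this much radiator area, the loop rarely gets hot enough to cross them, so the fans spend nearly all of their time at the quiet floor.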
  24. (This is the third part of a multi-part blog. The first part talks about component selection and is here, and the second part details the assembly and is here.) With the Obsidian Series 750D “Yamamura” built and fully operational, we have a powerful Intel Core i7-4790K and two NVIDIA GeForce GTX 980s all under water and a mountain of cooling capacity at our disposal. Certainly it’s completely unreasonable to keep these incredibly fast components running at stock speeds – they need to be pushed to their limits. Well, within reason. Specifically, they need to be pushed as far as they’ll go for 24/7 use. And at the same time, it doesn’t make sense to leave the fans cooling two 240mm radiators and a 360mm radiator running at full bore all the time. The Intel Core i7-4790K we’re using is an engineering sample, and it had already been running at a 4.7GHz overclock under my last loop. This time I have access to an arguably better waterblock, more cooling capacity, and a more efficient pump. Yet 4.8GHz remains elusive and ultimately out of the reach of this chip. This i7-4790K will do 4.7GHz all day at 1.3V on the core and 1.9V on the VRIN, but that’s already a healthy jump from the 1.225V on the core needed for just 4.6GHz. The essential problem the chip ran into is heat. Devil’s Canyon does a far better job of conducting heat off of the die than conventional Haswell did, but you’re still looking at a combination of limiting factors: high heat density stemming from a small die, with heat having to travel through Intel’s TIM and heatspreader, into more TIM, and then into the EKWB Supremacy EVO block. There’s only so much heat that can be efficiently removed from that chain, and unfortunately, the extreme voltage required to flirt with 4.8GHz generates heat faster than it can be safely removed from the die. So we remain at a still-speedy 4.7GHz. NVIDIA’s GeForce GTX 980 is an odd duck when it comes to overclocking. 
The 980 is an extremely efficient chip, but NVIDIA tuned its stock design to maximize that efficiency. This means a very restrictive power cap and a reference PCB design that is unfriendly to high overclocks. By watercooling the GM204 die, the VRAM, and – arguably most importantly – the power circuitry, we can circle around and eke out more performance.

My experience with overclocking watercooled GTX 780s and 680s was that you could find the top speed the GPU would hit even under air, but watercooling ensured the chip spent the majority of its time at that top speed. You can ramp the fan on the stock air cooler to keep thermals in check, but the power limit was more likely to be hit, and the GPU clock would bounce between a few speed bins. The unsung hero of watercooling graphics cards is the VRM cooling: kept at substantially lower temperatures, the power circuitry operates far more efficiently. This actually ekes out headroom at the top of the power curve, where NVIDIA’s TDP limits come into play.

Where the GTX 980 differs is that this area is still restricted, even with the 125% power limit applied. That top speed stabilizes a bit more, but the 980 still smashes its idiot head into the power cap. Our only recourse is to modify the BIOS of the graphics card, which is something that should be done with extreme caution and care.

The modified BIOS I used raises voltages along with the power cap, but in practice it was not able to unlock higher speeds overall for the GTX 980. 1540MHz continued to be the highest speed the GPU core would run at, but the raised power cap further stabilized operation at that speed. Incidentally, the modified BIOS also allowed VRAM speed to hit a staggering 8GHz, up from the 7.8GHz overclock on the stock BIOS. This is primarily academic, but 8GHz on GDDR5 looks cooler than 7.8GHz.
| Component | Stock | Overclocked | Improvement |
|---|---|---|---|
| CPU | Intel Core i7-4790K, 4GHz stock / 4.4GHz turbo, ~1.1V Core, 1.78V VRIN | Intel Core i7-4790K, 4.7GHz, 1.3V Core, 1.9V VRIN | 12% (+500MHz over turbo) |
| DRAM | Corsair Dominator Platinum 4x8GB DDR3-2400 10-12-12-32 2T @ 1.65V | – | N/A |
| Graphics | 2x NVIDIA GeForce GTX 980 4GB GDDR5, ~1.3GHz peak boost clock, 7GHz GDDR5 | 2x NVIDIA GeForce GTX 980 4GB GDDR5, 1.54GHz peak boost clock, 8GHz GDDR5 | 18% GPU, 14% VRAM (+240MHz maximum boost clock, +1GHz GDDR5) |
| Motherboard | ASUS Z97-WS | – | N/A |
| Storage | 480GB Neutron GTX (System), 3x 480GB Neutron GTX in RAID 0 (Games/Scratch) | – | N/A |
| Power Supply | Corsair AX860i | – | N/A |

You can see we can get pretty substantial increases across the board, but remember that the GTX 980 SLI overclocking doesn’t tell the whole story: the modified BIOS and increased power cap also help the GTX 980s sustain that 1540MHz peak.

| Benchmark | Stock | Overclocked | Improvement |
|---|---|---|---|
| PCMark 8 Adobe | 6747 PCMarks | 8055 PCMarks | 19.4% |
| Handbrake | 2162 seconds | 1938 seconds | 10.4% |
| 3DMark Fire Strike | 17306 3DMarks | 20423 3DMarks | 18% |
| 3DMark Fire Strike Extreme | 9747 3DMarks | 11755 3DMarks | 20.6% |
| BioShock Infinite (Avg/Min) | 87.19 fps (11.95 fps) | 103.9 fps (14.61 fps) | 19.2% (22.3%) |
| Tomb Raider (Avg/Min) | 59.6 fps (48 fps) | 71.8 fps (60 fps) | 20.5% (25%) |

Performance improvements when overclocked are absolutely staggering. We’re looking at almost 20% across the board, with the lone exception being Handbrake and its more modest 10% improvement. Minimum framerates improve even more; Tomb Raider now never ducks under 60fps. Keep in mind that BioShock Infinite and Tomb Raider were both run at 5760x1200 at maximum settings, with Tomb Raider running both TressFX and 2xSSAA. While the Intel Core i7-4790K would likely be perfectly served by an all-in-one cooler like a Hydro Series H80i or better and still hit these performance levels, the GeForce GTX 980s pretty much demand to be put under water.
A full cover waterblock, or at least a watercooling solution that properly accounts for VRM temperatures (like a forthcoming HG10 model), is key to unlocking Maxwell’s true performance potential. In the next chapter, we’ll talk about power consumption and heat, and how Corsair Link and the Commander Mini are deployed to smartly optimize noise levels and fan power and keep the Yamamura running at peak performance.
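As a sanity check, the improvement percentages quoted in the tables above can be reproduced from the raw scores. The one wrinkle is Handbrake: it is a runtime, so its figure is time saved relative to stock rather than a speedup ratio. The helper function below is an illustrative sketch, not part of any benchmark tool.

```python
# Reproduce the "Improvement" column from the raw stock/overclocked numbers.
# All scores are taken from the article's tables.

def pct_gain(stock: float, oc: float, lower_is_better: bool = False) -> float:
    """Percentage improvement; for times (lower is better), report time saved."""
    if lower_is_better:
        return (1 - oc / stock) * 100
    return (oc / stock - 1) * 100

print(round(pct_gain(6747, 8055), 1))                          # PCMark 8 Adobe -> 19.4
print(round(pct_gain(2162, 1938, lower_is_better=True), 1))    # Handbrake -> 10.4
print(round(pct_gain(17306, 20423), 1))                        # Fire Strike -> 18.0
print(round(pct_gain(9747, 11755), 1))                         # Fire Strike Extreme -> 20.6
```

Note that if Handbrake were reported as a speedup instead (2162/1938), it would come out to about 11.6%; the article’s 10.4% figure corresponds to the time-saved convention.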
25. Just recently we took a look at AMD’s FX-8350 CPU and came away more impressed than we’d expected. If power consumption isn’t a deal-breaker for you, AMD’s Vishera CPUs can provide stellar bang for the buck. Today we’re putting the screws to their true mainstream champion, the $149 FX-6300.

I continue to be impressed by how well the FX CPUs respond to liquid cooling. The FX-8350 and now the FX-6300 both ran under an H100i and never peaked higher than the mid-50s. They’ve both been fairly solid overclockers as well. Our testing was done with the same testbed used for testing the FX-8350:

CPU: AMD FX-6300 (3.5GHz, turbo to 4.1GHz, 95W TDP)
CPU Cooler: Corsair Hydro Series H100i
Motherboard: Gigabyte GA-990FXA-UD3 AM3+
RAM: 16GB (2x8GB) Corsair Vengeance Pro DDR3-2400 CAS11 @ DDR3-1866 CAS11
GPU: Sapphire Tri-X Radeon R9 290X 4GB GDDR5
SSD: Corsair Force LX 512GB SSD
PSU: Corsair AX760i 760W

And again, we used PCMark 8’s Adobe Suite, Handbrake, BioShock Infinite, and Tomb Raider for benchmarking and power consumption testing.

PCMark 8 shows a mostly clear upward trend with an odd flatline from 4.4GHz to 4.6GHz. Peak power climbs noticeably, while the average is much more conservative. Incidentally, the PCMark 8 Adobe suite doesn’t seem to benefit meaningfully from the extra module (two integer cores) in the FX-8350; the FX-6300 posts roughly the same scores at the same clocks, but draws less power in the process.

Handbrake, on the other hand, will use every last core and clock it can get its hands on. Overclocking the FX-6300 from its stock speeds all the way up to 4.8GHz introduces massive performance gains; at 4.8GHz, our benchmark takes roughly 2/3 of the time it takes to run at stock. Because Handbrake is the most CPU-dependent task in the suite, it also drives up power consumption considerably. Even then, power only increases by about 80W for the blistering overclock.

BioShock Infinite receives modest gains in performance, but Tomb Raider is flat.
Once you’ve confined the bottleneck to the GPU, CPU speed becomes less relevant. At stock speeds, the FX-6300 is plenty to get the job done, and power consumption in gaming doesn’t increase appreciably as a result of overclocking.

Much like the FX-8350, going past 4.4GHz required greater and greater increases in voltage on the FX-6300, but the headroom is there. I suspect that on a beefier motherboard, 4.9GHz or 5GHz would’ve been attainable without hitting a heat wall. Actual power consumption only really takes off at about 4.6GHz, though, excepting the entirely CPU-bound Handbrake, which sees the largest increases in power draw.

It’s easy to go on and on about how fast and efficient modern Intel processors are, but while AMD’s Bulldozer-derived architectures do underwhelm somewhat, they’re not actually bad, and they’re certainly powerful enough for any modern task or game. At $149, the FX-6300 is an excellent alternative to Intel’s clock-locked lineup. Power consumption does increase, but not as dramatically as you may have heard, and the performance is there. Enthusiasts looking to extract extra performance for less than the price of Intel’s Core i5-4670K and i5-4690K would do well to check out AMD’s FX-6300.
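The Handbrake result above is roughly what clock scaling alone would predict. For a purely CPU-bound encode, runtime should shrink in proportion to clock speed; the sketch below uses the article’s 3.5GHz base and 4.8GHz overclock, and assumes the stock all-core load runs near base clock rather than the 4.1GHz single-core turbo.

```python
# Rough check of the "roughly 2/3 the time at 4.8GHz" Handbrake claim:
# for a CPU-bound workload, runtime scales roughly inversely with clock.
STOCK_ALL_CORE_GHZ = 3.5  # FX-6300 base clock (assumed all-core sustained speed)
OC_GHZ = 4.8              # overclock reached in the article

ideal_time_ratio = STOCK_ALL_CORE_GHZ / OC_GHZ
print(round(ideal_time_ratio, 2))  # -> 0.73, in line with the observed ~2/3
```

The predicted ratio (~0.73) and the observed ~0.67 are close; the gap could plausibly come from the benchmark not running at full base clock at stock, but that is speculation beyond the article’s data.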