Corsair Community

4k Bulldog?


mattlach


The advertising seems a little optimistic at best.

 

You can't do 4k gaming with any current single GPU.

 

Heck, I have TWO 980 Tis, and Metro 2033 (a five-year-old title now) crawls along at only 40 fps in most places at 4K.

 

Maybe this is intended for next-generation Pascal GPUs?

 

You really shouldn't be claiming "4K" unless the hardware can keep the frame rate from ever dropping below 60 fps, even for a fraction of a second, with all effects turned on, plus decent AA (at least 4x MSAA, or a more modern FXAA-style AA) and 16x AF.
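
(For what it's worth, that bar is easy to check mechanically. Here's a minimal sketch, assuming a per-frame frame-time capture in milliseconds, one value per line, like what FRAPS or PresentMon can export; the file name is made up:)

```python
# Minimal sketch: test a frame-time capture against a "never below 60 fps" bar.
# Assumes a plain text file with one frame time in milliseconds per line
# (e.g. exported from FRAPS or PresentMon); the file name is illustrative.

def meets_fps_bar(path: str, min_fps: float = 60.0) -> bool:
    limit_ms = 1000.0 / min_fps  # 16.67 ms per frame at 60 fps
    with open(path) as f:
        frame_times = [float(line) for line in f if line.strip()]
    worst_ms = max(frame_times)
    print(f"worst frame: {worst_ms:.2f} ms ({1000.0 / worst_ms:.1f} fps)")
    return worst_ms <= limit_ms

# meets_fps_bar("frametimes.txt")
```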

 

That simply is not possible with a single GPU slot today.


Yeah, you won't be able to max out a game at 4K and hold 60 FPS with most setups, but you'll certainly be able to attain 60 FPS with settings adjustments.


  • 2 months later...

The MSI Z170I Gaming Pro AC has also been optimised for 4K and has an insane number of features to help with hardware performance.

 

If people are interested in getting a Bulldog for enthusiast gaming, or perhaps for computer animation, they should probably get that motherboard. Its nearest competitor is the Gigabyte Z170N Gaming 5.



Lol.

 

 

How can a motherboard possibly be optimized for 4K?

 

They have nothing to do with each other.


  • Corsair Employee

Yeah, the optimization is pretty much going to be all in the graphics card and drivers.

 

So my home machine is running two overclocked 980s in SLI, but I've played around with replacing them with an overclocked 980 Ti (like a Hydro GFX, but overclocked with a custom BIOS). I will say that there have been situations where the SLI-powered rig generated a notably better gaming experience, but other times where the 980 Ti seemed to be stronger. Multi-GPU isn't 100% flawless and I don't think it'll ever be.

 

When we talk about 4K gaming, at least for this generation of Bulldog, remember a couple things:

 

1. The target we need to hit is 4K resolution, >30 fps, with at least high settings. This sets us miles ahead of consoles.

 

2. What Bulldog brings to the table is the ability to plug a liquid cooled GPU into your PC alongside a liquid cooled CPU, in a (relatively) small form factor. If we're at a situation where every last drop of performance is needed to have the best 4K gaming experience, then it stands to reason our product needs to be overclocking-ready and capable. So that's what Bulldog is meant to do: get you better than best.


The reason SLI works in some places but not in others is that the VRAM on the slave cards isn't used. So if you SLI four cards with 6 GB of VRAM each, you will still have 6 GB of VRAM in total.

 

So if you play a VRAM-hungry game, it will run very poorly in an SLI configuration, but if you play a game that relies more on raw GPU processing power and doesn't need to store as much in VRAM, then SLI will boost that configuration pretty well.

 

That's why most cards gain so little performance when SLI'd: the bottleneck is in the VRAM, not in the GPU.

 

Google "VRAM stack SLI" for more information on this topic.



Partially correct.

 

The VRAM on all the cards in SLI is actually used; you just can't add it up, as the same data is duplicated in the VRAM of every card, because the PCIe bus is far too slow for one card to rely on memory that lives on another card.
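
(A toy model of that, with made-up numbers, just to make the point concrete:)

```python
# Toy model of SLI VRAM mirroring: every card holds a full copy of the
# working set (textures, geometry, render targets), so usable capacity
# is the per-card amount, not the sum across cards.

def usable_vram_gb(num_cards: int, vram_per_card_gb: float) -> float:
    total_on_spec_sheet = num_cards * vram_per_card_gb  # what marketing adds up
    return total_on_spec_sheet / num_cards              # what the game can use

print(usable_vram_gb(4, 6.0))  # four 6 GB cards in SLI -> 6.0 GB usable
```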

 

This is not the reason for poor SLI scaling, though. Neither the amount nor the speed of VRAM has a significant impact on how well SLI scales.

 

Poor SLI scaling is usually due to a combination of CPU limitations and the overhead of transporting the rendered frame buffer from each card over the PCIe bus to the card connected to the monitor, where it is output. The higher the resolution, the poorer the scaling, as more data needs to be transported.

 

It may not sound like much, but when you are doing it 60+ times per second it adds up.

 

A single uncompressed 4K frame is 3840 × 2160 pixels × 24 bits per pixel (24-bit color, no alpha on output frames) = 199,065,600 bits = 24,883,200 bytes, or ~23.73 MB.

 

In AFR, every other frame is rendered on the GPU connected directly to the monitor, and every other frame has to be fetched from the secondary card, so at 60 fps a full frame traverses the bus 30 times per second, or ~712 MB/s.
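
(The same numbers as a quick script, assuming the two-card AFR setup and 60 fps target used above:)

```python
# Back-of-the-envelope AFR bus-traffic estimate for a two-card setup.
# Assumes 24 bits per pixel and that every other frame (the ones rendered
# on the secondary card) must cross the PCIe bus.

WIDTH, HEIGHT, BPP = 3840, 2160, 24          # 4K, 24-bit color, no alpha

frame_bytes = WIDTH * HEIGHT * BPP // 8      # 24,883,200 bytes
frame_mb = frame_bytes / 2**20               # ~23.73 MB

fps = 60
frames_over_bus = fps // 2                   # AFR, two cards: half the frames
bandwidth_mb_s = frames_over_bus * frame_mb  # ~712 MB/s

print(f"frame: {frame_mb:.2f} MB, bus traffic at {fps} fps: {bandwidth_mb_s:.0f} MB/s")
```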

 

The CPU also adds to this. Not only does running the game at a higher frame rate shift the bottleneck from the GPUs over to the CPU, but the calculations the drivers need for SLI also increase the CPU load significantly, even at the same frame rate.

 

So much so that when my favorite game, Red Orchestra 2, launched back in 2011, my brand-new AMD system could not keep up with the CPU load from two Radeon 6970s. (Yes, I know, that's CrossFire, not SLI, but the principle is the same.)

 

In fact, there was no AMD CPU on the market at the time that could handle Crossfire in RO2, which was disappointing to an old AMD fan like myself.

 

That's how I wound up with the i7-3930K.

 

VRAM has very little to do with SLI's poor scaling efficiency.

 

Yes, it's a shame that they can't combine to give you 12 GB of VRAM, but it's a moot point, as I don't think I've ever seen my cards use more than ~2.5 GB anyway :p

 

Another issue with SLI is the inherent added input lag that results from alternate frame rendering. Compare a theoretical fast single GPU with two theoretical GPUs, each half as fast, in SLI.

 

The frame rate may be the same (provided you get 100% scaling, which will not happen), but the per-frame render time is longer, so input takes much longer to show up on screen.
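
(Rough numbers, assuming perfect scaling and ignoring driver frame queueing:)

```python
# Idealized latency comparison: one fast GPU vs. two half-speed GPUs in AFR.
# Both deliver 60 fps, but each AFR frame takes twice as long to render,
# so input shows up on screen roughly one frame later.

target_fps = 60
single_gpu_frame_ms = 1000 / target_fps     # 16.7 ms to render each frame
afr_gpu_frame_ms = 2 * single_gpu_frame_ms  # each half-speed card: 33.3 ms

# Two cards alternating still finish a frame every 16.7 ms, so the displayed
# frame rate matches the single GPU...
afr_output_interval_ms = afr_gpu_frame_ms / 2

print(f"single GPU: {1000 / single_gpu_frame_ms:.0f} fps, ~{single_gpu_frame_ms:.1f} ms input-to-screen")
print(f"AFR pair:   {1000 / afr_output_interval_ms:.0f} fps, ~{afr_gpu_frame_ms:.1f} ms input-to-screen")
```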

 

Illustration from an old 1999 review showing the inevitable input lag associated with AFR:

 

http://img.tomshardware.com/us/1999/11/08/preview_of_the_double_whopper_/lag.gif

