Corsair Community

Best settings balance for Twin2X4096-8500C5DF



I'm looking for advice on the best balance between clock speed and relaxed timings when running the above memory with a Core 2 Quad CPU. The common consensus seems to be to relax the timings slightly, as the greater benefit comes from higher clock speeds rather than tighter timings. I was just wondering at what point that stops being the case?

There is a point of diminishing returns with any ratio: the point at which tighter timings finally benefit you more than raw speed.

 

For example, with Intel CPUs on canned bandwidth benchmark utilities (rough numbers for these cases follow the list):

  • 667MHz at 3-3-3-10 cannot compete with 1066MHz at 5-5-5-18.

  • 800MHz at 3-4-3-9 can and does compete with 1066MHz at 5-5-5-18.

  • 800MHz at 4-4-4-12 is close in most instances to 1066MHz at 5-5-5-18.

  • 800MHz at 5-4-4-12 loses out to 1066MHz at 5-5-5-18.
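
As a rough sketch of the arithmetic behind those rankings (my own illustration, assuming a single 64-bit DDR2 channel and ignoring the secondary timings), you can compare each configuration's first-word CAS latency in nanoseconds against its peak bandwidth:

```python
# Rough comparison of the configurations above. "Speed" is the DDR2 data rate in MT/s;
# the command clock runs at half that rate, and CAS latency is counted in command-clock
# cycles. Bandwidth assumes a single 64-bit channel.

configs = [
    (667,  3),   # DDR2-667  CL3
    (800,  3),   # DDR2-800  CL3
    (800,  4),   # DDR2-800  CL4
    (800,  5),   # DDR2-800  CL5
    (1066, 5),   # DDR2-1066 CL5
]

for rate, cl in configs:
    clock_ns = 2000.0 / rate      # command-clock period in ns
    latency_ns = cl * clock_ns    # time from read command to first word
    bandwidth = rate * 8          # peak MB/s on one 64-bit channel
    print(f"DDR2-{rate} CL{cl}: ~{latency_ns:4.1f} ns first word, ~{bandwidth} MB/s peak")

# DDR2-800 CL3 (~7.5 ns) actually beats DDR2-1066 CL5 (~9.4 ns) on latency, which is
# why it competes despite the lower peak bandwidth; DDR2-800 CL5 (~12.5 ns) loses on
# both counts, matching the list above.
```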

 

Systems with the memory running the fastest have the most bandwidth available. But if a good portion of that bandwidth is only theoretical, it is not all that important, and tighter timings could well matter far more. To give a comparison: you have a four-lane highway that works well (no congestion) with a given number of cars. However, there have never been more than 70% of that number of cars on the highway. The remaining 30% of the available space is held in abeyance; testing shows a theoretical capacity, not a reality.

 

But what about the real world? Take that comparison into the world of DRAM speeds and latencies. With a dual-core processor, 800MHz memory does not saturate the memory bus; some of the theoretical bandwidth already goes unused. Call it a tentative 20%. Move to 1066MHz and perhaps 5% of that headroom gets used, and only in certain cases. Now insert a quad-core processor in which the two pairs of cores do not share their caches. When both pairs need the same data, identical data must be transferred concurrently, and that spare 20% is available for exactly that use, avoiding stalls while the cores wait on data transfers.

 

See? The right decision depends fundamentally on how the system is used and which processor is in it. With quad-core processors running data-heavy workloads, programs that multiplex data such as audio and video, astronomy applications, and so on, the faster data rate is far more important than the timings. That is because the data sits in a queue awaiting pickup by the memory controller, and the timings (refresh intervals, bank switches, etc.) are pretty much secondary to that issue.


Primarily the system will be used as a games machine, but also to encode/recode audio and video. So if I am understanding you correctly, I should aim for timings of around 5-5-5-15 at the maximum clock speed those timings will allow while retaining system stability, rather than testing at more relaxed timings and higher clock speeds, or tighter timings and lower speeds. Also, is there a performance gain to be had by decreasing the memory and CPU multipliers to allow a further increase in bus speed, or is it just a case of trying combinations of all the above and running various benchmarks to find the best real-world results?

All in all, in the real world, and with four DRAM slots populated, it's not going to give you much of a differential anyway. It is a few percent either way, and you will do far better to run tests with a digital timer and your own software than to ask theoretical questions.

Sorry for the confusion. I've swapped out the RAM today for 2x2GB modules, as running four sticks was proving to hold the memory back. The memory is running at stock speed at the moment; I will be trying to push it a bit harder over the next couple of days and just wanted to know which direction to head in.


In that case, I tend to agree with the view that greater bandwidth will show more improvement than lowered timings.


Does the CM2X2048-8500C5DF usually show a maximum bandwidth of PC2-6400 in CPU-Z?

 

Yes. You will find this because the standard SPD table is programmed for compatibility with the JEDEC standards for DDR2, which top out at PC2-6400 with 5-5-5-18 at 1.8V. However, you should also see the model number, which will show the 8500 rating, and if you check the SPD you should see the extended profile of 533MHz (DDR2-1066).
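
As a side note on the naming, the PC2 rating is simply the module's peak bandwidth in MB/s: the data rate in MT/s multiplied by the 8-byte bus width. A quick sketch of that conversion, purely illustrative:

```python
# PC2 rating = DDR2 data rate (MT/s) * 8 bytes (64-bit module bus), rounded for marketing.
for rate in (533, 667, 800, 1066):
    print(f"DDR2-{rate:4d} -> {rate * 8} MB/s peak")

# DDR2-800  -> 6400 MB/s, i.e. the PC2-6400 JEDEC profile CPU-Z shows by default.
# DDR2-1066 -> 8528 MB/s, marketed as PC2-8500, the module's rated speed in the
#              extended SPD profile.
```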


That's exactly what is there. Thanks for your help; I just wasn't sure if I'd overlooked something in the BIOS. I'm having a problem with CPU temps now, which I know is not related, so I will get to pushing speeds up a bit once I have some more thermal paste.

Sorry to drag this thread up again, DerekT. I was just looking at a HardForum page you posted in one of your replies in another thread. I am currently running my memory at a ratio of 2.4 at 444FSB, which CPU-Z reports as running at 1066MHz in dual channel symmetric mode. If I am reading the article correctly, I could actually be hampering the performance of the memory by running a divider of more than 2; even though a 1:1 ratio would give a lower frequency of 888MHz, would the memory perform better at that speed in synchronous mode?
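
For anyone following the arithmetic, the DDR2 data rate is simply the host (FSB) clock multiplied by the system memory ratio. A quick check of the two options being weighed above, as a sketch of the quoted numbers:

```python
# DDR2 data rate = host (FSB) clock in MHz * system memory multiplier.
fsb = 444
for ratio in (2.4, 2.0):
    print(f"{fsb} MHz FSB x {ratio} -> DDR2-{fsb * ratio:.0f}")

# 444 MHz FSB x 2.4 -> DDR2-1066 (the current setting)
# 444 MHz FSB x 2.0 -> DDR2-888  (synchronous, 1:1 with the FSB)
```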

It depends. You need to test your system to find out what actually works best for you.

 

Use Everest (http://www.lavalys.com) to test memory bandwidth, then try your different values and dividers/straps and see what gives the best results.

 

Q6600 @ 3.6GHz (400FSB), 400 strap, and 1066MHz 5-5-3-15 DRAM, 2 x 2048MB.

 

http://i259.photobucket.com/albums/hh293/DerekT2008/4gb.jpg


So what exactly is the difference between symmetric and synchronous dual channel modes in terms of performance?

 

There are only three modes that I know of at the moment, and synchronous is the same thing as symmetric. Most Intel systems support only two modes: Single Channel and Dual Channel. Intel's i965/P35 and X38/X48 chipsets add a third memory mode, Dual Channel Asymmetric, via Intel "Flex" technology.

  1. Single Channel – only one channel of memory is routed and populated. There can be two channels of memory routed, but only one channel is populated and can be either channel A or channel B.

     

  2. Dual Channel Asymmetric – both channels are populated, but each channel has a different amount (MB) of total memory.

     

  3. Dual Channel Symmetric – both channels are populated where each channel has the same amount (MB) of total memory.

Single-Channel

 

The system will enter single-channel mode when only one channel of memory is routed on the motherboard, or if two channels of memory are routed but only one channel is populated. In this configuration, all memory cycles are directed to a single channel.

 

Dual-Channel Asymmetric

 

This mode is entered when both memory channels are routed and populated with different amounts (MB) of total memory. With the aid of Intel Flex Memory Technology this configuration allows addresses to be bounced between channels in interleaved mode until the top of the smaller channel’s memory is reached, allowing for full dual channel performance in that range. Access to higher addresses will all be to the channel with the larger amount of memory populated; thus giving single channel performance through those addresses.

 

Dual-Channel Symmetric

 

This mode allows the end user to achieve maximum performance on real applications by using the full 64-bit dual-channel memory interface in parallel across the channels.

The end user is only required to populate both channels with the same amount (MB) of total memory to achieve this mode. The DRAM component technology, device width, device ranks, and page size may vary from one channel to another. Addresses are bounced between the channels, and the switch happens after each cache line (64-byte boundary). If two consecutive cache lines are requested, both may be retrieved simultaneously, since they are ensured to be on opposite channels.
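
To make the interleaving concrete, here is a toy sketch (my own illustration, not the actual (G)MCH logic): below the interleave limit, consecutive 64-byte cache lines alternate between channels, and with an asymmetric population everything above twice the smaller channel's capacity falls back to the larger channel.

```python
CACHE_LINE = 64  # bytes; the channel switch happens on each cache-line boundary

def channel_for(addr, size_a_mb, size_b_mb):
    """Toy model only: pick a channel for a physical address."""
    interleave_limit = 2 * min(size_a_mb, size_b_mb) * 1024 * 1024
    if addr < interleave_limit:
        # Dual-channel region: alternate channels every 64-byte cache line.
        return "A" if (addr // CACHE_LINE) % 2 == 0 else "B"
    # Above the interleaved region (asymmetric case): single-channel access
    # to whichever channel holds the extra memory.
    return "A" if size_a_mb >= size_b_mb else "B"

# Symmetric 2GB + 2GB: consecutive cache lines land on opposite channels.
print([channel_for(a, 2048, 2048) for a in (0, 64, 128, 192)])  # ['A', 'B', 'A', 'B']
# Asymmetric 2GB + 1GB: interleaved up to 2GB, single channel (A) above that.
print(channel_for(3 * 1024**3, 2048, 1024))                     # A
```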

 

Mixed DRAM Memory Speeds

 

The (G)MCH will accept mixed DDR2 speed populations, assuming the SPDs on the DIMMs are programmed with the correct information and the BIOS is programmed as outlined in Intel’s BIOS reference code.

 

In all operating modes (Single-Channel, Dual Channel Asymmetric, and Dual-Channel Symmetric) the frequency of the System Memory will be set to the lowest frequency with its supported speed bin timings of all DIMMs populated in the system, as determined through the SPD registers on the DIMMs. For example, a DDR2-667 DIMM with supported 5-5-5 speed bin timings installed with a DDR2-533 DIMM with supported 4-4-4 speed bin timings should run at 533 MHz with supported 4-4-4 speed bin timings. The DDR2-667 DIMM should downshift to DDR2-533 timings, thus allowing the system to run at 533 MHz with supported 4-4-4 speed bin timings. The DDR2-667 DIMM will only downshift to DDR2-533, if the timings for DDR2-533 are programmed in the DDR2-667 DIMMs SPD.
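
A minimal sketch of that selection rule, using hypothetical SPD contents (each DIMM is represented as a set of frequency/timing profiles):

```python
# Hypothetical SPD profiles: frequency (MHz) -> CL-tRCD-tRP timings.
dimm_1 = {667: (5, 5, 5), 533: (4, 4, 4)}   # DDR2-667 DIMM that also lists 533 timings
dimm_2 = {533: (4, 4, 4)}                   # DDR2-533 DIMM

# Run everything at the highest frequency that every installed DIMM supports,
# using that frequency's timings (here: 533 MHz at 4-4-4).
common = set(dimm_1) & set(dimm_2)
freq = max(common)
print(f"System memory runs at {freq} MHz with timings {dimm_1[freq]}")

# Note: if dimm_1's SPD did not include a 533 MHz profile, there would be no common
# entry, which is the downshift caveat described above.
```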

 

Research:

 

http://www.intel.com/support/motherboards/desktop/sb/cs-011965.htm


Which modules were you using in the Everest snapshot, and how do you access the benchmark page you've shown in it? I can only seem to run each test individually, and no matter what I do I can't get a read bandwidth of much more than 8000 using the same settings as you have in the test, unless I'm running a different version of Everest? :confused:


I'm using the Corsair Twin2X4096-8500C5 modules. One of the issues with memory benchmarks is that ASUS boards push higher DRAM throughput than other motherboard makers. This began with the i865 chipset and ASUS's manipulation of PAT/MAM (Performance Acceleration Technology and Memory Acceleration Mode), which made the i865PE Springdale chipset as fast as the far more expensive i875P Canterwood chipset, at least at stock. If one overclocked, PAT was lost unless a BIOS modification was performed. This memory tweaking has made ASUS a clear winner when testing memory against its rivals, chipset for chipset.

 

Here's the same DRAM running on an X38 chipset with an ASUS Maximus/Rampage Formula motherboard. Timings and speed are basically identical, although the processor is faster by 400MHz. Look at the writes and copies of this chipset in comparison; ASUS is always tweaking the memory speeds.

 

http://i259.photobucket.com/albums/hh293/DerekT2008/4096-1066DRAM.jpg

 

You can play with your Static tRead Value, though. Lower it and test the throughput, but run Memtest first; be sure to run Memtest whenever you make any memory timing or speed changes. Also test your Performance Enhance setting, although I don't think Gigabyte boards have much luck with that setting at its higher levels.

 

Regarding the bandwidth testing: if you look at your system tray, you will notice the Everest icon (an orange ball with an "i"). Right-click on it and choose "Tools --> Cache and Memory Benchmark".


Thanks for your help, you must have the patience of a saint! I will give those a try and let you know how I've got on shortly. Also, would it be feasible to lower the timings to, say, 6-6-6-20 and try for a higher bandwidth, or would that be neither possible nor beneficial?


Lower the timings to 6-6-6-20? You have 2 X 2048 of 8500C5 DDR2 modules, correct?


Yes, that's right. I was just wondering if there would be any performance benefit in lowering the timings in order to operate the modules at a higher frequency, i.e. 1150-1200, as 1110 is the maximum I can get stable at 5-5-3-15. I just thought it may be an idea worth trying, or would this neither work nor improve speeds? 6-6-6-20 was just a theoretical starting point; I just thought that if I can increase the frequency as much as possible while retaining stability, this may be the best approach, as frequency seems to have the biggest impact, at least in bench tests.


You would not be lowering the timings; you would be raising them. If you raised your timings to 6-6-6-20, there is no speed you could realistically reach that would make up for such a drastic move.

 

Don't get too caught up in the numbers. Keep in mind that the differential is theoretical and usually amounts to no more than 5% of the actual bandwidth. 1100 at 5-5-3-15 with 2048MB modules is just fine. Raising the timings will not give you any benefit for the minimal speed increase you would gain. The timings apply even when the extra bandwidth is not being used. Do you see this? Even though you have raised the bandwidth, much of it is theoretical, BUT the timings are not; they affect the bandwidth you actually use.
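
To put rough numbers on that (my own arithmetic, not figures from the thread): even in the best case of reaching 1200MHz at CL6, the first word arrives later than it does at 1100MHz CL5.

```python
# First-word latency = CAS cycles * command-clock period (command clock = data rate / 2).
for rate, cl in ((1100, 5), (1200, 6)):
    print(f"DDR2-{rate} CL{cl}: ~{cl * 2000.0 / rate:.1f} ns to first word")

# DDR2-1100 CL5: ~9.1 ns  (current stable 5-5-3-15 setting)
# DDR2-1200 CL6: ~10.0 ns (the looser timings cost more than the extra speed returns)
```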

 

I advise against such moves personally.


Thanks for your patience, I think the penny has finally dropped. If I understand what you're saying correctly, increasing the bandwidth basically increases the amount of data that the memory can theoretically transfer at any given time; the timings, however, affect the speed at which the memory is able to transfer that data. So if the timings are increased to allow for extra bandwidth, it would make that extra bandwidth pretty much useless, as with higher timings it would never be utilised.

Just finished the third pass of Memtest and it's looking pretty good. I can't really see anywhere that further improvements can be made. Any suggestions?
Document your CPU and memory BIOS settings and be as precise as possible. That will help me see if there is any further tweaking to be done.

Thanks for your patience, I think the penny has finally dropped. If I understand what you're saying correctly, increasing the bandwidth basically increases the amount of data that the memory can theoretically transfer at any given time; the timings, however, affect the speed at which the memory is able to transfer that data. So if the timings are increased to allow for extra bandwidth, it would make that extra bandwidth pretty much useless, as with higher timings it would never be utilised.

 

Halfway there. At that level, the extra bandwidth is seldom used. However, the raised timings will affect the bandwidth that is being used and will slow that effective bandwidth down.


I backed off the memory speed slightly, as that allowed me to use the Performance Enhance "Turbo" setting, which gave better bench results than the 1100 frequency. My current BIOS settings are:

 

CPU Multithreading Enabled

Limit CPUID Max. to 3 Disabled

No Execute Memory Protect Disabled

CPU Enhanced Halt (C1E) Disabled

CPU Thermal Monitor 2 (TM2) Enabled

Virtualisation Technology Enabled

 

 

Robust Graphics Booster Auto

CPU Clock Ratio 8x

CPU Frequency 3.52Ghz

CPU Host Clock Control Enabled

CPU Host Frequency 440

PCI Express Frequency 105

CIA2 Disabled

Performance Enhance with Turbo

System Memory Multiplier 2.5

 

SPD

Memory Frequency 800 1000

DRAM Timing Selectable Manual

 

*******Standard Timing Control*******

 

Cas Latency Time 5 5

Dram RAS# to CAS# Delay 5 5

Dram RAS# Precharge 5 3

Precharge Delay (tRAS) 18 15

 

*******Advanced Timing Control*******

 

Act to Act Delay (tRRD) 3 Auto

Rank Write to READ Delay 3 Auto

Write to Precharge Delay 6 Auto

Refresh to ACT Delay 52 1

Read to Precharge Delay 3 Auto

Static tRead Value 7 Auto

Static tRead Phase Adjust 0 Auto

 

*******System Voltage Control*******

 

System Voltage Control Manual

DDR2 Overvoltage Control +0.30

PCI-E Overvoltage Control +0.05

FSB Overvoltage Control +0.10

(G)MCH Overvoltage Control +0.150

Loadline Calibration Enabled

CPU Voltage Control 1.37500


There is a marriage between the performance settings (DRAM) and pure bandwidth. You've figured out that if you can run the DRAM in a higher performance mode at a slightly lower speed, you can still achieve higher throughput: DRAM you actually use running faster beats DRAM that is only theoretically faster but never used.

 

Good speed on that CPU. There's not really any more I can help you with; you've pretty much got the DRAM figured out. You couldn't count the number of people running their bandwidth at maximum while missing out on the performance settings, which need a slightly slower bandwidth to shine.

 

Here's my Q6600:

 

http://i259.photobucket.com/albums/hh293/DerekT2008/36-1200-1.jpg

 

Post a screenie of your bandwidth now.

