Developers: The Corsair Link USB protocol!


CFSworks


Hey, I recognize that DLL. It's an abstraction library for USB bulk transfers with the Silicon Labs part on AXi series power supplies. According to the vendor's website, you are allowed to freely redistribute this library.

 

I believe it won't help you interpret the protocol you're looking at, since that will be owned by the designer of the PSU (Flextronics, according to that teardown site). It might be exposed by the MCU firmware, if those who have unpacked it are still around and want to share.
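For what it's worth, if someone wants to replay captured bulk traffic without going through the Silicon Labs DLL, a pyusb sketch along these lines should work on Linux. This is only a sketch under assumptions: the product ID, the endpoint selection, and the two-byte payload are placeholders, not values taken from a real trace.

# Hypothetical sketch: talk to the PSU dongle's bulk endpoints with pyusb.
# PRODUCT_ID and the payload below are placeholders, not confirmed values.
import usb.core
import usb.util

VENDOR_ID = 0x1B1C    # Corsair (assumed; check lsusb on your system)
PRODUCT_ID = 0x1C00   # placeholder PID

dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
if dev is None:
    raise SystemExit("device not found")

dev.set_configuration()
intf = dev.get_active_configuration()[(0, 0)]

# Grab the first OUT and first IN endpoints on the interface.
ep_out = usb.util.find_descriptor(
    intf, custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) == usb.util.ENDPOINT_OUT)
ep_in = usb.util.find_descriptor(
    intf, custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) == usb.util.ENDPOINT_IN)

ep_out.write(bytes([0x00, 0x00]))   # placeholder payload copied from a capture
print(bytes(ep_in.read(64, timeout=1000)).hex())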

 

Speaking of MCUs, I forgot to report back omegatotal's findings! The Commander node has a Freescale S08JM MCU, which is designed for USB devices. The Cooling Link has a Freescale S08MP MCU, which is designed for "brushless DC motor applications", i.e. PC fans. Its communication interface is SMBus.
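If anyone wants to probe that bus directly from a Linux host (say, with an I2C adapter wired to the Cooling node), a first poke could look like the sketch below. This is a minimal sketch assuming the smbus2 package; the bus number, slave address, and register are placeholders I made up, not values read from the hardware.

# Hypothetical sketch only: poll a fan-speed register over SMBus/I2C.
# The bus number, the 7-bit address (0x2C) and the register (0x10) are
# placeholders, not values taken from the Cooling node.
from smbus2 import SMBus

BUS = 1           # /dev/i2c-1 on a typical adapter acting as SMBus master
NODE_ADDR = 0x2C  # hypothetical 7-bit slave address
REG_FAN_RPM = 0x10

with SMBus(BUS) as bus:
    lo = bus.read_byte_data(NODE_ADDR, REG_FAN_RPM)
    hi = bus.read_byte_data(NODE_ADDR, REG_FAN_RPM + 1)
    print("fan speed:", (hi << 8) | lo, "RPM (assuming a little-endian 16-bit value)")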



Can anyone make any sense of the USB trace I captured when CL started up, please? CL seems to be using SIUSBXP.dll, so the source would be handy.

 

The protocol seems rather different to that for the HID CL devices.

 

[Attached screenshot: USB trace captured at CL startup]

 

That looks like Manchester encoding in the data packet.
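If it is Manchester, decoding is mechanical once the trace is sliced into half-bit samples. A throwaway sketch, assuming the IEEE 802.3 convention (low-to-high transition = 1); the G.E. Thomas convention is simply the inverse.

# Minimal Manchester decoder sketch. Assumes the stream is already sliced
# into half-bit samples and uses the IEEE 802.3 convention
# (low->high transition = 1, high->low = 0).
def manchester_decode(halfbits):
    bits = []
    for i in range(0, len(halfbits) - 1, 2):
        pair = (halfbits[i], halfbits[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            raise ValueError(f"invalid pair {pair} at sample {i}")
    return bits

# Example: 0b1010 encoded as 01 10 01 10 under this convention.
print(manchester_decode([0, 1, 1, 0, 0, 1, 1, 0]))  # -> [1, 0, 1, 0]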


  • 1 month later...

I aten't dead.

 

Took MUCH longer to build my new PC than I had intended. Got that done yesterday. Now I have a hacked up CorsairLinkPlusPlus (more accurately, I fixed HidSharp & built CL++ against it) running on Linux.

 

directhex@bubblegum:~/Projects/CorsairLinkPlusPlus/CorsairLinkPlusPlus.CLI/bin/Debug$ mono CorsairLinkPlusPlus.CLI.exe

--START--
+ Root Device
+ Corsair Link
	+ Corsair H100i USB
		+ Corsair H100i
			- Fan 2 = 957 RPM
				Fan.CorsairLink.Default
			- Fan 3 = 981 RPM
				Fan.CorsairLink.Default
			- Pump 4 = 2223 RPM
			- Temp 0 = 26.76953125 °C
			- LED 0 = 255, 0, 0 RGB
				LED.CorsairLink.SingleColor
					255, 0, 0
		+ Corsair PSU AX860i
			- Temp 0 = 31.5 °C
			- Fan 0 = 0 RPM
				Fan.CorsairLink.Default
			+ PSU 5V
				- Current 0 = 3.125 A
				- Power 0 = 15 W
				- Voltage 0 = 5.03125 V
			+ PSU 3.3V
				- Current 0 = 3 A
				- Power 0 = 9 W
				- Voltage 0 = 3.3125 V
			- PCIe 1 Current = 0 A
			- PCIe 2 Current = 0 A
			- PCIe 3 Current = 0 A
			- PCIe 4 Current = 0 A
			- PCIe 5 Current = 0 A
			- PCIe 6 Current = 0 A
			- PSU 12V Current = 0 A
			- PERIPHERAL 12V Current = 0 A
			+ Mains
				- Current 0 = 0.5625 A
				- Power 0 = 92.1758952386575 W
				- Voltage 0 = 240 V
				- Power 0 = 103.75 W
				- Efficiency 0 = 76.7552316199242 %
-- END --

 

So good job, guys, especially Doridian: it was trivial to make this cross-platform.
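For anyone who'd rather prototype in a scripting language before touching CL++, the device enumeration side is a few lines with the cython-hidapi bindings (pip install hidapi). A quick sketch, assuming 0x1B1C is the Corsair vendor ID on your devices (verify with lsusb).

# Quick HID enumeration sketch using cython-hidapi (pip install hidapi).
# 0x1B1C is assumed to be Corsair's USB vendor ID; verify with lsusb.
import hid

CORSAIR_VID = 0x1B1C

for info in hid.enumerate(CORSAIR_VID):
    print("{:04x}:{:04x} {} @ {}".format(
        info["vendor_id"], info["product_id"],
        info["product_string"], info["path"].decode()))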


  • 2 weeks later...
CFSworks,

First thank you for taking the time to create this post and report what you found so far. There are some issues as mentioned previously about releasing the source code. Honestly, we wanted to have the source code available a long time ago for this purpose but that was just not an option for the current product. We will get with the Product Manager and see what information can be released but as far as I know we are not allowed to release any source code at this time due to other agreements.

 

Has there been any change in terms of what can be released since this statement dated 07-15-2013 please? Some information on the H100iGTX protocol/API would be great.


Has there been any change in terms of what can be released since this statement dated 07-15-2013 please? Some information on the H100iGTX protocol/API would be great.

 

I doubt they will ever release anything, tbh, but I have an H100i GTX (one that CL doesn't manage to "read" properly either), so I could do capture/testing if anyone needs that.


 

Finding out what MCU is inside would be great if we wanted to develop our own firmware. Even if we don't make any custom firmware, we can still help out red-ray by checking the manufacturer's app notes for the upload procedure.

 

[...]

 

Also, despite all the firmware images being SREC, there might be different MCUs on each device. Has anyone opened up any other devices to find out? I imagine some people have opened the Commander nodes at least, since that would have a low risk of causing any damage, and they look like they were designed for experimentation from the start.

 

The cooling node uses a Freescale MC9S08MP16 processor.

 

Datasheet: http://cache.freescale.com/files/microcontrollers/doc/data_sheet/MC9S08MP16DS.pdf

 

Reference Manual:

http://cache.freescale.com/files/microcontrollers/doc/ref_manual/MC9S08MP16RM.pdf

 

Here is the cooling node with a 6-pin Freescale BDM programming header in the top left corner.

 

I believe the H100 also uses the same MCU, but I can't find any photos from when I took mine apart to confirm.


The cooling node uses a Freescale MC9S08MP16 processor.

 

Thank you for the pointers. Is http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=S08MP the correct link for all the other information? I plan to have a look over the next few days, but at the moment I am looking into adding H100iGTX/H80iGT support to SIV. Does anyone know if the CL Mini uses the same chip? I suspect not, as the I2C register layout is very different.

 

I have cropped and rotated the attached image so the header is now top right.

[Attached photo: Cooling node PCB, cropped and rotated so the BDM header is top right]


Thank you for the pointers. Is http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=S08MP the correct link for all the other information? I plan to have a look over the next few days, but at the moment I am looking into adding H100iGTX/H80iGT support to SIV. Does anyone know if the CL Mini uses the same chip? I suspect not, as the I2C register layout is very different.

 

I have cropped and rotated the attached image so the header is now top right.

 

That looks like the correct link. You'll probably also want the free IDE (Evaluation) for firmware development. (Thanks Nadar for the correction.)

 

The IDE includes a disassembler at C:\Freescale\CW MCU v10.4\MCU\prog\decoder.exe, so you can turn the .s19 files back into assembly.
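If you'd rather inspect the .s19 images without installing the IDE, the S-record format is simple enough to pull apart by hand. A minimal sketch that only handles S1 records (16-bit addresses, which is all a 16 kB S08 part needs) and verifies each record's checksum:

# Minimal S-record (.s19) reader sketch: extracts address/data from S1
# records and verifies the checksum. S0/S9 and wide-address S2/S3 records
# are skipped for brevity.
def parse_s19(path):
    image = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line.startswith("S1"):
                continue
            count = int(line[2:4], 16)
            body = bytes.fromhex(line[4:4 + count * 2])
            addr = int.from_bytes(body[:2], "big")
            data, checksum = body[2:-1], body[-1]
            calc = 0xFF - ((count + sum(body[:-1])) & 0xFF)
            if calc != checksum:
                raise ValueError(f"bad checksum in record: {line}")
            for offset, byte in enumerate(data):
                image[addr + offset] = byte
    return image

# image = parse_s19("firmware.s19")  # dict of address -> byte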

 

 

EDIT: fixed link to IDE.

Edited by _pseudonym

That looks like the correct link. You'll probably also want the free IDE for firmware development.

 

I looked at the IDE download mostly out of curiosity (I'm not going to teach myself assembly for this). I haven't actually tried to install it, but it doesn't look very free to me. The only download available is the professional evaluation, which gives me until June 10, after handing over a ridiculous amount of information during registration. Is it the license.dat from the page you linked to that makes it a free version, or is "free" simply the name (from Freescale)?

Edited by Nadar

I looked at the IDE download mostly out of curiosity (I'm not going to teach myself assembly for this). I haven't actually tried to install it, but it doesn't look very free to me. The only download available is the professional evaluation, which gives me until June 10, after handing over a ridiculous amount of information during registration. Is it the license.dat from the page you linked to that makes it a free version, or is "free" simply the name (from Freescale)?

 

You are correct, I had the wrong link there. The non-time-limited version is the evaluation version. It does have limits on code size (64kB), but the MCU only has 16kB of flash so it doesn't matter.

 

And yes, they unfortunately do want to collect all sorts of information from you, but I'm sure you can find a way to tell them what they want to hear.


And yes, they unfortunately do want to collect all sorts of information from you, but I'm sure you can find a way to tell them what they want to hear.

I just lie, as I always do when someone tries to force information out of me that's none of their business; I just dislike being put through that. A one-time email address is probably also the smart thing here, as this seems like a company that will spam you forever.

 

That aside, I'm still confused about their terms of evaluation/trial and the different versions. It seems to me like the only free option is a limited version, called "evaluation" for the Eclipse version and "special" for the classic IDE.

 

Just to warn anyone who wants to go down this path: I tried to install the "classic IDE" version, and it failed to install on a 64-bit system. Hopefully the Eclipse version will work on modern OSes.


Email catch-alls are nice; I can give out any email address on my domain and know exactly who is selling my info.

 

Then when I start getting spam, I use some scripting/mail rules to 'return to sender' and also forward a copy to the support email at the original page, with a few choice words in the subject line. :-P

 

I really wish Corsair would open-source the spec on the USB protocol for all devices, but I also understand protecting your IP from getting copied for/by counterfeiters. :-/


Email catch-alls are nice; I can give out any email address on my domain and know exactly who is selling my info.

 

Then when I start getting spam, I use some scripting/mail rules to 'return to sender' and also forward a copy to the support email at the original page, with a few choice words in the subject line. :-P

 

I really wish Corsair would open-source the spec on the USB protocol for all devices, but I also understand protecting your IP from getting copied for/by counterfeiters. :-/

 

 

Corsair isn't going to release the Link software USB protocol no matter how much the select few continue to raise Cain.

 

I'm not trying to insult anyone, but it's readily apparent some don't have a sense of business. Has anyone ever heard of a Non-Disclosure Agreement, Intellectual Property Rights, and/or Patent Rights?

 

Corsair doesn't own the Intellectual Property Rights for the LINK software; it's licensed from CoolIT. A violation of their legal agreement would mean repercussions in the hundreds of thousands, if not millions, with attorneys having a field day.

 

Any Corsair employee can weigh in on this if it's not correct, or if the LINK software is owned outright by Corsair; please correct me if I'm wrong.


As is often the case, you are half right and failed to research this before you posted. The H100iGTX comes from http://asetek.com/customers/do-it-yourself/corsair/corsair-hydro-series-h100i-gtx.aspx.

 

An interesting question is, given it does not come from CoolIT, how did they manage to get CL to support it?

 

As you do not know the exact terms of the NDA, you are just speculating. For all you know, Corsair may be able to release such information provided an appropriate NDA is set up.

 

 

If you had bothered to read my post, Unofficial Hydro Series Installation Guide, it clearly states the H80i GT and H100i GTX are Asetek-made pumps.

 

Red-ray, you don't even know the difference between a Core Parking issue and a Memory Leak issue.


Corsair isn't going to release the Link software USB protocol no matter how much the select few continue to raise Cain.

 

I'm not trying to insult anyone, but it's readily apparent some don't have a sense of business. Has anyone ever heard of a Non-Disclosure Agreement, Intellectual Property Rights, and/or Patent Rights?

 

Corsair doesn't own the Intellectual Property Rights for the LINK software; it's licensed from CoolIT. A violation of their legal agreement would mean repercussions in the hundreds of thousands, if not millions, with attorneys having a field day.

 

Any Corsair employee can weigh in on this if it's not correct, or if the LINK software is owned outright by Corsair; please correct me if I'm wrong.

I'm sorry, but I think it's you who don't understand. Corsair sells us hardware that depends on "free" software to use. The software isn't really free; you pay for it when you pay for the hardware. This software doesn't work properly, and hasn't done so in years. They show no interest in, or capability of, solving the issues (it's hard to tell which it is). We who bought the hardware are stuck with a product we can't use, so we ask for a description of how to communicate with the hardware we have bought, to compensate for the fact that the broken software renders the hardware useless.

 

I struggle to see how that has anything to do with not having a sense of business. To me, as a customer, it's of no interest whatsoever how Corsair made the software; it's part of a product I bought and it doesn't work. Whatever deals and agreements they may have made are completely irrelevant to me; I only have an agreement with Corsair (by purchasing products from them).

 

If Corsair lacked the knowledge to produce the software themselves, they should hire someone to make it for them. It's obvious to me that such an agreement would give Corsair all IP rights to the software; anything else would be a grave error on Corsair's part. On top of this, we're not asking for the source code of the CL software, but simply for how to communicate with the hardware. To have signed a deal that makes that a secret seems to me extremely short-sighted. When you combine this with the fact that the software hasn't been fixed within anything resembling a reasonable time frame, I see only two possible solutions: Corsair renegotiates the deal and secures the necessary rights, or they recall all the products. To me it's that simple.

Edited by Nadar

Your post should have stated that and also included the links. Given that CL reports both CoolIT and Asetek hardware, it is clear that the NDA situation is not as simple as you seem to think. I feel such statements as the ones you made are pointless unless they are made by Corsair employees.

 

You totally failed to address

 

As regards your incorrect statement about my knowledge, yet again you need to do your research before posting. I assume you did not include a link to justify it because there is no post of mine to link to; then again, many of your posts fail to have appropriate links.

 

 

Here is the link as requested:

 

http://forum.corsair.com/v3/showthread.php?t=135179

 

Red-ray quoted " That is what I expected would be happening and eventually the system will be low on memory and things will be sluggish. All you can do is wait for Corsair to fix their code and in the meantime exit and restart CL one in a while ".

 

Care to explain to the Forums how the Corsair LINK is related to the stuttering?

 

The issue was related to " Core Parking ", so it appears you were incorrect once again.


Here is the link as requested:

 

http://forum.corsair.com/v3/showthread.php?t=135179

 

Red-ray quoted " That is what I expected would be happening and eventually the system will be low on memory and things will be sluggish. All you can do is wait for Corsair to fix their code and in the meantime exit and restart CL one in a while ".

 

Care to explain to the Forums how the Corsair LINK is related to the stuttering?

 

The issue was related to " Core Parking ", so it appears you were incorrect once again.

I have over 600 hours of play time in BF4 and have heard the "core parking" claim many times. I've yet to spot any difference whatsoever, and consider this more of an urban myth than a real issue. It seems very unrealistic to me that Windows would, even if it could, park cores while they were in use.

 

That aside, CL's high CPU and RAM usage could easily create lag conditions on a not-too-powerful computer. That CL has memory leaks is simply a fact; I established that a long time ago. Just let it run, and it will eventually crash. I think the longest I've had it run in one go is somewhere around 48 hours. Another thing: such a small program, with so few tasks, should only use a very few CPU cycles once in a while to check on things, if it were made correctly. It shouldn't be able to reach 1% on all but the very weakest CPUs, imo.

 

What strikes me is that, as far as I could read, nowhere in the linked thread did it say that disabling core parking solved the problem. Yet you concluded that this was the problem, despite CL using resources like a mad bat out of hell.

 

This discussion is so off topic for this thread that it constitutes spam. Is there a point to your argument, other than that we are stupid for wanting the USB protocol specification?


I have over 600 hours of play time in BF4 and have heard the "core parking" claim many times. I've yet to spot any difference whatsoever, and consider this more of an urban myth than a real issue. It seems very unrealistic to me that Windows would, even if it could, park cores while they were in use.

 

That aside, CL's high CPU and RAM usage could easily create lag conditions on a not-too-powerful computer. That CL has memory leaks is simply a fact; I established that a long time ago. Just let it run, and it will eventually crash. I think the longest I've had it run in one go is somewhere around 48 hours. Another thing: such a small program, with so few tasks, should only use a very few CPU cycles once in a while to check on things, if it were made correctly. It shouldn't be able to reach 1% on all but the very weakest CPUs, imo.

 

 

 

You are incorrect in assuming this is an urban myth and that Core Parking has no effect in games. Obviously you are not aware of the core parking feature built into Windows, which can be traced back to Windows 7.

 

With 600 hours in Battlefield I would expect someone to know the difference between stuttering and lag. Everyone knows the net code is screwed up in Battlefield; however, that has nothing to do with the game stuttering.


This is going even further off topic...

 

You are incorrect in assuming this is an urban myth and that Core Parking has no effect in games. Obviously you are not aware of the core parking feature built into Windows, which can be traced back to Windows 7.

 

With 600 hours in Battlefield I would expect someone to know the difference between stuttering and lag. Everyone knows the net code is screwed up in Battlefield; however, that has nothing to do with the game stuttering.

You can't simply state that I'm incorrect; you can state that you think I'm incorrect. There's no absolute proof available here. I'm aware of the core parking feature, and I explained why I think it's very unlikely to cause issues.

 

There are so many names for different kinds of lag; I consider "lag" to be the umbrella term for them all. It seems to me that you think lag only applies to network-induced lag, but that's certainly not the way it's been used traditionally.

 

"The netcode" argument is another one of those urban myths as I see it. The behaviour is by design, it's all due to client hit detection and lag compensation - the result of which gives what many that doesn't understand how it works and only see the symptoms call "bad netcode". Any btw, 600 hours is just BF4. In Battlefield in total I have several thousand hours.


This is going even further off topic...

 

 

You can't simply state that I'm incorrect; you can state that you think I'm incorrect. There's no absolute proof available here. I'm aware of the core parking feature, and I explained why I think it's very unlikely to cause issues.

 

"The netcode" argument is another one of those urban myths as I see it. The behaviour is by design, it's all due to client hit detection and lag compensation - the result of which gives what many that doesn't understand how it works and only see the symptoms call "bad netcode". Any btw, 600 hours is just BF4. In Battlefield in total I have several thousand hours.

 

 

It's a known fact that core parking creates in-game stuttering on Intel Core i7 processors with HT dating back to Sandy Bridge, and it also affects AMD FX processors. Most games only take advantage of 2 logical cores no matter how many cores the processor has. The stuttering arises when Windows keeps enabling and disabling cores every few seconds while trying to match the load demand.

 

I would recommend pulling up a YouTube video showing stuttering; it has nothing to do with the lag issue.

 

Your assumption that the net code problem is an urban myth is laughable; DICE seriously dropped the ball on this. Look no further than the introduction of the Network Smoothing Factor.

 

As for the claim that Corsair LINK is creating stuttering in game, that is just factually incorrect. How can a machine be underpowered when CPU demand is 50% and memory usage is 3.5-4.0 GB of 8.0 GB under full system load while gaming?

Edited by StealthGaming

To all: You may want to provide links to proof to back up your hypotheses.

It's hard to prove a negative.

 

It's a known fact that core parking creates in-game stuttering on Intel Core i7 processors with HT dating back to Sandy Bridge, and it also affects AMD FX processors. Most games only take advantage of 2 logical cores no matter how many cores the processor has. The stuttering arises when Windows keeps enabling and disabling cores every few seconds while trying to match the load demand.

I'll try to explain once more: while core parking is a real issue, especially with the AMD Bulldozer architecture, many in the gaming community seem to think it is the sole cause of the type of lag called stuttering. That is what I consider the "myth" part. The fact is that stuttering can be caused by a whole array of different things related to the game machine's resources. Lag can be divided into two main branches: one originating from high network or server latency, and one originating from insufficient computer resources (e.g. CPU, GPU, RAM, motherboard bus transfer rate). While core parking CAN cause such issues on some platforms, it's unlikely to be the cause except on the AMD Bulldozer architecture, where Windows' CPU scheduler has a bug. That bug can be fixed by installing hotfix KB2646060. Core parking is otherwise unlikely to cause issues because, by design, it only parks cores when the CPU is under light load, which is not typical of a gaming situation. You can read more about it here.

 

When it comes to your assumption that games use only two cores, you're wrong (or rather, it's oversimplified). Games, like any software, don't "use" cores. The software simply asks Windows for CPU time, and the Windows CPU scheduler assigns CPU time from whatever core it sees fit. The catch is that each thread carries its own state, which can't easily be moved between cores mid-work. To use multiple cores effectively, the software therefore has to be written so that the CPU load is distributed over several threads. From the programmer's perspective it's easier to just let everything, or at least all the heavy computing, happen in one thread, since you don't have to deal with semaphores and shared memory. So, saying that badly written software uses just one core can be true, but how multiple threads are spread across multiple cores is entirely up to Windows.
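To make that concrete, here's a toy sketch (nothing to do with CL or any game engine): the scheduler can only spread the load if the program splits the work into independent units. Worker processes are used here because CPython threads don't execute bytecode in parallel.

# Toy sketch: the OS can only spread load across cores if the program hands
# it independent units of work. Worker processes are used because CPython
# threads don't run bytecode in parallel.
from concurrent.futures import ProcessPoolExecutor

def heavy(n):
    return sum(i * i for i in range(n))

def single_threaded(jobs):
    return [heavy(n) for n in jobs]       # everything lands on one core

def parallel(jobs):
    with ProcessPoolExecutor() as pool:   # scheduler spreads the workers
        return list(pool.map(heavy, jobs))

if __name__ == "__main__":
    print(parallel([2_000_000] * 8))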

 

I would recommend pulling up a YouTube video showing stuttering; it has nothing to do with the lag issue.

 

As pointed out before, stuttering is just one "type" of lag.

 

Your assumption that the net code problem is an urban myth is laughable; DICE seriously dropped the ball on this. Look no further than the introduction of the Network Smoothing Factor.

I'm not saying that the symptoms attributed to "bad net code" are a myth; the myth is that they're somehow caused by Dice's inability to write a bug-free "net code". As stated above, the issues are BY DESIGN, in the sense that when you combine client-side hit detection with prediction algorithms and factor in latency (client, network and server), you WILL see such issues. There's nothing Dice can do to "fix" that other than redesign the whole system; all they can do is fine-tune the experience (which is what they do by introducing e.g. the smoothing factor or increasing server tick rates). Increasing server tick rates reduces server latency, so it will lessen the symptoms somewhat, depending on how big a part of the total latency comes from the server. Adjusting the "network smoothing" is simply a way to adjust how much prediction the client does; the setting has been there the whole time, Dice just exposed it so users can tweak it themselves.

This whole mess started back in BF2 when they introduced prediction. The funny thing, for those of us who remember the time before that, is that the whining about the net code was just as bad then, but the symptoms were different. Back then you had to lead your shots more, to compensate for latency, and high-pingers would suffer badly. It also meant that snipers wouldn't always get a kill even though they had the enemy in the crosshairs, simply because what they saw was somewhat outdated information. To remedy this, client-side hit detection and prediction were introduced, leading to all kinds of strange behaviour, since the client both predicts where something is about to move AND decides whether it's a hit, meaning that you can be killed for being somewhere you never have been.

 

In short, I agree that Dice got it wrong; I just disagree that the problem is "badly written net code". The problem is listening to whining snipers who don't always get their kills, and ruining the game for everyone else. It's "by design".

 

As for the claim that Corsair LINK is creating stuttering in game, that is just factually incorrect. How can a machine be underpowered when CPU demand is 50% and memory usage is 3.5-4.0 GB of 8.0 GB under full system load while gaming?

As explained above, it's a bit more complicated than that. Badly written software, like CL, often doesn't respect the CPU scheduler and hand back its CPU shares when they aren't needed (it burns them in waiting loops, for example). That leads the scheduler to think the software needs more CPU shares and to divert more resources to it, which it of course again just wastes on some waiting loop. This will lead to one or more cores (depending on threading; my guess is that CL is not threaded) reaching 100% utilization that is really just wasted doing nothing. Any threads unlucky enough to live on the same core will be severely starved of resources. Because of semaphores, other threads in the same application will often end up waiting for the CPU-starved threads to release their locks, and everything slows down a lot. Simply looking at total CPU utilization is too simplistic; for that number to mean anything, you have to assume that all software respects the scheduler and only uses the shares it needs. It's therefore very possible for CL to severely slow down a computer even though there seem to be CPU resources available. I see the same thing happening on my computers running CL all the time: the whole system becomes slow, and closing (not minimizing) CL is what resolves the issue.
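To illustrate the difference (a toy sketch, not CL's actual code): both loops below sample a sensor once a second, but the first spins flat out between samples and pegs a core, while the second hands its timeslice back to the scheduler.

# Toy illustration: two ways to poll a sensor once a second.
import time

def poll_busy(read_sensor, interval=1.0):
    last = 0.0
    while True:
        now = time.monotonic()
        if now - last >= interval:      # spins at 100% CPU between samples
            read_sensor()
            last = now

def poll_sleeping(read_sensor, interval=1.0):
    while True:
        read_sensor()
        time.sleep(interval)            # near-zero CPU between samples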


Your opinion is a vast over-complication of known facts.

 

Moving this debate to the Game section, where a poll will be started; everyone is encouraged to participate.

I'm not interested in discussing this, and never have been; I just tried to explain why badly written software, like CL, can and very likely will have a performance impact even on relatively powerful computers. The netcode discussion was completely irrelevant here, and I admit I should have refrained from commenting on it.


