USB cables and me

I think jitter is a function of the clocks on either end, and not the cable, correct? Regardless, I don't think jitter is an issue in this application, as buffering on either end should more than suffice to get the bits all to their destination in plenty of time.

Yeah, a marginal cable can introduce all kinds of problems. That's true with any kind of cable.

bs

My casual research into this tells me that both synchronous and adaptive USB modes derive the clock from the actual signal on the cable, and slopey signal edges on less-than-ideal cables can cause additional jitter. The adaptive mode supposedly can be made very jitter-resistant, but not jitter-free. The asynchronous mode is the only mode that can be jitter-free, as the destination device always runs off its own fixed clock and controls the speed of the transmission by sending feedback to the host to ensure that receive buffer over/underrun doesn't occur.
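To picture what that asynchronous feedback loop does, here's a toy sketch in Python (all names and numbers are mine, purely for illustration; real USB audio feedback reports a fractional samples-per-frame value rather than a simple plus/minus one):

```python
# Toy model of USB asynchronous audio mode: the receiver (DAC) runs on its
# own fixed clock and periodically tells the host how many samples per
# frame it wants, so its buffer never over- or underruns.
def run_async_mode(host_rate=44100, dac_rate=44144, frames=1000):
    buffer_fill = 220            # samples currently buffered at the DAC
    target_fill = 220            # fill level the DAC tries to hold
    nominal = host_rate // 1000  # nominal samples per 1 ms frame
    request = nominal
    for _ in range(frames):
        buffer_fill += request           # host sends what the DAC asked for
        buffer_fill -= dac_rate / 1000   # DAC consumes at its own clock rate
        # feedback: ask for one sample more/fewer to steer back to target
        if buffer_fill < target_fill:
            request = nominal + 1
        elif buffer_fill > target_fill:
            request = nominal - 1
        else:
            request = nominal
    return buffer_fill
```

Even with the DAC clock running about 0.1% fast here, the buffer stays parked near its target after a full second of frames, which is the whole point: the receiver's own clock times the audio, and the cable's jitter never enters into it.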

Found this puppy: https://www.passmark.com/products/usb3loopback.htm
The price is accessible and looks interesting but it won't report jitter, so not entirely useful for USB audio testing.
 
There are clocks on both ends of a USB circuit.

It is the job of the clock on the sending end to send data at the appropriate bitrate. I believe that the receiver derives the bitrate, not the 'clock', from the incoming signal. There should also be circuitry on the receiving end to factor out any jitter and package up the incoming bits according to that bitrate.

As long as both ends are doing their job, and the receiving end is ending up with the same bits that were sent, then jitter is not a factor. You could have a ton of jitter, but if the receiver can correctly align the incoming bits such that they match what was sent, then it's a non-factor. If there is enough jitter that the receiver can't reassemble the bits properly, then you have a problem.
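A toy model of that idea (illustrative only; a real USB PHY does this in hardware with a PLL, not like this): the receiver samples each bit at the middle of its nominal period, so edge jitter below half a bit period never changes the recovered bits:

```python
import random

# Sender: each bit's edge lands near its nominal position, displaced by
# random jitter. Receiver: sample the line level at the center of each
# nominal bit period. With jitter < half a bit period, recovery is exact.
def transmit(bits, bit_period=1.0, jitter=0.2, seed=42):
    """Return a time-ordered (time, level) edge list with jittered edges."""
    rng = random.Random(seed)
    return [(i * bit_period + rng.uniform(-jitter, jitter), b)
            for i, b in enumerate(bits)]

def receive(edges, nbits, bit_period=1.0):
    """Sample at the center of each bit period."""
    out = []
    for i in range(nbits):
        t_sample = (i + 0.5) * bit_period
        level = 0
        for t, b in edges:          # level = value of last edge before sample
            if t <= t_sample:
                level = b
        out.append(level)
    return out
```

Run it with any bit pattern and the received bits match the sent bits exactly, despite every edge being displaced: jitter within the tolerance window really is a non-factor for the data.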

bs
 

Jitter correction is generally not bit-accurate, and excess jitter on the receiving end may lead to additional information loss. Only in asynchronous mode should there be no need for jitter correction, as the receiver can control how much data it receives, keep its receive buffer filled as needed, and send the bits exactly as received on to the DAC chip.
 

Ok, but again, unless there's some actual problem with a cable, connection, or hardware on either end, jitter is not usually a large factor in USB audio. And if you do drop a bit, there's still a CRC check and a retransmission, right? I haven't worked at the bits and bytes level of USB for a long time now (1.0/1.1), but even back then there were plenty of mechanisms in place to ensure that the bits arrived intact and in proper order.
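For reference, USB data packets do carry a CRC-16 (polynomial x^16 + x^15 + x^2 + 1). A plain software version of the check looks roughly like this (a sketch, not production code):

```python
# CRC-16 as used on USB data packets (polynomial 0x8005; this bit-reflected
# version uses 0xA001). The receiver runs the same calculation over the
# payload, and a mismatch marks the packet as corrupt.
def crc16_usb(data: bytes) -> int:
    crc = 0xFFFF                           # seeded with all ones
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001  # reflected 0x8005
            else:
                crc >>= 1
    return crc ^ 0xFFFF                    # complemented before transmission

payload = b"\x01\x02\x03\x04"
good = crc16_usb(payload)
corrupted = crc16_usb(b"\x01\x02\x03\x05")  # one bit flipped in transit
# good != corrupted, so the receiver detects the error; for bulk transfers
# the packet would then be re-sent
```

Whether a re-send actually happens depends on the transfer type, which is where the isochronous discussion below comes in.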

bs
 

There is no re-transmission in Isochronous Transfer mode; the best the receiver can do is CRC-check, drop broken sample(s), and interpolate. It seems that USB jitter is actually a problem, or at least was in the early days. Now that the problem is well understood there are solutions to deal with it. Still, less jitter to begin with is always better, I would think.
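A minimal sketch of that drop-and-interpolate concealment (purely illustrative; real receivers can be more sophisticated about it):

```python
# In isochronous mode a packet that fails its CRC cannot be re-sent, so one
# fallback is to replace the known-bad sample with an estimate interpolated
# from its good neighbours.
def conceal(samples, bad_index):
    """Replace one bad sample with the average of its neighbours."""
    prev = samples[bad_index - 1] if bad_index > 0 else samples[bad_index + 1]
    nxt = samples[bad_index + 1] if bad_index < len(samples) - 1 else prev
    patched = list(samples)
    patched[bad_index] = (prev + nxt) / 2
    return patched

# e.g. a corrupted third sample in a smooth ramp is patched back to the ramp
fixed = conceal([0.0, 0.5, 99.0, 1.5], 2)
```

That's audibly transparent for an isolated sample, but it is not bit-accurate, which is exactly why a clean channel still matters in this mode.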
 

I'm a little confused. Who uses pure isochronous mode to transmit audio (or any kind of data)? Doesn't somebody use isochronous transfer, along with the control and interrupt functions, to implement synchronous or asynchronous communications?

Fully agree that less jitter is always better. Less problems of any kind are always better.

bs
 

http://www.xmos.com/fundamentals-usb-audio
All audio data is transferred over isochronous transfers; interrupt transfers are used to relay information regarding the availability of audio clocks; control transfers are used to set volume, request sample rates, etc.
 
Interesting read. But the isochronous, control, and interrupt mentioned are the raw USB interfaces, or building blocks. If you scroll further down, they talk about sending extra sample data (8 extra samples per second). Doesn't this indicate that something at the receiving end is doing something with those samples to ensure data accuracy?

bs

I think what this means is that the sender can safely stuff 8 extra samples per second and not cause an overflow (or rather a buffer overrun) on the receiving end.
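One way to picture the accounting (assuming 44.1 kHz audio and 1 ms USB frames; these numbers are mine for illustration, not the article's exact figures): a fractional sample rate becomes a pattern of whole-sample frame sizes, with the occasional frame carrying one extra sample:

```python
# 44.1 kHz over 1 ms frames means 44.1 samples per frame, which isn't a
# whole number. An integer accumulator decides which frames carry one
# extra sample so the long-run average hits the true rate exactly.
def frame_sizes(sample_rate=44100, frames=1000):
    sizes, acc = [], 0
    for _ in range(frames):
        acc += sample_rate     # accumulate in samples-per-second units
        n = acc // 1000        # whole samples that fit in this 1 ms frame
        acc -= n * 1000        # carry the fractional remainder forward
        sizes.append(n)
    return sizes

sizes = frame_sizes()
# one second of frames carries exactly 44100 samples,
# as a mix of 44- and 45-sample frames
```

The receiver's buffer absorbs the frame-to-frame variation; no sample is ever surplus or discarded, it just arrives slightly early relative to a constant-size schedule.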
 

Sure, but what happens with those extra samples? The fact that you're sending any extra data indicates that the receiver is not blindly passing bits to its DAC; it must be doing something with all of the incoming data before passing any of it along, otherwise the extra sample data would disrupt the audio signal. I've got to think that part of what's being done with the extra sample data is some level of integrity check, to make sure that what you're getting is what was sent.

Now I'll have to go looking to see if there's anything depicting the data and control logic for an actual audio app.

bs
 
I think it is actually blindly passing bits to its DAC, as the receiver clock frequency is the clock frequency of the DAC.
 

I don't see how it can, if there's extra sample data being sent, something has to at least make a decision to toss that.

Time for some reading. I'm obviously out of date on this. Interesting discussion, btw.

bs
 

No, the receiver is the master; if the sender has put in an extra sample, it means the receiver needs it, as its clock runs slightly faster than the sender's.
 
I thought I'd share, feel free to laugh and humiliate me.

There's definitely no laughter from me... only an acknowledging nod.

Both the Curious USB cable and a DIY effort with solid silver data lines (shielded) and a separate solid copper ground line (shielded) but no +5V line (as it is not required in my application) sound better than the generic USB cables that I have used in the past. It's not a huge difference, but readily detectable. Once you've used the 'better' cables for a while, if you switch back to the generic cables, it's easier to detect the difference IMHO.

I've also compared these two cables against the solid USB A to B coupler that came with my Uptone Regen (and was touted as being better than any cable at the time) and the coupler is better than a generic cable, but not as good as the others.
 

My experience as well. I'm skeptical about everything and USB cables were no exception.

I use Uptone's products with Curious Cables. They sound better than the silver Pangea cable I had prior. The Pangea and the Curious Cables sound better than all the cheap USB cables I have laying around.

Night and day better? No. However, it is undeniable to me that there IS a difference in sound quality. The Curious Cables have a fuller, richer and smoother sound to them. Sad....but....true. The Pangea had the same qualities, but not to the extent of the Curious Cables.

Right now I'm checking out the USPCB hard adaptors from Uptone and so far they are promising. This weekend gonna do some comparisons.

Guys that keep talking about 1's and 0's.......there's gotta be more to this. Too many people are claiming they are hearing the same things/differences. Perhaps going a little overboard, though.
 

TBH, I'm the kid in Science class in middle school when the teacher demonstrated optical illusions and none of them worked on me. ( The lines are all the same length, what staircase? Don't see faces in clouds either. ) I've spent my professional career working with 1's & 0's, a lot. If the USB cables are made up to spec, regardless of what they cost or the materials they are made of, they will "sound" the same in a given component chain. If there is a difference in sound then one of the cables is out of spec.

At the end of the day if it sounds good to you, and you can afford the cost if the cables are expensive, then it isn't worth it to argue about.

Mark Gosdin
 
Speaking of specs, I scanned through the USB 2.0 specification and I'd say 25% of the document is devoted to error detection, recovery, re-transmits, etc., and the document was written assuming the transmission media that is within the spec. From that, even being "within the spec", doesn't necessarily mean error free to me.
 
Ok..I will start and say I'm also in the class that thinks digital is digital...and any differences in quality are down to how much you want to hear them.

I look at it from this perspective:

For starters, USB data transmission is a balanced signal over a twisted pair that is further shielded. The shielding on the cable should be sufficient to effectively isolate it from any noise getting inside the cable. On top of that, the balanced signal over a twisted pair makes for a balanced transmission line, so any noise that does get into the system is going to be common to both conductors and will effectively be rejected. Of course, I know for a fact that no two USB cables are the same, and that some are total garbage compared to others.
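That common-mode rejection is easy to demonstrate with a toy model (illustrative Python, not real signal processing): noise that lands equally on both wires of the pair disappears when the receiver takes the difference:

```python
# Differential signaling in miniature: the signal is driven onto D+ and
# inverted onto D-. Noise couples identically onto both wires, so the
# receiver's subtraction cancels it and recovers the original signal.
def differential_receive(signal, noise):
    d_plus = [s + n for s, n in zip(signal, noise)]    # noise couples onto D+
    d_minus = [-s + n for s, n in zip(signal, noise)]  # same noise onto D-
    return [(p - m) / 2 for p, m in zip(d_plus, d_minus)]

recovered = differential_receive([1, -1, 1, 1], [1, -2, 3, 1])
# recovered equals the original signal; the injected noise is gone
```

Real cables only approximate this, of course: imperfect twist and unequal coupling leave a residue, which is one reason build quality can still matter.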

So...I'm a ham radio operator. For the last year or so, when I think about cables it's typically coax and transmission lines...and since I'm actually throwing some power into them, I have to pay attention to things like shielding. But USB cables come into play because I operate digital modes over HF/shortwave...which requires that I have a way for my PC to trigger my radio to transmit. 99% of software is set up to use serial ports, so I have a USB-to-serial dongle plugged up to my machine. (BTW, the software basically pulls the RTS line to its "high" state, which I feed through an opto-isolator to trigger my rig's transmit line.) Let me tell you...HF/shortwave HATES our modern world. If you think RFI from your PC getting into your audio stream would be a bad thing...how about the neighbor a quarter mile down the road with a plasma TV making your life hell? You quickly find out how cheap all your switching power supplies and electronics actually are by how much noise your radio puts out when you plug them in. I have actually made people in this house replace laptops because the model they had was emitting so much RFI I couldn't work stations. I've been lucky in that I've found the right power supply for my computer, with the right set of components, so my PC doesn't emit any RFI. That was until I got my dongle. Upon plugging that thing in...I was greeted with a very large amount of noise in my radio...which BTW sits maybe 2 ft from my computer, on a different wall.

The shielding on that cable was maybe 4 strands of wire wrapped around the bundle...but even worse, it was floating on both ends! There was no connection to the shell of the USB connector...so it was not grounded and was not effectively working as a shield. When I cut the board out of its moulded rubber housing...I replaced the cable with a Monoprice USB cable I'd clipped the end off of...the main difference was it had an actual braid of wire over everything, connected to the USB shell. Guess what? No more noise! And in fact...when shielding is done properly...it's a Faraday cage around your wires. And we all should know what a Faraday cage does.

So if this 12 MHz oscillator's leakage is prevented by a proper shield on the USB cabling...I have to wonder how strong a signal you'd have to be next to in order to actually get in and cause interference in the first place. Plus...that interference would be common-mode and rejected due to the data's balanced nature.

No data transmission is error-free!
That's something that needs to be remembered. You will never avoid errors...all you can do is build in mechanisms to cope with them in a sane amount of time. In fact, it's a bad idea to build any digital transmission system without at least basic error checking.

But I'll leave with this. I listen to a lot of DSD on my DAC...and that data is pushed to the DAC using what's known as "DSD over PCM" (DoP). This is not a conversion to PCM, as everyone incorrectly assumes (or likely wants to assume) it is. This is merely repacking DSD bits inside a WAV/PCM container. DTS famously did this on Laserdisc and for the DTS-CD format. This is done because there was no standard provision for sending DSD data to a DAC...so when it was done, it was done in some weird way that required special software. Even worse, some systems...like OSX...won't work with anything *except* a PCM stream. So the chipset designers/manufacturers got together and came up with the DoP standard. Instead of 24 bits of PCM making up each sample, you have an 8-bit DoP marker plus 16 bits of DSD packed in order. All of your underlying software thinks it's working with a 176.4 kHz PCM stream...but the USB controller for your DAC detects that it is DoP...then tells the DAC to go into DSD mode.
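As I understand the DoP 1.0 framing (a sketch from memory, so treat the exact details as an assumption): each 24-bit sample slot carries an alternating 8-bit marker plus 16 DSD bits, and the DAC watches for the marker sequence:

```python
# Sketch of DoP (DSD over PCM) framing: each 24-bit "PCM" sample slot
# carries an 8-bit marker (alternating 0x05 / 0xFA) in the top byte plus
# 16 DSD bits in the lower two bytes. The DAC spots the alternating marker
# sequence and switches to DSD mode; any modification to the stream
# destroys the markers, which is why non-bit-perfect paths break DoP.
def pack_dop(dsd_bytes):
    """Pack pairs of DSD bytes into 24-bit DoP sample words."""
    words, marker = [], 0x05
    for i in range(0, len(dsd_bytes) - 1, 2):
        words.append((marker << 16) | (dsd_bytes[i] << 8) | dsd_bytes[i + 1])
        marker = 0xFA if marker == 0x05 else 0x05  # markers alternate
    return words

def is_valid_dop(words):
    """A DAC-side check: do the top bytes alternate 0x05/0xFA?"""
    markers = [w >> 16 for w in words]
    return all(m in (0x05, 0xFA) for m in markers) and all(
        a != b for a, b in zip(markers, markers[1:]))
```

Flip even one bit in a marker byte and `is_valid_dop` fails, so the DAC falls back to treating the stream as (very loud, very wrong) PCM; that's the "canary in the coal mine" property being described here.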

DSD is vastly different from PCM. A corrupted sample in a PCM file just corrupts that sample. If bits get wonky in a DSD stream...forget it...it has the ability to screw up the rest of the audio. DSD stores changes in the waveform, whereas PCM stores samples of the waveform's amplitude.

If your system doesn't do bit-perfect output to your DAC...DoP will not work. *Any* modification done to a DoP stream no longer makes it a valid DoP stream. If my cheap USB cable was causing degradation to the degree it would be audible...my DSD playback would be entirely broken. Since it's not, I know the bits are arriving at the DAC properly and that any error correction done in USB transport has come up with exactly what I should be getting.

And if you're getting those bits properly to your DAC...then the medium in front of it shouldn't matter.
 