Are higher sampling rates ever better?

c.coyle

Fighting the Dunning-Kruger effect.
Interesting article:

The Science of Sample Rates (When Higher Is Better — And When It Isn’t)

"So, if you are ever using a converter and find it sounds dramatically better at a higher rate, don’t get excited about the sample rate. Get suspicious of the design shortcuts instead! Why isn’t the 44.1kHz on that converter up to snuff? How does this converter compare to the best-designed converters when they are set to a lower rate? Is it still better, or does the advantage disappear?"
 
First, it is a blog post, not an article. Second, he quotes no other articles and offers no proof. Third, he has a bias against high sampling rates, and he has stated that elsewhere. His post does not prove anything except express his own opinion.

Easy soldier. Just posting an article, er, blog post. And an old one at that, now that I notice.
 
First, it is a blog post, not an article. Second, he quotes no other articles and offers no proof. Third, he has a bias against high sampling rates, and he has stated that elsewhere. His post does not prove anything except express his own opinion.

I just read the article. I found several quotes/references/citations that refer to others' comments, and links are provided for some of those.

I found that he was not being dogmatic at all about what he was trying to explain/illustrate.

I think the article is much more objective than subjective, contrary to what you intimated.
 
There is some confusion about what Nyquist said. The Nyquist rate is a minimum sampling rate; it is fine to go above it.

You also have to keep in mind that the limitations of current software and hardware are a different issue from what can be done with the DSP equations. The equations have been around longer than any of this relatively new DSP hardware. The new hardware will continue to improve over time and allow better sampling at higher rates.
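To make the Nyquist point concrete, here is a minimal Python sketch (frequencies chosen only for illustration) of why content above half the sampling rate is unusable: a 30 kHz tone sampled at 44.1 kHz produces exactly the same sample values as a 14.1 kHz tone, so the converter must band-limit its input first.

```python
import math

fs = 44100.0           # sampling rate in Hz
f_high = 30000.0       # tone above fs/2 (Nyquist frequency = 22050 Hz)
f_alias = fs - f_high  # 14100 Hz -- where the high tone "folds" down to

# Sample both tones at the same instants t = n / fs
high_tone  = [math.cos(2 * math.pi * f_high  * n / fs) for n in range(64)]
alias_tone = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(64)]

# The sample sequences are identical: once sampled, the two tones
# can no longer be told apart.
max_diff = max(abs(a - b) for a, b in zip(high_tone, alias_tone))
print(max_diff)  # effectively zero (floating-point rounding only)
```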

Your system has to be able to play back the higher rates or it will not matter that they are there. If your system cannot play back a higher rate it will chuck out all the extra information and play back at the highest rate it can support. A 24-bit/96 kHz recording will only play back at 16-bit/44.1 kHz if that's all your system can support. Don't expect it to do things it was not designed to do.

When thinking about digitizing a signal (quantization), keep a Riemann sum in mind. If storage space and processing power were not the limiting factors, more information would always be better.
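As a toy sketch of that Riemann-sum intuition (illustration only, with arbitrary bit depths): quantizing the same sine wave at higher bit depths shrinks the worst-case quantization error, so the stored data tracks the signal more closely.

```python
import math

def quantize(x, bits):
    """Round x (in the range -1.0..1.0) to the nearest of 2**bits levels."""
    step = 2.0 / 2 ** bits          # LSB size over a +/-1 span
    return round(x / step) * step

signal = [math.sin(2 * math.pi * t / 1000) for t in range(1000)]

for bits in (4, 8, 16):
    err = max(abs(s - quantize(s, bits)) for s in signal)
    print(bits, err)  # worst-case error is about half an LSB, shrinking as bits grow
```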


I think the more important number to be concerned with is how many bits can be accurately resolved by the system. More bits will give you a finer resolution of the signal volume and give you a more fluid dynamic range.
 
More bits will give you a finer resolution of the signal volume.

Nope. Each bit represents EXACTLY the same volume level, whether there are 16 or 24 of them. More bits do NOT give "finer resolution". The only advantage of 24 bits is that it gives the producer more slop room while recording and mixing down. 16 bits gives a far more than adequate noise floor in any listening environment, particularly with the heavy dynamic-range compression used in so many of today's recordings.

Suggest you go read/watch Monty Montgomery's excellent tutorials to learn how digital actually works...
 
Nope. Each bit represents EXACTLY the same volume level, whether there are 16 or 24 of them. More bits do NOT give "finer resolution".

Each bit is a 1/(2^n) increment. If n = 16 we get an increment of 1/(2^16). If n = 24 we get an increment of 1/(2^24).
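Spelled out numerically, as a quick Python check of the increments just described:

```python
step16 = 1 / 2 ** 16  # 0.0000152587890625
step24 = 1 / 2 ** 24  # 0.000000059604644775390625

# Each extra 8 bits divides the step size by 2^8 = 256
print(step16 / step24)  # 256.0
```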
 
Each bit is a 1/(2^n) increment. If n = 16 we get an increment of 1/(2^16). If n = 24 we get an increment of 1/(2^24).

Sorry, doesn't work that way. Each bit represents a fixed voltage; 24 bits simply gives more dynamic range. 24 bits represents the SPL difference between a needle dropped on a carpet and a 747 in your living room. Way more than any recording will ever use.

"So, 24bit does add more 'resolution' compared to 16bit but this added resolution doesn't mean higher quality, it just means we can encode a larger dynamic range. This is the misunderstanding made by many. There are no extra magical properties, nothing which the science does not understand or cannot measure. The only difference between 16bit and 24bit is 48dB of dynamic range (8bits x 6dB = 48dB) and nothing else. This is not a question for interpretation or opinion, it is the provable, undisputed logical mathematics which underpins the very existence of digital audio."

"So, can you actually hear any benefits of the larger (48dB) dynamic range offered by 24bit? Unfortunately, no you can't. The entire dynamic range of some types of music is sometimes less than 12dB. The recordings with the largest dynamic range tend to be symphony orchestra recordings but even these virtually never have a dynamic range greater than about 60dB. All of these are well inside the 96dB range of the humble CD. What is more, modern dithering techniques (see 3 below), perceptually enhance the dynamic range of CD by moving the quantisation noise out of the frequency band where our hearing is most sensitive. This gives a perceivable dynamic range for CD up to 120dB (150dB in certain frequency bands)."

http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded
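The "6 dB per bit" arithmetic in the quoted passage is easy to verify: the dynamic range of an N-bit format is 20·log10(2^N) dB, i.e. about 6.02 dB per bit. A quick Python sketch:

```python
import math

def dynamic_range_db(bits):
    # Ratio between full scale and one quantization step, in dB
    return 20 * math.log10(2 ** bits)

print(dynamic_range_db(16))                          # ~96.3 dB (the "96 dB of CD")
print(dynamic_range_db(24))                          # ~144.5 dB
print(dynamic_range_db(24) - dynamic_range_db(16))   # ~48.2 dB of extra range
```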
 
Hmmm ...

I think part of my problem is that I think of dynamic range as an absolute range, a range of values (volts, etc.) between which no clipping occurs. I wasn't thinking of it as a ratio-type value (dB). So by increasing the bit depth you're not pushing the ceiling up by a factor of 2^(24 - 16); you're pushing the self-generated noise "floor" down by the same factor.
 
Sorry, doesn't work that way. Each bit represents a fixed voltage; 24 bits simply gives more dynamic range. 24 bits represents the SPL difference between a needle dropped on a carpet and a 747 in your living room. Way more than any recording will ever use.

"So, 24bit does add more 'resolution' compared to 16bit but this added resolution doesn't mean higher quality, it just means we can encode a larger dynamic range. This is the misunderstanding made by many. There are no extra magical properties, nothing which the science does not understand or cannot measure. The only difference between 16bit and 24bit is 48dB of dynamic range (8bits x 6dB = 48dB) and nothing else. This is not a question for interpretation or opinion, it is the provable, undisputed logical mathematics which underpins the very existence of digital audio."

"So, can you actually hear any benefits of the larger (48dB) dynamic range offered by 24bit? Unfortunately, no you can't. The entire dynamic range of some types of music is sometimes less than 12dB. The recordings with the largest dynamic range tend to be symphony orchestra recordings but even these virtually never have a dynamic range greater than about 60dB. All of these are well inside the 96dB range of the humble CD. What is more, modern dithering techniques (see 3 below), perceptually enhance the dynamic range of CD by moving the quantisation noise out of the frequency band where our hearing is most sensitive. This gives a perceivable dynamic range for CD up to 120dB (150dB in certain frequency bands)."

http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded

With CD resolution the noise floor is about -130 dB. 24 bits can potentially bring it down a further 48 dB. That means that with 24 bits the quantization noise will be below the thermal noise of any circuit at room temperature. This is exactly what we need: negligible quantization error. So the advantage of using 24 bits over 16 bits is clear. Modern technology makes 20 bits of resolution rather cheap, and 22 bits is still achievable (though expensive). Why not use what is readily available? If full scale was not used during recording (a common case with any live recording), no resolution is lost even if the peaks were 12 dB below full digital level.
 
Nope. Each bit represents EXACTLY the same volume level, whether there are 16 or 24 of them. More bits do NOT give "finer resolution". The only advantage of 24 bits is that it gives the producer more slop room while recording and mixing down. 16 bits gives a far more than adequate noise floor in any listening environment, particularly with the heavy dynamic-range compression used in so many of today's recordings.

Suggest you go read/watch Monty Montgomery's excellent tutorials to learn how digital actually works...


Thanks, but I earned a living working on DSP hardware for more than 10 years so I do know a little bit about how it works. :)

I'm not sure where you got your information but what I'm talking about is the quantization of a raw signal source into a digital representation.

In electrical hardware the full-scale range, FSR, of a converter is selected on an ADC chip or chosen by the EE designing the circuits. That FSR is then chopped up according to the number of bits available on that particular ADC board. Each bit then represents one small voltage slice of the FSR. A single bit may represent a different voltage depending on the FSR and bit depth of different ADC systems.

Given the same FSR in the same system, greater bit depth gives you finer resolution of the signal. It gives you more quantization levels and a more accurate reading of the voltage.
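As a concrete illustration of the FSR argument, here is the LSB voltage for a hypothetical 5 V full-scale range (the 5 V figure is assumed, not taken from any particular ADC):

```python
FSR = 5.0  # hypothetical full-scale range in volts (design-dependent)

for bits in (16, 24):
    lsb = FSR / 2 ** bits  # voltage represented by one code step
    print(bits, lsb)
# 16 bits -> ~76.3 microvolts per step
# 24 bits -> ~0.298 microvolts per step: same FSR, 256x finer slices
```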

Here is a simple Wiki blurb... don't make me bore you with ADC converter data sheets. :boring: :D

Go to the part that says RESOLUTION. :thumbsup:

https://en.wikipedia.org/wiki/Analog-to-digital_converter
 
In standard audio applications, each bit represents approximately 6 dB. Adding more bits doesn't slice the incoming audio signal more finely, it merely allows greater dynamic range. 24 bits doesn't slice into 3 dB or 2 dB per bit; it's still 6 dB. Anything more than the 120 dB available by dithering and noise shaping 16 bits is really unnecessary, and will make zero discernible difference in the reproduction chain.

https://blogs.msdn.microsoft.com/au...why-32-bits-per-sample-should-never-catch-on/
 
Increasing the bit depth pushes down the self-generated quantization noise floor. And for_p1 said that this self-generated digital noise can get pushed down below the ambient analog noise floor, where you don't really care about it anymore. Cool.

However, above this pushed-down noise floor don't we have a lot more "clean" bits than we did before? IOW, by increasing the bit depth didn't we increase measurement accuracy (resolution) by reducing measurement error?
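One standard way to quantify that intuition: quantization error behaves like noise with an RMS level of one step divided by √12, so each extra bit halves the measurement error. A small Python sketch, using uniform random samples as a stand-in for a busy signal:

```python
import math
import random

random.seed(0)
samples = [random.uniform(-1, 1) for _ in range(100000)]

for bits in (8, 16):
    step = 2.0 / 2 ** bits
    errs = [s - round(s / step) * step for s in samples]
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    # Measured RMS error closely matches the theoretical step / sqrt(12)
    print(bits, rms, step / math.sqrt(12))
```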
 
In standard audio applications, each bit represents approximately 6 dB. Adding more bits doesn't slice the incoming audio signal more finely, it merely allows greater dynamic range. 24 bits doesn't slice into 3 dB or 2 dB per bit; it's still 6 dB. Anything more than the 120 dB available by dithering and noise shaping 16 bits is really unnecessary, and will make zero discernible difference in the reproduction chain.

https://blogs.msdn.microsoft.com/au...why-32-bits-per-sample-should-never-catch-on/

This article is silly. There is a thing called a volume control on the front of your amp! :p
 
In the last sentence of post #6, I interpreted the phrase "More bits" as "More samples".

I bet we would all agree that 16 bits provides more than sufficient dynamic range, for music at least.
 
In the last sentence of post #6, I interpreted the phrase "More bits" as "More samples".

I bet we would all agree that 16 bits provides more than sufficient dynamic range, for music at least.

Not more samples. That would be determined by the sampling rate.

Bit depth gives you a fixed, discrete set of voltage values: 2^16 or 2^24 of them.
 
Let's have a look at one of the best DACs currently made, the Benchmark DAC2. You can pay more (a lot more if you're truly ignorant), but you won't get better performance.

https://benchmarkmedia.com/products/benchmark-dac2-hgc-digital-to-analog-audio-converter

THD+N, 1 kHz at 0 dBFS: -109 dBFS, -109 dB, 0.00035%

So, what happens to all that dynamic range beyond the 120 dB that dithered 16-bit audio provides? Lost in the noise. Then add in the noise floor of the preamp and amp. Oops. Since music requires a maximum of 80 dB of dynamic range, and modern rock/pop is horribly range-compressed to as little as 10 dB, there's simply no point in anything beyond 16 bits. They're just trying to get people to purchase, yet again, the music they already own on LP, tape, CD, digital download...
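The quoted Benchmark spec numbers are internally consistent: a dB figure converts to a percentage via 10^(dB/20). A quick Python check:

```python
thd_db = -109.0
thd_ratio = 10 ** (thd_db / 20)  # amplitude ratio relative to full scale
print(thd_ratio * 100)           # ~0.00035% -- matches the quoted spec line
```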
 
When I'm talking about digitizing a raw signal I'm not talking about re-sampling something that has already been recorded like the Led Zeppelin catalog for the nth time. Not that I don't like LZ. :)

What I have in mind is the recording of a worthwhile performance that will last the ages. If you could go back in time and digitally record a live Beethoven or Mozart performance, would you bring a standard '80s ADC and record at 16-bit/44.1 kHz, or would you try to capture every data point you possibly could? I think I would take the ADC that allowed me to accurately capture the maximum number of data points.
 
On topic:
I read the OP's suggested link. I shook my head and started to think of Monty when he mentioned IM products as a reason to shun high sampling rates. Why do people assume I'm using equipment poor enough to cause severe IMD? The other question is, why do people assume that the ultrasonics causing IM in my playback didn't also cause IM in the path to the digitizer in the first place? Most likely, the mic preamp will have greater IMD, and the resulting audible IM products will be digitized no matter how slowly the ADC operates (that is, 44.1 kHz will capture the audible IM products from the mic pre).

IMO, let's just jump to the end of the game and have 500 kHz digitizers for audio. That avoids a boatload of issues and is better for everyone.

Off topic:
With CD resolution the noise floor is about -130 dB.

Actually, CD is 16-bit. Therefore the dynamic range is 96 dB.
 
Just go with Double DSD and we can avoid this whole argument. 

My big issue is the presumption that 20kHz is the limit of human frequency response. This biological number is the foundation for Nyquist.

But it is based on a test of response to sine waves. I've long held that the human hearing system didn't evolve to detect sine waves but phases, arrival time deltas, and very complex clues about predators, mates, and meals. In other words, biologists are the weak link in digital audio.

None of the above deals with bit depth of course.
 