
djphat2000

macrumors 65816
Jun 30, 2012
1,091
1,130
where are you getting that from?


Which still has to be played back on something, which still faces the problem of components not being able to reproduce the frequencies properly and potentially causing audible distortion


You can't have ultrasonic distortion unless you converted the source material TO something else (in this case 24/192). Generally higher than the source was to begin with, OR converted from analog.

And think for a second. IF 24/192 introduced distortion, that would mean practically every movie (DTS-HD MA or TrueHD) would sound bad. And you would certainly hear it in the movie theater. IMAX anyone? With its 12,000 WATTS!!! WhohooOooohho.
 

LordVic

Cancelled
Sep 7, 2011
5,938
12,458
it all comes down to balance, stereo crosstalk and amplification.
put simply, humans can hear 20Hz to 20,000Hz (20kHz). on average, that's what a healthy human ear can hear.

now, audio data tells a processor (a DAC) to convert that data into an audio signal. the better the processor, the more 1:1 the conversion. various tests are conducted (usually when a new iPod/iPhone is released) that measure how well a DAC translates sound data into actual audio waves, up and down the 20Hz-20,000Hz frequency range.
oftentimes, cheaper processors do a good job in the middle but suck at the high or very low end of the frequency range. the iPhones are consistently good. to my knowledge the 4 > the 4S in this aspect. not sure about the 5 or the 5S, haven't looked.

this gets even more complicated by the existence of stereo sound (2 simultaneous signals). a good DAC is able to completely separate the signals and not let them affect each other. the end result is better positioning - you would be able to clearly identify the location of each sound source as you listen to a well-recorded stereo recording. a bad DAC makes a stereo recording sound more 'mono', and the positioning of each sound source becomes more 'blurred'.

Finally, amplification. Often, more expensive headphones require more electricity to drive. The amplifiers on 'stock' sound cards are often insufficient to drive these headphones and so will produce weak or tinny sound.

the above is a simplification.

thank you.

I'm not an audiophile. But I do have decent hearing. I can generally hear the difference in some music between 192kbps and 256kbps encoding, and can absolutely hear the difference in frequency between, say, 20Hz and 44Hz.

I don't tend to be super picky when I'm mobile. I tend to use really low-end earbuds because, well, when I'm at the gym or sitting at work, audio quality isn't really a concern.

But when I sit at home at my desktop and just want to listen to music, I have an OK set of Bose headphones and would like good, clear definition and separation of sounds. If a separate controller might improve that quality for me, it sounds like it could be a welcome addition.

While it probably won't help my iTunes-purchased music sound that much better, I ripped my own CD collection (about 300 CDs) in an uncompressed format.

Now to just convince the musicians I listen to to end this useless "loudness" war. I listen to metal mostly, and OMG, bands these days are just pumping the loudness up on the masters to the point where it sounds terrible no matter what... if anyone's listened to Metallica's Death Magnetic, you will hear just how bad it's gotten, with significant audio clipping of the dynamic range.
 

csbo

macrumors member
Apr 10, 2014
30
3
High Definition iTunes Music Downloads May Be on the Horizon

You can't have ultrasonic distortion unless you converted the source material TO something else (in this case 24/192).

Yes, you can. In fact, given that upconverted signals are unlikely to have any info in the upper frequencies, it's less likely to happen.

read the article again. The problem is ultrasonic signals causing problems in components that weren't designed to deal with them.
 

Avatar74

macrumors 68000
Feb 5, 2007
1,608
402
Only a fool would buy 24/192 "hi-res" files. It's placebo.

Yes and no. While I'm averse to audiophile buffoonery, and there's plenty of it, there are discernible advantages to this format.

The tl;dr is that higher bit depths contribute to an exponentially wider amplitude range, and the higher sampling frequency mitigates frequency response effects at the top of the range of human hearing. The former probably provides, to the average ear, the most readily (glaringly, not wishy-washy) discernible difference... but they go a bit hand in hand.

The longer explanation:

First let's get one thing out of the way. I absolutely and completely stand by most scientific findings (Audio Engineering Society etc.) showing that 256Kbps AAC is indiscernible from 16-bit Linear Stereo PCM (your CD audio format, aka CD-DA, CD Digital Audio, "Red Book"). This is simply not open to debate unless controlled, double-blind ABX tests (not conducted over internet message forums) show overwhelmingly otherwise... and they don't.

On Amplitude Dynamics:

Now, 16-bit Stereo LPCM has 2^16, or 65,536, possible amplitude values per quantization interval/sample. This translates to a dynamic range of ~96.7dB. dB is a logarithmic scale in which every 3dB represents a doubling of wave power. Dynamic range is the span of amplitude levels a system can reproduce, from softest to loudest. The absolute values, the floor and ceiling, can vary depending on the mix, but in principle this represents the distance from softest to loudest that CD audio, and likewise AAC perceptual coding, can reproduce.
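For anyone who wants to sanity-check those numbers, here's a quick Python sketch of the standard 20*log10(2^bits) rule of thumb (the commonly quoted figures vary by a fraction of a dB depending on how dither is accounted for):

```python
import math

# Theoretical dynamic range of linear PCM: 20*log10(2^bits), i.e. ~6.02 dB per bit.
for bits in (16, 24):
    print(f"{bits}-bit: {20 * math.log10(2 ** bits):.1f} dB")

# 16-bit: 96.3 dB
# 24-bit: 144.5 dB
```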

The drawback is that in the last 30 years, the A-weighted average loudness, Leq(A), of popular sound recordings (anything released in any genre to wide distribution, not limited to "pop music" per se), has increased substantially.

What this means for the listener is that the dynamic range of CD audio is mostly wasted. It also explains some of the obsession with vinyl...

It's not that vinyl is a superior medium or that analogue recordings are better. They aren't. At 80dB dynamic range, vinyl sucks. With its high noise floor and other artifacts, analogue sucks. However, the art of engineering and mastering vinyl properly led to some innovative techniques for "sweetening" a very dynamic mix while keeping the average loudness well within the limits of what vinyl could handle. If those master recordings were transferred directly to digital instead of being remastered to obnoxiously higher levels, the digital reproduction would sound flawless while the vinyl pressing would degrade.

An improperly mastered sound recording that peaks above 0dBFS (zero decibels full scale, the maximum threshold of a given sound reproduction medium before amplitude clipping occurs) will create distortion at any volume level, and it becomes profoundly worse to your ears as you increase your system's output volume... A properly mastered sound recording can sound fantastic on just about any stereo system, even the crappy one in my Honda.
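To make the clipping point concrete, here's a minimal Python sketch (assuming numpy and float samples normalized to ±1.0; the function name is mine, not from any real mastering tool):

```python
import numpy as np

def peak_dbfs(samples: np.ndarray) -> float:
    """Peak level in dBFS for float samples normalized to [-1.0, 1.0]."""
    peak = np.max(np.abs(samples))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

# A 1 kHz sine "mastered" 6 dB too hot, then hard-clipped at full scale.
t = np.linspace(0, 1, 44100, endpoint=False)
too_hot = 2.0 * np.sin(2 * np.pi * 1000 * t)   # peaks at +6 dBFS
clipped = np.clip(too_hot, -1.0, 1.0)          # what actually fits in the medium

print(peak_dbfs(clipped))                      # 0.0 dBFS: pegged at full scale
print(np.mean(np.abs(clipped) >= 1.0))         # ~0.67: two-thirds of the samples flattened
```

The flattened wave tops add harmonics that were never in the original, and turning the playback volume down afterwards doesn't remove them.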




But back to the 24-bit argument... 2^24 = 16.78 million possible amplitude values per quantization interval. This translates to a dynamic range of ~140dB. That's beyond the threshold of human hearing, and substantially greater than 96.7dB... What it sounds like to your ears is a clearer ability to distinguish even quieter sounds amongst even louder sounds, relative to a CD or vinyl pressing. Imagine true HDR photography for your ears... when you hear the full gamut for the first time, it's rather startling. But also certain sounds like cymbals, with their erratic/spastic amplitude changes, have much cleaner definition to them.

So here we're not talking about two audiophiles fighting it out over some perceived difference that almost nobody can hear and that is highly suspect of being placebo effect... If used to its full potential, rather than throwing on Metallica records, which have the worst mastering known in human history (I used "Death Magnetic" as an example of totally flat, totally distorted, absolutely terrible mastering in a video I did on the Loudness Wars), it produces a substantially different result from 16-bit.

Now, about frequency...

Sampling frequency, as you know, has to do with being able to reproduce sounds across the range of human hearing, which tops out roughly around 22kHz, though most people have a steep falloff in hearing perception around 17-18kHz. The Nyquist theorem, developed at Bell Labs in the 1920s, served as the basis for determining the minimum sampling frequency needed to reproduce every frequency within the range of human hearing. That is, if the desired range extends up to 22.05kHz, then the Nyquist limit, or minimum sampling frequency, has to be 44.1kHz: just enough to represent the peak and the trough of one cycle at that frequency.

When the Nyquist limit leaves little headroom, frequency response roll-off and frequency aliasing can occur. A 20kHz lowpass filter can act as an anti-alias filter. This is, however, not as optimal as simply raising the Nyquist limit so that all perceivable frequencies are WELL within the system's ability to reproduce with substantial definition, which eliminates the need to use lowpass filtering and dithering (low-level noise) as a substitute.
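As an illustration of the anti-alias filter idea, here's a sketch of downsampling 96kHz material to 44.1kHz (assuming scipy and numpy; real resamplers use polyphase filters rather than this naive interpolation):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def downsample_96k_to_44k1(x_96k: np.ndarray) -> np.ndarray:
    """Sketch: remove everything above ~20 kHz, then resample to 44.1 kHz."""
    # 8th-order Butterworth lowpass at 20 kHz, safely under 44100/2 = 22050 Hz,
    # so nothing above the new Nyquist limit survives to alias.
    sos = butter(8, 20000, btype="low", fs=96000, output="sos")
    filtered = sosfiltfilt(sos, x_96k)
    # Crude resample by linear interpolation onto the 44.1 kHz sample grid.
    n_out = int(len(filtered) * 44100 / 96000)
    positions = np.arange(n_out) * (96000 / 44100)
    return np.interp(positions, np.arange(len(filtered)), filtered)
```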

Again, the rule is still garbage in, garbage out... So if you start with a master recording that has an average loudness of, like, -9dBFS and peaks that peg 0dBFS, versus say an Ahmad Jamal recording from the 1970s like "Awakening" at -22dBFS with peaks below -9dBFS, then the added headroom and definition are totally wasted.

But given some of the standards Apple put in place for the "Mastered for iTunes" label, including recommending peaks no higher than -1dBFS to ensure that the loudest sounds do not clip/distort, there certainly could be a market for some very well-mastered sound recordings and a niche of fans who want to hear them.
 

LordVic

Cancelled
Sep 7, 2011
5,938
12,458

I want to thank you for this post. It was extremely informative. It helped clear up some things I was wrong about and reinforced others.

while it's probably going to get a TL;DR from most, I appreciate you posting it.
 

csbo

macrumors member
Apr 10, 2014
30
3
What this means for the listener is that the dynamic range of CD audio is mostly wasted.
this is the important point. Moving to 24 bits is effectively meaningless when we aren't even using 16.



What it sounds like to your ears is a clearer ability to distinguish even quieter sounds amongst even louder sounds, relative to a CD or vinyl pressing...But also certain sounds like cymbals, with their erratic/spastic amplitude changes, have much cleaner definition to them.
you might listen to very different music than I do, but in most of mine the cymbals are far above the noise floor and so won't really benefit like this. YMMV
When the Nyquist limit leaves little headroom, frequency response roll-off and frequency aliasing can occur. A 20kHz lowpass filter can act as an anti-alias filter. This is, however, not as optimal as simply raising the Nyquist limit so that all perceivable frequencies are WELL within the system's ability to reproduce with substantial definition, which eliminates the need to use dithering (low-level noise) as a substitute.
is there evidence that modern converters actually have this problem? I know it was a problem in the 80s, but I seem to recall that even low-end consumer-grade stuff made in the past decade avoids it
 

samcraig

macrumors P6
Jun 22, 2009
16,779
41,982
USA
All I know is that there will be many who would welcome the addition of HD audio. However, I also think it's safe to say that this "move" to HD isn't going to "save" declining music sales. Most people are fine with the current quality and wouldn't know the difference, especially given the equipment they listen on.

People are always arguing against Blu-ray because iTunes quality is "just the same" :rolleyes: This is no different.
 

Avatar74

macrumors 68000
Feb 5, 2007
1,608
402
is there evidence that modern converters actually have this problem? I know it was a problem in the 80s, but I seem to recall that even low-end consumer-grade stuff made in the past decade avoids it

Great question, and something I bring up in conversations about error.

I think you may be thinking of the so-called "jitter" and sampling error that audiophiles fret about and use to rationalize their spending habits... The answer is twofold:

Since at least the mid-1980s, most DACs have used larger sample-and-hold buffers to sufficiently mitigate jitter and sampling error. And they may have a built-in lowpass filter circuit, as described in Pohlmann's Principles of Digital Audio (the very best foundational read on the subject)... but I don't know if that's always the case. Granted, this is a different scenario, but the specs for mastering Dolby Digital soundtracks include a lowpass filter option for encoding the main channels.

But there are cases where the master recording introduces the problem, and it is not a sampling error thrown during reconstruction... CD-DA still relies on dithering because quantization stepping does not go away at that amplitude resolution.
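For the curious, the dither in question is just low-level noise added before the final word-length reduction so that the quantization error stops correlating with the signal. A minimal TPDF sketch (my own illustration, assuming numpy and float samples in ±1.0):

```python
import numpy as np

def dither_to_16bit(x: np.ndarray) -> np.ndarray:
    """Add +/-1 LSB triangular (TPDF) noise, then round onto the 16-bit grid."""
    lsb = 1.0 / 2 ** 15
    tpdf = (np.random.uniform(-0.5, 0.5, x.shape)
            + np.random.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.round((x + tpdf) * 2 ** 15) / 2 ** 15
```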
 

teknikal90

macrumors 68040
Jan 28, 2008
3,356
1,905
Vancouver, BC
thank you.

I'm not an audiophile. But I do have decent hearing. I can generally hear the difference in some music between 192kbps and 256kbps encoding, and can absolutely hear the difference in frequency between, say, 20Hz and 44Hz.

I don't tend to be super picky when I'm mobile. I tend to use really low-end earbuds because, well, when I'm at the gym or sitting at work, audio quality isn't really a concern.

But when I sit at home at my desktop and just want to listen to music, I have an OK set of Bose headphones and would like good, clear definition and separation of sounds. If a separate controller might improve that quality for me, it sounds like it could be a welcome addition.

While it probably won't help my iTunes-purchased music sound that much better, I ripped my own CD collection (about 300 CDs) in an uncompressed format.

Now to just convince the musicians I listen to to end this useless "loudness" war. I listen to metal mostly, and OMG, bands these days are just pumping the loudness up on the masters to the point where it sounds terrible no matter what... if anyone's listened to Metallica's Death Magnetic, you will hear just how bad it's gotten, with significant audio clipping of the dynamic range.

if I were you I'd buy better headphones before a better source.
Bose is decent, but they are by no means 'accurate' headphones; they are coloured headphones meant to 'enhance' as opposed to accurately reproduce.

although most headphones are coloured in a certain way until you reach a certain price point, a good bet would be the Shure series - the SRH840 would be a good starting point at a decent price. reasonably accurate and easy to drive from stock sound cards (even iPhones).

the little brother of this model, the SRH440, is used in a lot of recording studios as a monitor.
 

csbo

macrumors member
Apr 10, 2014
30
3
Great question, and something I bring up in conversations about error.

I think you may be thinking of the so-called "jitter" and sampling error that audiophiles fret about and use to rationalize their spending habits...


Actually, I think oversampling was what I was thinking of.

Also, isn't it more an argument for recording at higher rates, and not necessarily for the delivery format?
 

djphat2000

macrumors 65816
Jun 30, 2012
1,091
1,130
That's not correct, and actually it's exactly the opposite of what the article explains: if the audio you want to reproduce has ultrasonic frequencies, trying to reproduce them can disrupt the reproduction of the audible frequencies. Basically you don't hear the ultrasonic part, but you will hear artifacts in the audible part caused by it. If you "convert up" to 24/192 from a source which lacks these ultrasonic frequencies, I expect them not to exist in the "converted up" version either, which means you won't have any disruption in the audible frequencies.



You're confusing different issues:

One issue is lossy vs lossless codecs: lossy codecs can introduce audible artifacts, so if you compare FLAC (lossless) with e.g. MP3 (lossy) you can very well hear artifacts introduced by the MP3 lossy compression which have nothing to do with the sampling-rate or bits-per-sample choices.

The other issue is the impact of 16-bit vs 24-bit samples and 44kHz vs 192kHz sampling rates. The article explains that if you take uncompressed or losslessly compressed audio (so that compression artifacts play no role), one at 16/44 and the other at 24/192, the one at 24/192 doesn't offer any advantage in audio quality over the 16/44 one.


Well then that makes no sense to me. If you're saying (rather, the article is) that IF the source has ultrasonics in the recording (how did they get there to begin with?), it will (could) have distortion in the audible range when you try to reproduce it. You would have had to record it in 24/192 for them to be there in the first place, no (picked up by the recording or added to it by the recording equipment used)?

For it then to be reproduced (so you can hear the distortion) on an audio system that "can't" play it back correctly. I do NOT hear distortion on any of my 24/192 audio files or movies (DTS-HD MA or Dolby TrueHD). I DO hear a distinct difference in sound quality between MP3/AAC and higher bitrate/sample-rate audio of the same music recording (source vs MP3/AAC of the source).
Depending on the music in question, sometimes pretty alarmingly different.

The last part of what you said (about the article) also makes no sense. They say there is no advantage to 24/192 over 16/44.1. I would say a lot of that has to do with how it was mixed in the first place. What exactly are we listening to (and more importantly, on what system are we listening)? I'll say that most people really can't tell the difference. Partly due to the prevalence of lossy audio in the world. Partly because most people simply don't care. And partly because most people don't listen to music worth much of a damn in the first place.

To give another analogy: if you remember Sony (or similarly built) Trinitron CRT screens - once you saw the lines (albeit really thin black lines that separated the 3 regions of the screen), you saw them forever.

I'm also not saying that 24/192 is for the masses. It's not convenient, and it's not what matters most to people who just want to listen to some music. But it is better for those who can hear the difference.
 

Avatar74

macrumors 68000
Feb 5, 2007
1,608
402
if I were you I'd buy better headphones before a better source.
Bose is decent, but they are by no means 'accurate' headphones; they are coloured headphones meant to 'enhance' as opposed to accurately reproduce.

although most headphones are coloured in a certain way until you reach a certain price point, a good bet would be the Shure series - the SRH840 would be a good starting point at a decent price. reasonably accurate and easy to drive from stock sound cards (even iPhones).

the little brother of this model, the SRH440, is used in a lot of recording studios as a monitor.

Generally, studio engineering headphones are cheaper and better than audiophile ones. Engineers can't be fooled. Case in point: Sennheiser's own audiophile headphones cost twice as much as their studio line, but the studio line has more accurate reproduction.

And that's a very fair statement about Bose. Amar G. Bose was a genius (and a fellow Indian) at getting big sound out of little enclosures, and he began at a time when sound reproduction was pretty mediocre. Bose's selling point is their efficiency, not their accuracy, and I think they're fairly straightforward about that, so I kind of dislike the half-informed audiophile yahoos who pooh-pooh them.
 

bsolar

macrumors 68000
Jun 20, 2011
1,535
1,751
You can't have ultrasonic distortion unless you converted the source material TO something else (in this case 24/192). Generally higher than the source was to begin with, OR converted from analog.

The distortion is not in the audio data you try to reproduce, it's in the resulting sound. The reason is that sound systems usually are not designed to reproduce ultrasonic audio, so these frequencies are effectively outside their range of optimal operation. The article even provides sample audio files with perfect audio data of ultrasonic tones which, when reproduced, should be completely silent (ultrasonic), but during reproduction on some systems cause audible artifacts.
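If you want to try the article's experiment yourself, a sketch like this (Python standard library plus numpy; the filename is made up) writes a purely ultrasonic test file. On a well-behaved system it plays back as silence; anything you do hear was added by your playback chain:

```python
import wave
import numpy as np

# Two ultrasonic tones, 30 kHz and 33 kHz, in a 96 kHz file. An ideal system
# reproduces this as silence; a nonlinear one can produce an audible
# intermodulation product at the 3 kHz difference frequency.
fs = 96000
t = np.arange(10 * fs) / fs
tone = 0.25 * (np.sin(2 * np.pi * 30000 * t) + np.sin(2 * np.pi * 33000 * t))
pcm = (tone * 32767).astype(np.int16)

with wave.open("ultrasonic_test.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)   # 16-bit samples
    w.setframerate(fs)
    w.writeframes(pcm.tobytes())
```

(Keep the volume modest if you try this; tweeters don't enjoy sustained ultrasonic content.)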

And think for a second. IF 24/192 introduced distortion, that would mean practically every movie (DTS HD-MA or TrueHD) would sound bad. And you would certainly hear it in the movie theater. IMAX anyone? With its 12,000 WATTS!!! WhohooOooohho.

The article doesn't claim that the distortion is of such magnitude as to ruin the audible part completely, or even that the distortion always happens:

In summary, it's not certain that intermodulation from ultrasonics will be audible on a given system. The added distortion could be insignificant or it could be noticeable. Either way, ultrasonic content is never a benefit, and on plenty of systems it will audibly hurt fidelity. On the systems it doesn't hurt, the cost and complexity of handling ultrasonics could have been saved, or spent on improved audible range performance instead.

I suggest you actually read it.
 

Avatar74

macrumors 68000
Feb 5, 2007
1,608
402
Actually, I think oversampling was what I was thinking of.

Also, isn't it more an argument for recording at higher rates, and not necessarily for the delivery format?

Both. Downsampling is always preferable to upsampling, and higher audio resolution is critical when doing multichannel recording, bouncing channels, running fx, etc., that will ultimately be downmixed into two channels... a similar concept to back in the day when ILM used a 70mm VistaVision camera to shoot effects plates that would later be optically composited to 35mm.

There's far less generation "noise" with digital but it's not completely gone.

But when the source is 24/192 and the output medium is 16/44 you still lose some definition... so my BEST advice is to record and mix at the rates you plan to master to, because that will help you test the boundaries you need to stay within for the given medium. i.e. don't do things in 24-bit that will screw up the result in 16-bit... ideally, stick to 24 all the way through the process, or stick to 16 all the way through.
 

csbo

macrumors member
Apr 10, 2014
30
3
I do NOT hear distortion on any of my 24/192 audio files or movies (DTS-HD MA or Dolby TrueHD).
how do you know that? Distortion doesn't necessarily sound bad, so the absence of offensive sounds doesn't mean no distortion. There's also a chance the designer intentionally put in filters to prevent the distortion.

I would say, a lot of that has to do with how it was mixed in the first place.
sure, but then it isn't an issue of the sampling frequency or bit depth
 

blackcatdigi

macrumors newbie
Apr 11, 2014
1
0
Yep. 16 bits is more than enough headroom to capture dynamic range. Let's not forget the music is sampled 44,100 times PER SECOND. That's as close to the original sound wave as you need. Everything else is placebo and diminishing returns. You can't hear anything beyond 22k anyway (I'm probably down to 18k at my age), so 192k is just overkill.

As a music industry professional of over 30 years, I strongly disagree with your statement. I have been involved in the development of digital recording technology, as well as engineering/recording thousands of projects, and I can assure you with absolute certainty that you are incorrect. There are plainly audible differences even within our limited range of hearing.

This is what the 'golden-ear' folks who design and use the equipment have ALL agreed on:

16 bit is not enough. 24 bit is sufficient.
44.1k is not enough. 48k is better.
The actual sweet spot between the benefits of higher sample rates and the inherent flaws of existing converter technology is widely regarded to be around 65k, but of course no such device exists. Therefore we choose between the 44.1/48/88.2/96/176.4/192k sample rates before pushing the record button.

I have participated in countless double-blind A/B/X tests over the years and I can still accurately determine which formats are which.

So, with all due respect, this is not a placebo effect.

Also, just FYI: unless you are under the age of 20, your hearing likely extends no higher than 12k or so. If you can hear above 15k past the age of 40, you have led a very, very quiet and isolated existence!

Cheers!

----------

My Goodness!

Thank you, Avatar74, for your very in-depth explanation for the fine folks on this forum! I should have read all of the replies before posting my over-simplified response.
 

bsolar

macrumors 68000
Jun 20, 2011
1,535
1,751
Well then that makes no sense to me. If you're saying (rather, the article is) that IF the source has ultrasonics in the recording (how did they get there to begin with?), it will (could) have distortion in the audible range when you try to reproduce it. You would have had to record it in 24/192 for them to be there in the first place, no (picked up by the recording or added to it by the recording equipment used)?

Yes? I mean, what's the point of getting a 192kHz audio file if there are no frequencies in it requiring more than a 44kHz sampling rate (talking about final reproduction)?

For it then to be reproduced (so you can hear the distortion) on an audio system that "can't" play it back correctly. I do NOT hear distortion on any of my 24/192 audio files or movies (DTS-HD MA or Dolby TrueHD). I DO hear a distinct difference in sound quality between MP3/AAC and higher bitrate/sample-rate audio of the same music recording (source vs MP3/AAC of the source).
Depending on the music in question, sometimes pretty alarmingly different.

And that's fine, but it has nothing to do with the bits-per-sample and sampling frequency discussion. It's like saying that you cannot see any artifacts in a 32-bit color PNG image but you can see them in a 32-bit color JPEG. The situation we are discussing is more like: are you able to see the difference between a 32-bit PNG image and a theoretical 64-bit PNG image? You would need to find a display actually able to show that many colors accurately, and even then you might not be able to see a difference anyway.

The last part of what you said (about the article) also makes no sense. They say there is no advantage to 24/192 over 16/44.1. I would say a lot of that has to do with how it was mixed in the first place. What exactly are we listening to (and more importantly, on what system are we listening)? I'll say that most people really can't tell the difference. Partly due to the prevalence of lossy audio in the world. Partly because most people simply don't care. And partly because most people don't listen to music worth much of a damn in the first place.

To give another analogy: if you remember Sony (or similarly built) Trinitron CRT screens - once you saw the lines (albeit really thin black lines that separated the 3 regions of the screen), you saw them forever.

I'm also not saying that 24/192 is for the masses. It's not convenient, and it's not what matters most to people who just want to listen to some music. But it is better for those who can hear the difference.

The article states that there is no advantage because humans are not able to perceive the increased dynamic range and higher frequencies. The article even provides its own video analogy:

In our hypothetical Wide Spectrum Video craze, consider a fervent group of Spectrophiles who believe these limits aren't generous enough. They propose that video represent not only the visible spectrum, but also infrared and ultraviolet. Continuing the comparison, there's an even more hardcore [and proud of it!] faction that insists this expanded range is yet insufficient, and that video feels so much more natural when it also includes microwaves and some of the X-ray spectrum. To a Golden Eye, they insist, the difference is night and day!

Of course this is ludicrous.

No one can see X-rays (or infrared, or ultraviolet, or microwaves). It doesn't matter how much a person believes he can. Retinas simply don't have the sensory hardware.

Basically, if you can hear sounds requiring more than a 44kHz sampling rate, your hearing surpasses human standards. I'm sure a lot of scientists would like to meet you and test you. For science!
 

lars666

macrumors 65816
Jul 13, 2008
1,202
1,325
Well, it seems like people (not me, at this point in time) tend to stream their music, so actually storing music is a declining habit.

So if you really want to own it physically, you might be willing to spend additional storage on that while also getting better quality delivered (if you're keen on that).

Again, a 192k sampling rate doesn't make anything better for human ears, even with a one-million-dollar stereo system. We can argue about 256/320 MP3s and AACs vs. lossless (although I personally don't hear the difference through my very good DAC/speakers either) and about 16-bit vs. 24-bit depth, but 192k is a waste in ANY case, streaming or local storage. The reference article on this topic is indeed the one already posted here by somebody else.

I would be VERY surprised if Apple ever offers 192k files. Lossless 24/96? Definitely possible. Paying more for those would only make sense to a small percentage of users though, definitely NOT for people listening to music on their computer speakers or even (I shudder when I only think about it) the stock iPhone earbuds.
 

subjonas

macrumors 603
Feb 10, 2014
5,639
5,987
finally, but now for step two

I've been waiting for lossless iTunes music for a long, long time (much more so than lossless movies), so if this rumor is true, it's very blessed news (albeit the quality is even better than I was hoping for, and probably better than I need). But lossless downloads are only step one of two. There is still the bigger and more fundamental issue of licensing (renting) versus owning. I know I speak for a lot of people when I say I want to buy music, not license it. This obviously goes for movies and ebooks too.

For the last 6-7 years I've pretty much been off the entertainment market, because physical media is a hassle, and downloads are license-only and (have been) second-rate in quality. If I have to choose between the two, I always go with physical media, but only very occasionally. But as soon as iTunes and the like stop licensing and start selling, I can assure you I'll be one download-trigger-happy mofo. I'm just waiting. In the meantime, vendors need to be honest and replace the 'buy' button with a 'license' button so they stop pulling a fast one on uninformed consumers.
 

bsolar

macrumors 68000
Jun 20, 2011
1,535
1,751
16 bit is not enough. 24 bit is sufficient.

Are you talking about distribution or production? From the article posted before:

Professionals use 24 bit samples in recording and production [14] for headroom, noise floor, and convenience reasons.

16 bits is enough to span the real hearing range with room to spare. It does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording-- risking clipping if you guess too high and adding noise if you guess too low-- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.

An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits.
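A rough way to see the noise-floor difference the article describes (a sketch assuming numpy; plain rounding, no dither):

```python
import numpy as np

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Round float samples in [-1, 1] onto a linear PCM grid of 2^bits steps."""
    q = 2 ** (bits - 1)
    return np.round(x * q) / q

t = np.arange(44100) / 44100
x = 0.5 * np.sin(2 * np.pi * 440 * t)          # a plain 440 Hz test tone
for bits in (16, 24):
    noise = quantize(x, bits) - x
    rms_db = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
    print(bits, round(rms_db, 1))              # roughly -101 dBFS vs -149 dBFS
```

Inaudible either way on playback, but multiply that error across thousands of mixing operations and the 16-bit floor starts to matter, which is the article's point.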
 

Avatar74

macrumors 68000
Feb 5, 2007
1,608
402
Basically, if you can hear sounds requiring more than a 44kHz sampling rate, your hearing surpasses human standards. I'm sure a lot of scientists would like to meet you and test you. For science!

That's not the issue.

The issue is that failing to use a lowpass filter to completely eliminate source frequencies above the Nyquist limit going IN to the A/D converter during recording will result in aliases that are within the audible range. The problem is compounded not just by base frequencies, but also by nth-order harmonics of audible base frequencies that lie between the Nyquist limit and the sampling frequency.

Here's a good visualization of the lower, completely incorrect frequency produced when something exceeding the Nyquist frequency is captured.

NOTE: frequencies above the Nyquist frequency are not automatically thrown out... the Nyquist frequency is half the sampling frequency. The sampling frequency determines the limit of all frequencies picked up; the Nyquist frequency is the maximum frequency that can be reproduced accurately. So if a 33kHz tone is sampled at 44.1kHz, it will produce an alias below the Nyquist limit that you can hear (see the YouTube video linked above).
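That 33kHz example is easy to reproduce numerically (a sketch assuming numpy): sample the tone with no anti-alias filter and the spectrum shows a single peak at 44100 - 33000 = 11100 Hz, squarely in the audible band.

```python
import numpy as np

fs = 44100
f_in = 33000                        # well above the 22050 Hz Nyquist limit
t = np.arange(fs) / fs              # one second of samples
x = np.sin(2 * np.pi * f_in * t)    # sampled with no anti-alias filter

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(fs, 1 / fs)
print(freqs[np.argmax(spectrum)])   # 11100.0 Hz: the audible alias
```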
 

kaldezar

macrumors regular
May 28, 2008
120
6
London, England
just out of curiosity (this question comes from my own ignorance):

What would a high-quality USB digital audio converter do for listening? For those who already listen to their music with fairly good headphones, why would it improve their experience?

Do sound cards in computers not provide the ability to output the audio as close as possible to the recording? If you're already listening to FLAC ripped directly from the audio CD, are modern-day sound cards really that bad at audio playback? any technical resources explaining this would be awesome. thanks

Yup, sound card DACs are relatively poor quality, as is the amplification built into them. I have been using a Cambridge Soundmagic DAC (not terribly expensive) to listen to FLAC and Apple Lossless files, which is an improvement.

However, in the last few days I have been using a replacement for my trusty iPod classic which plays FLAC and Apple Lossless at up to 24/192. The replacement is the FiiO X5, which from a usability viewpoint doesn't even come close to an iPod but sonically blows it away, basically by using better, more expensive components. I purposely loaded it up with music I know really well, including albums which have always sounded crap even though artistically they were superb... Layla, for example, doesn't sound great on the FiiO (the original recording just isn't that good), but it does sound better than I have ever heard it before. Fleetwood Mac's Rumours DVD-A, which I have ripped as 24/96 FLAC, again sounds far superior to the FLAC rip of the CD: deeper, more defined bass, and far more detail in vocals, especially harmony vocals.

My headphones are Shure SE535s with custom moulds from ACS, which really make the most of the higher quality. I think iTunes HD tracks will be a great idea, especially if they work out a lot cheaper than HDtracks... $24 an album is a bit pricey, to say the least!
 

bsolar

macrumors 68000
Jun 20, 2011
1,535
1,751
That's not the issue.

The issue is that failing to use a lowpass filter to completely eliminate source frequencies above the Nyquist limit going IN to the A/D converter during recording will result in aliases that are within the audible range. The problem is compounded not just by base frequencies, but also by nth-order harmonics of audible base frequencies that lie between the Nyquist limit and the sampling frequency.

Here's a good visualization of the lower, completely incorrect frequency produced when something exceeding the Nyquist frequency is captured.

NOTE: frequencies above the Nyquist frequency are not automatically thrown out... the Nyquist frequency is half the sampling frequency. The sampling frequency determines the limit of all frequencies picked up; the Nyquist frequency is the maximum frequency that can be reproduced accurately. So if a 33kHz tone is sampled at 44.1kHz, it will produce an alias below the Nyquist limit that you can hear (see the YouTube video linked above).

I understand the technical issues you describe but what I meant comes a step before: you have to decide which Nyquist frequency you want to sample in the first place.

In the visible-spectrum analogy provided, you have to decide whether you want to reproduce ultraviolet light or radio frequencies in your video: if you do, you need a significantly higher sampling frequency, because the Nyquist frequency you want is higher too. Since these frequencies are undetectable by the human eye, you most likely don't want to.

Back to audio: since humans can hear only up to 20kHz at best, there is no reason to go above that as the Nyquist frequency.
 