
Theoretically, what's better: "Mastered for iTunes" files bought on iTunes, if they turn out to be 24-bit/96kHz but lossy, versus Apple Lossless (from a CD), which is 16-bit/44.1kHz and lossless? Apple Lossless (ALAC) would sound like the CD. "Mastered for iTunes" would have a better bit depth (24-bit) and a better sample rate (96kHz) than a CD, but it's lossy.

Ideally, the best would be 24/96 and lossless, which I believe is the same as the studio digital master, correct?
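To put rough numbers on that comparison: the bit rate of raw PCM is just sample rate x bit depth x channels. A back-of-the-envelope sketch in Python (the 256 kbps figure is the nominal iTunes Plus rate; a real ALAC file would compress the CD figure roughly in half, depending on the material):

```python
# Raw (uncompressed) PCM bit rate = sample_rate * bit_depth * channels.
def pcm_kbps(sample_rate_hz: int, bit_depth: int, channels: int = 2) -> float:
    return sample_rate_hz * bit_depth * channels / 1000

cd_pcm = pcm_kbps(44_100, 16)     # 1411.2 kbps -- what a CD rip starts from
hires_pcm = pcm_kbps(96_000, 24)  # 4608.0 kbps -- a 24/96 studio master
itunes_plus = 256                 # kbps, nominal lossy AAC from iTunes

print(f"CD PCM:       {cd_pcm:7.1f} kbps")
print(f"24/96 PCM:    {hires_pcm:7.1f} kbps")
print(f"iTunes Plus:  {itunes_plus:7.1f} kbps "
      f"(~{cd_pcm / itunes_plus:.0f}x smaller than CD PCM)")
```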

BTW, if Apple were to offer 24/96 lossless in the future, could that also be called Apple Lossless (ALAC), or would the format need another name? And can I currently rip an SACD that's 24-bit/96kHz into ALAC in iTunes?

Thanks!
 
You're talking about two entirely different things. The mastering of a song and the file's bitrate are two very different subjects.
 
If I understand this correctly, "Mastered for iTunes" is just the name of a new music file that will be offered by Apple. So in the near future, I can buy "Mastered for iTunes" files from Apple at 24-bit/96kHz, but lossy, perhaps at 256 kbps. Or I can continue to buy CDs and then rip them into iTunes using Apple Lossless. Theoretically speaking, which would be better audio-wise?

You're talking about two entirely different things. The mastering of a song and the file's bitrate are two very different subjects.
 
Can I get a link on it? I wanna read the exact words of the original source to avoid confusion.
 
Ok, thanks! I didn't really hear anything about bitrate vs bit depth and sampling rate, nor any new mastering methods -- only the possibility of a new compression type, perhaps. It's an unclear article, so hopefully things will be cleared up come time for the media event.

As an answer to your question, you will most likely get better results out of a higher bitrate than out of a higher bit depth or sampling rate, although those things usually go hand in hand. For uncompressed audio, bitrate follows directly from bit depth and sample rate; for lossy formats it depends on the compression and file type, so it's complex.

The mastering of the song is the single biggest difference you'd be able to hear.

Edit: Just saw the video as well. The guy's strictly talking about mastering differences, so there's no discussion of actual resolution and quality. He also touches on how vinyl can seem to sound better because it's usually mastered with higher dynamic range in mind, which is true.
 
It's worth reading Apple's own documentation on the "Mastered for iTunes" thing.

http://www.apple.com/itunes/mastered-for-itunes/
http://images.apple.com/itunes/mastered-for-itunes/docs/mastered_for_itunes.pdf

It's not a new format. We'll continue getting the same iTunes Plus 256 kbps variable-bit-rate AAC files we've been getting. "Mastered for iTunes" is simply a set of guidelines from Apple to sound engineers about how to optimise their master file for iTunes. Apple then takes this master file and converts it to the familiar iTunes Plus format.

Will this result in better sounding music? That depends on the sound engineer. There's nothing stopping them from taking a 44/16 CD track and upsampling it to 96/24, which Apple will then convert to iTunes Plus, producing something no better than a ripped CD track. Or the sound engineer could genuinely make the best possible sounding master file, rather than subjecting it to the loudness wars, targeted specifically at the expected use of the track (earbuds? home hi-fi?). In that case, yeah, it would probably sound better than a poorly mastered, loudness-war-victimized, ripped CD track.
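One way to sanity-check the upsampling point yourself: resample a 44.1 kHz signal to 96 kHz and look for energy above the original 22.05 kHz Nyquist limit. A minimal sketch, assuming NumPy/SciPy and using arbitrary test tones:

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44_100, 96_000
t = np.arange(fs_in) / fs_in                                # 1 second at 44.1 kHz
x = np.sin(2*np.pi*1_000*t) + 0.5*np.sin(2*np.pi*15_000*t)  # nothing above 15 kHz

y = resample_poly(x, fs_out, fs_in)                         # "upsample" to 96 kHz

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1/fs_out)
print(f"peak above 22.05 kHz: {spectrum[freqs > 22_050].max():.2e}")
print(f"peak below 22.05 kHz: {spectrum[freqs <= 22_050].max():.2e}")
# The band above the original Nyquist limit stays essentially empty:
# upsampling makes a bigger file, not new musical information.
```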

Unfortunately, there already appear to be instances of "Mastered for iTunes" tracks simply being the same old tracks with a shiny new label. Time will tell whether Apple and the content owners will take this seriously, or simply use it as marketing hype.
 
One can only wish they would take the initiative in doing so. All my music comes from CDs in either ALAC or AIFF format, so I don't order from iTunes, since they only sell music in AAC format.
 
The current offerings from iTunes are still really good, but you just want to make sure you get the album with the best mastering, since iTunes offers a lot of different variations of the same songs. I'd be hard-pressed to tell the difference between them and lossless CD rips, and I have decent equipment.
 
Some things need to be clarified here:

1.) "Mastered for iTunes" is simply a specific process encouraged by Apple for mastering engineers to create a version of their masters that are "optimized" for this 16-bit/44.1k AAC format, similar to how they create specific masters for vinyl and CD.

2.) None of these new files will have higher sample rates or bit depths than what is currently present in the iTunes Music Store. What you will be downloading will be 16-bit/44.1k/256kbps AAC files. The difference is the source material that mastering engineers are feeding Apple's new encoders.

These source files can range from 16 to 24 bits, with sample rates of 44.1k, 48k, 88.2k, or 96k.

3.) These new files will not always sound closer to the CD source. Mastering engineers may decide to slightly alter their signal processing chain to allow the music to be represented more accurately via the AAC format; therefore, when you do an A/B comparison between the CD master and an AAC master, you'll notice subtle differences.

This clip pretty much sums up the entire thing: http://productionadvice.co.uk/mastered-for-itunes-cd-comparison/

Please note that the engineer in the video isn't saying that "Mastered for iTunes" is a crappy format, but that the assertion that it sounds closer to CD quality audio is complete BS. Yes, sometimes it'll sound close, other times it'll be drastically different (from a mastering perspective, at least).

Hope this clears up some of the confusion!
 
Why can't we move forward to better formats rather than backwards and sideways? Maybe one day a 24-bit 96k (or better) PCM format will finally take over. Until then, I'll get my mileage out of my SACDs :p
 
Mastered for iTunes

So true. Apple does so many things fantastically well and is constantly innovating and pushing the envelope in most areas, but in this, and in the Apple TV's 720p video limitation (hopefully to change soon), they are really being stupid in my opinion.

God knows there are already more than enough formats, with no need for the added complication of "bigger numbers but possibly less quality than you may think". They are in danger of becoming the 21st-century equivalent of the greedy Sony Corporation, constantly in search of yet another (easy) revenue stream.

Just use FLAC (or even MLP if possible?) and give us what the creator and artist REALLY hear in the studio

~M~
 
Apple needs to step up their audio. None of this 128kbps AAC crap. They are widely considered the capital of digital audio distribution. Offer levels of quality:

-AAC 192kbps, 44.1kHz/48kHz sample rate, standard 16-bit depth. If it's a video, make it Dolby Pro Logic II/IIx (depending on the movie)

-Dolby Digital 5.1 EX for videos that have 6.1/7.1 sound. Dolby Digital can be compressed down to some insanely low bitrates and still sound amazing. I'm telling you, I've listened to a 192kbps 5.1 EX file and it sounded pretty damn good.

-Support for DTS in the MP4 container. Not saying they need to start pushing DTS out, as AC3 is much more compressed and easier to handle. Also ensure DTS-ES Discrete 6.1 is supported. I'm tired of having to get a movie on Blu-ray for 7.1 when we have existing technology like DTS-ES and Dolby Digital EX that has been sitting around for ages but is never used. It bothers me seeing a movie like Transformers 3 with TrueHD 7.1 and a standard 5.1 DD as the fallback... might as well make it 5.1 EX or DTS-ES...

-Some type of FLAC-like audio format that can be considered Master Audio for music listening. Lossless 24-bit with whatever sample rate the studio wants, as long as it's 96kHz or under.
 
I don't think you understand the audio quality that some people are looking for. I will settle for nothing less than 24-bit 96k for a new format, because that's where things actually start to sound really good. AAC 192kbps is WORSE than iTunes Plus. I'm a fan of uncompressed, but lossless is lossless, so ALAC would be a great step forward with 24/96.

But personally I'd like to see some 192kHz music, and maybe even an expansion to 32-bit depth. The holy grail, though, would be 32-bit DXD (aka 352.8kHz).
 
Apple needs to step up their audio. None of this 128kbps AAC crap. They are widely considered the capital of digital audio distribution. Offer levels of quality:

-AAC 192kbps, 44.1kHz/48kHz sample rate, standard 16-bit depth.

Any music you buy from iTunes these days will come in iTunes Plus format, which is 256 kbps variable bitrate, at 44.1/16.
 
I don't think you understand the audio quality that some people are looking for. I will settle for nothing less than 24-bit 96k for a new format, because that's where things actually start to sound really good. AAC 192kbps is WORSE than iTunes Plus. I'm a fan of uncompressed, but lossless is lossless, so ALAC would be a great step forward with 24/96.

But personally I'd like to see some 192kHz music, and maybe even an expansion to 32-bit depth. The holy grail, though, would be 32-bit DXD (aka 352.8kHz).

192kHz is considered by a lot of people to be complete overkill. I can't really take a stance on 192kHz content, since the majority of the stuff I listen to is 96kHz or below. 32-bit (not 32-bit float) would be an insane jump... I mean, I'm not sure what type of hardware could really utilize that... though most modern receivers support 192kHz @ 24-bit in L-PCM, Master Audio, or TrueHD format.
 
I know 192kHz is A LOT, but he said 192kbps. Maybe it was a typo, but how am I supposed to know that when both are relatively valid?

I have to admit, though, there's not a big (noticeable) difference from 96k to 192k. But while my gear is good, there is much better out there, so maybe I'm missing out a little. And while I know everything is 24-bit or less, I just want 32-bit DXD so we never need to have another format and can be done with it, no matter what changes in the future.
 
I don't think you understand the audio quality that some people are looking for. I will settle for nothing less than 24-bit 96k for a new format, because that's where things actually start to sound really good. AAC 192kbps is WORSE than iTunes Plus. I'm a fan of uncompressed, but lossless is lossless, so ALAC would be a great step forward with 24/96.

But personally I'd like to see some 192kHz music, and maybe even an expansion to 32-bit depth. The holy grail, though, would be 32-bit DXD (aka 352.8kHz).

How about we start by just hoping for lossless 44.1/16?

Contrary to common audiophile belief, that standard wasn't happened upon by accident. 16 bits provide 96dB of dynamic range (and up to 120 with dithering). Considering that most music has less than 12dB of dynamic range, and even quality orchestral recordings rarely exceed 60dB, 16 bits are more than adequate. What would 24 bits gain us? The ability to encode a wider dynamic range, and that's it. Yet 16 bits already provide enough range to cause immediate hearing damage if it were fully utilized (96dB of range on top of the 30dB noise floor of a quiet listening room = 126dB peaks from 16 bits).
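Those dynamic-range figures fall straight out of the standard quantization formula, roughly 6.02dB per bit. A quick sketch of the arithmetic:

```python
import math

def dynamic_range_db(bits: int) -> float:
    # Each extra bit doubles the number of quantization levels,
    # adding 20*log10(2) ~= 6.02 dB of dynamic range.
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    print(f"{bits}-bit: {dynamic_range_db(bits):5.1f} dB")
# 16-bit: 96.3 dB, 24-bit: 144.5 dB
print(f"16-bit peaks over a 30 dB room: {dynamic_range_db(16) + 30:.0f} dB SPL")
```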

As for the 44.1kHz sampling rate? That was chosen because 44.1kHz is enough to fully reconstruct a 20kHz signal perfectly. (Not almost, good enough or whatever else you think, but perfectly.) Most listeners above 20 years old can't hear much above 16 or 18kHz anyhow, so that 44.1kHz sample rate provides significant headroom as well.
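That reconstruction claim is easy to test numerically: sample a tone below 22.05kHz at 44.1kHz, rebuild it at a much higher rate with sinc (FFT-based) interpolation, and compare against the exact signal. A sketch assuming NumPy/SciPy; the 18kHz tone is an arbitrary choice:

```python
import numpy as np
from scipy.signal import resample

fs, f_tone, n = 44_100, 18_000, 44_100        # 1 second of an 18 kHz tone
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_tone * t)            # bandlimited: 18 kHz < 22.05 kHz

up = 4                                        # reconstruct at 176.4 kHz
y = resample(x, n * up)                       # FFT-based (sinc) reconstruction
t_hi = np.arange(n * up) / (fs * up)
exact = np.sin(2 * np.pi * f_tone * t_hi)     # the true signal, finely sampled

print(f"max reconstruction error: {np.max(np.abs(y - exact)):.2e}")
# Comes out at numerical-precision level: the 44.1 kHz samples already
# contain everything needed to rebuild the 18 kHz waveform.
```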

So what do we get from high resolution music? Larger file sizes with no audible differences. No thanks. I'll take regular lossless please with a plea to mastering engineers to quit the loudness war.
 
If you got yourself some decent equipment, you'd know that 96k sounds noticeably better than 44.1. 44.1 was actually chosen because it was good enough: transients and high frequencies are not reproduced as well at 44.1 as they are at higher sample rates, and harmonics above the average hearing range are also thought to affect the sound (although this is often disputed). The fact is, you get fewer samples per wavelength at higher frequencies, so higher sample rates are necessary.

And sure, maybe 24-bit is overkill, but when using 24-bit there is less need for dither, although I will admit the bigger advantages are during recording.
 
I have more than 'decent' equipment. I could buy a nice car with the headphone arrangement I have.

Of course you get fewer samples per wavelength at higher frequencies, but 44.1 still provides enough samples for perfect reproduction to above 20kHz. More samples won't make it more perfect.

The differences you hear between 96k and 44.1k material are down to differences in the mastering.

So, HOW are higher frequencies and transients reproduced better than the already perfect reproduction that 44.1 gives us? Be specific please.
 
Of course you get fewer samples per wavelength at higher frequencies, but 44.1 still provides enough samples for perfect reproduction to above 20kHz. More samples won't make it more perfect.

That is only true if you have the theoretically perfect "brick wall" low-pass filter that cuts off everything above 22kHz. With a less than perfect filter, there can be considerable distortion in the upper frequencies.

That said, few people can hear this. Maybe dogs or children. But most of us don't hear anything above 15K or so. So the argument is moot.


But "perfect" is something we don't see much of in real like, only in the world of brick wall perfect filters
 
I have more than 'decent' equipment. I could buy a nice car with the headphone arrangement I have.

Of course you get fewer samples per wavelength at higher frequencies, but 44.1 still provides enough samples for perfect reproduction to above 20kHz. More samples won't make it more perfect.

The differences you hear between 96k and 44.1k material are down to differences in the mastering.

So, HOW are higher frequencies and transients reproduced better than the already perfect reproduction that 44.1 gives us? Be specific please.

I could tell you HOW specifically, but I can't be bothered looking through all my old notes and such from the course I did in sound engineering just to settle a petty argument on the internet. If you really can't hear the difference, then either your ears are failing, or my ears are better than I ever realised.
 
Nice, a personal attack when asked for a technical answer. I bow down to your golden ears. :D

If you'd like to find out how and why this stuff works for real, please come and join us in the Sound Science forum at Head-fi where there are plenty of opportunities to check out your biases with real tests.
 