
cyanescens

macrumors newbie
Original poster
Feb 5, 2014
I didn't create this thread to bash 256kbps AAC files. I have a lot of songs on iTunes already and I am generally happy with them, but sometimes the quality frustrates me. Sorry, this is my first post, but I'm not a troll.

I don't work in the music industry, nor do I play any instruments.

However, I can totally tell the difference between a CD track and a compressed digital file, regardless of whether it is 256kbps or 320kbps, AAC or MP3. A lot of the time you can feel the air pushed out by the subwoofer when you play tracks from a CD, and you don't get that from files. And even CDs can't seem to perfectly reproduce the crashes and rides of a drum kit the way they sound when you're standing next to it, even with high-end floor speakers, matching amplifiers and so on.

Now I understand what people mean when they talk about 'hearing a pin drop' at a concert or orchestra.

Am I crazy? Or is it true that people are just reluctant to admit that 512kbps is not the hard ceiling of sound quality? It's a bit like saying that after 16-million-color SVGA nobody could tell the difference (look how far we have progressed since then), or that 'the human eye cannot detect over 400ppi on a phone' (I can see the difference, especially when flicking through screens; it's just that, for the time being, the extra ppi isn't worth the price, power consumption, computing power and so on).

I don't think I have supernatural senses, nor do I think my senses are some sort of placebo effect. We are just being told 'be happy with the current retail standard, humans can't tell the difference beyond this', right? Otherwise the film and music industries wouldn't just be releasing in CD and Blu-ray formats, they would also be working internally in those formats, which I am almost certain is not the case.
 
I didn't create this to bash 256kbps AAC files. I have a lot of songs on iTunes already and I am generally happy with them, but sometimes the quality frustrates me.

1. Many people can't tell the difference.
2. Many people don't care about the difference.
3. I can't tell the difference in the car, or if music comes from the MacBook speakers. Probably couldn't tell if it comes from my TV's speakers.
4. With good headphones, I cannot tell _what_ the difference is. Even if I can tell that one version sounds more enjoyable than another, I couldn't describe _why_. Many people assume that if they can't describe the difference, they have to say they can't hear one.
5. At 256 kbit AAC, I think the quality of your equipment still matters more than the compression, unless you move to very expensive gear.
6. I own a lot of audiobooks, and that's an area where 80 kbit HE-AAC is enough.
 
When I am mixing in a production setting, say in Pro Tools, master files are always 24-bit/96kHz AIFF (also called a golden master). A CD can only do 16-bit.

Any Apple product can play AIFF, but you're looking at 30-100MB per song depending on length.
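For a rough sense of where those numbers come from: uncompressed PCM size is just sample rate × bit depth × channels × playing time. A quick back-of-the-envelope sketch (Python, purely illustrative):

```python
# Rough size of an uncompressed stereo PCM (AIFF/WAV) file,
# ignoring container overhead.
def pcm_megabytes(minutes, sample_rate, bit_depth, channels=2):
    bytes_per_second = sample_rate * (bit_depth // 8) * channels
    return minutes * 60 * bytes_per_second / 1_000_000

print(f"{pcm_megabytes(4, 44_100, 16):.0f} MB")  # ~42 MB: 4-minute track at CD quality
print(f"{pcm_megabytes(4, 96_000, 24):.0f} MB")  # ~138 MB: same track at 24-bit/96kHz
```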
 
Buy CDs and make perfect-copy files from those. Problem solved :B
 
256 VBR seems to be the ceiling for most people in reasonable circumstances.

I have a lot of songs on iTunes already and I am generally happy with them, but sometimes the quality frustrates me.

I can totally tell the difference between a CD track and a digital song, regardless whether it is 256kbps or 320kbps or AAC or MP3. I mean a lot of times air gushes out from sub woofers and you can feel it when you play tracks from CD, and you don't get that from files.

I don't think I have supernatural senses, nor do I think my senses are some sort of placebo effect.

This is a great question, with often convoluted answers. I've only found a handful of fairly scientific examinations of this issue (which I'll touch on in a second). I think sound quality often becomes a matter of diminishing returns above a certain point, but there are some primary variables here.

First of all, I want to mention that if you're comparing a literal CD to a compressed file (as opposed to a WAV or lossless rip), you don't have a level playing field, because your compressed file and your CD are probably not taking the same path to your speakers or headphones (e.g. different connections, DACs, amps or preamps, wiring, etc.). You would have to compare compressed files to uncompressed or lossless files.

The biggest variables in general though are the contexts for listening, which basically means A) the ears of the listener, and B) the actual environment used for listening (which includes the room acoustics, equipment (speakers, headphones, amps), and speaker / headphone placement).

So for most normal listening environments (e.g. people with normal ears listening to average or reasonably good --consumer-- speakers) I think it's virtually impossible to tell the difference between anything above 256, and all the objective tests I've seen support this. So in that sense, iTunes will work fine for a "normal" person.

However, if you have "good" speakers (dare I use the term "audiophile"), then you might start noticing differences, but even then they are usually rather subtle and rely on an optimal (or at least well calibrated) listening environment. For some background by the way, I do a lot of audio-related work, and I'm a moderate "audiophile" (I dislike that word, but basically for me it means that I really enjoy music and I have pretty high quality well-selected gear).

So here's the "scientific part". There have been very few reasonably scientific examinations of how well people can tell the difference between different compression qualities, and when I say "scientific", I mean objective. If you are reading about any kind of subjective listening test, it is simply NOT a valid test. The test HAS to be an objective and BLINDED test, otherwise subjective and psychological effects (including placebo) will destroy the results of the test! It completely defeats the purpose...

So, out of all the objective testing I've seen, it turns out that most people (emphasizing most) CANNOT tell the difference in anything above 256 VBR! The VBR (variable bit rate) is important though, because 256 at a constant bit rate IS more noticeable (but even then it still requires half decent gear).

Personally, I've tried some very simple blinded testing myself (there's a program you can get for this), and I couldn't reliably tell the difference between 320 cbr and lossless (but I only did this test quickly, with a sample size of 5 or 6... I need to replicate this more :) ).

Now, are there instances where I believe people can tell the difference between high bitrate compression (256 VBR... and maybe 320 at a constant bit rate) and uncompressed audio? Yes, but I think this will only be the case in an ideal (or very optimized) listening environment.

If anyone compared (blindly) a 320 cbr mp3 with a lossless file on my Sony boombox in my kitchen, I doubt anybody could tell the difference. If they did the same in my studio using good monitors and a DAC, etc, I imagine some people might be able to tell them apart, at least with the right kind of track maybe... but I still imagine it would sound the same to most people.

Now, if you compared those 2 files on a $50,000 stereo system, using carefully selected DACs, amps, speakers etc., using perfectly positioned speakers while sitting in the sweet spot, in an empty room that has been acoustically treated, I imagine the number of people who could tell them apart would increase... and even then I doubt that more than half the people could find the differences (again, in a blind test), although this number might go up if they were music enthusiasts, or musicians, or audio engineers, etc.

So, all of this to say that the evidence suggests that above 256 VBR (which does NOT include iTunes! that's CBR) people are hard pressed to notice any differences in day-to-day circumstances. Even with high-end gear, I think people fail to realize how many other variables affect listening, including speaker placement, even the amount of wax in your ears!

In your case, I don't know what kind of gear you're using, I imagine it must be half decent if you think you can hear these differences so easily, and perhaps, you might actually have better ears than you think! :)

That being said, I would suggest you try some blind experiments yourself. I found a great little app (I forget the name though) that lets you do blind A and B comparisons between two audio files. You pick 2 files with different compression, the program plays them back, but you have no way of knowing which is which, and then you try to guess. Repeat that a bunch of times and you'll have a better sense of what you can reliably detect! :) I've been wanting to replicate my experiment with more people to get more numbers, but I haven't got around to that yet... and my earlier reference to 256 VBR being the detectable ceiling for most people did not come from my own experiments but from someone else's; they did blinded tests with a pretty good sample size.
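If you want to try something like that yourself without hunting the app down, here is a very minimal sketch of the idea in Python. It is just an illustration, not the app I mentioned: the file names are placeholders, and it assumes you are on a Mac so it can use the built-in afplay command for playback.

```python
"""Very rough blind A/B trial: each round the two files are shuffled and you
guess which one was the lossless version. File names are placeholders."""
import random
import subprocess

FILE_LOSSLESS = "track_lossless.wav"   # e.g. a lossless rip of the CD track
FILE_LOSSY = "track_256vbr.m4a"        # e.g. the same track at 256 VBR
TRIALS = 10

correct = 0
for trial in range(1, TRIALS + 1):
    pair = [FILE_LOSSLESS, FILE_LOSSY]
    random.shuffle(pair)                      # hide which clip is which
    for label, path in zip("AB", pair):
        input(f"Trial {trial}: press Enter to play clip {label}...")
        subprocess.run(["afplay", path])      # macOS built-in audio player
    guess = ""
    while guess not in ("A", "B"):
        guess = input("Which clip was lossless, A or B? ").strip().upper()
    if pair["AB".index(guess)] == FILE_LOSSLESS:
        correct += 1

print(f"{correct}/{TRIALS} correct (about {TRIALS // 2} expected by pure guessing)")
```

Run enough trials and you get a rough idea of whether you're actually hearing a difference or just guessing.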

So... there's some food for thought! I'll try and find the name of that program too... :)
 
Don't know where I found it, but there was some interesting info about the "Mastered for iTunes" program.

1. Lots of music is recorded to go right up to the limits of the loudness range. If you compress and then decompress that music, you can exceed those limits and the quality gets awful, even with 256 kbit/sec AAC. The trick is to stay away from the limit.

2. Some of the music on iTunes is encoded from masters at 24-bit/192k samples per second. That skips the intermediate step of downconverting to a CD's 16-bit/44,100 samples first.
 
Thank you all for your insights!

Yes, I do have half-decent equipment: Sennheiser Momentum headphones (mainly for portability), a Klipsch 5.1 set on one of my desktops, and a Creative 2.1 on the other. I don't use any external DAC while walking around. No B&W floor speakers or anything like that, though. I also tend to focus more on the sounds coming from the drum kit and bass rather than the vocals and guitar, and that could be one of the reasons sound quality bothers me a bit.

I'm not too familiar with the 'Mastered for iTunes' program, but I can totally see how it could help, as I've had some CDs which, from memory, were recorded pretty well but sounded so-so in their iTunes versions.

In addition to the above, how the CD was originally recorded is probably quite important too; I find that a lot of 'remastered' versions of classic rock songs tend to sound a bit better than the originals. And stuff from EMI/Sony/Universal probably sounds significantly better than a lot of DIY punk/metal records.

No, I don't listen to anything through TV speakers or OEM laptop speakers; I wouldn't be surprised if you can't tell the difference between a 192kbps file and a 256kbps file on those. Although many people bash the freebie EarPods, I use them when I don't feel like carrying headphones around, and they have some nice sound for kick drums and floor toms, although anything higher up gets muddled.

I love iTunes downloads: you pay half the price of a CD, without the hassle of going to a shop, waiting for shipping, or waiting for stock to become available; all the songs download within seconds, with album art, and it was a boon when they upgraded the songs from 128kbps AAC to 256. But I think it is normal for people to always want more, right now, for no additional charge :) I'm sure it is only a matter of time before they upgrade the songs again to ALAC, as it would only take up a bit more than twice the space of 256kbps AAC.
 
A CD is digital, too. :)

As usual, it's more complicated. VBR doesn't necessarily sound better than CBR, and an MP3 with a bitrate of 320kbps doesn't have to sound better than one at 256kbps. It depends a lot on the codec (AAC, MP3, ...) and especially on the encoder and its settings.

Apple has a history of NOT using the best encoders (for both audio and video), probably because of licensing issues. But they're constantly getting better. Not sure if they will ever make the jump to ALAC in the Store, though.

Also, compressed does not mean lossy. FLAC and ALAC compress the files without losing information.
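A quick toy illustration of that point, using zlib on arbitrary bytes rather than a real audio codec, so take it purely as an analogy: the data gets smaller, and comes back bit-for-bit identical. FLAC and ALAC apply the same idea to PCM audio.

```python
"""Toy demo: "compressed" does not have to mean "lossy"."""
import zlib

original = bytes(range(256)) * 1000          # stand-in for raw PCM samples
packed = zlib.compress(original, level=9)

assert zlib.decompress(packed) == original   # decodes to exactly the same bytes
print(f"{len(original)} bytes -> {len(packed)} bytes, nothing lost")
```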

Some tracks are more complex and require a higher bitrate than others.

Mastering is another important topic. At some point there was JVC's XRCD mastering process, which, in my opinion, actually improved sound quality. "Mastered for iTunes" is a good thing, too.
Remasters: some of them are great. However, a lot are just compressed and made louder (thereby losing dynamic range).

The DAC is more important than many people realize. The Mac's analog output is usually not very good. Using the optical out and letting the receiver do the conversion can do wonders.

Also, make sure that you have your iTunes Equalizer and Sound Enhancer turned off. ;)

All things considered, this is simply not a yes/no question. Find what works for you.

For what it's worth, I'll continue buying lossless (CD or online) as long as I can get my hands on it, even if I rarely can tell the difference from a well-encoded 256kbps MP3.
 
When I am mixing in a production setting, say in Pro Tools, master files are always 24-bit/96kHz AIFF (also called a golden master). A CD can only do 16-bit.

And remember that attempts to sell 24-bit disc formats have generally failed to find much traction in the market. The marginal difference wasn't worth the extra cost for most people.
 
I can't tell the difference between V0 and FLAC even with electrostatic headphones and a high-end DAC, and I can easily tell the difference between a good DAC and a slightly lesser one. Below that it's obvious. Maybe some drum fills or hi-hats sound better in FLAC, but I might be kidding myself; probably am. I think anything above 44.1kHz is silly... only very young children can hear sounds higher than 20kHz. What matters (and it was a sound mixer at Fox who told me this) isn't the frequency but the accuracy of the sampling clock. I do find a lot of modern recordings overcompressed and horrible, but that has to do with the mastering, not the delivery method.

I also work as a colorist and score perfectly on color matching tests, and have 20/15 vision. I can't see more than 10 million colors or whatever the cutoff is. I can't differentiate between an 8-bit panel and a 10-bit panel (though I suppose a 10-bit panel is over a billion colors, so never mind).

If your ears and eyes really are this acutely trained (and they might be, I know people who have better vision than I do) you are in an extremely small minority, one that is market-irrelevant and probably constitutes 0.1% of people or less. Blessing and a curse, I'm sure.
 
There's more to all of this than just bit rates and speakers. I haven't tested on my own in a long time, but about a decade ago, I did an unscientific self-test on my own 700 watt 5.1 surround system. I picked a random song and compressed it to mp3. I couldn't tell the difference between the CD and anything at 160kbps or above, and that's mp3.

However, I also did a different test comparing a CD to a certain "special" format that was 384kbps. That lossy 384kbps sounded significantly better than the "lossless" CD. Why? Because it was Dolby Digital 5.1, at 24-bit, 96kHz. It was the DVD version of the exact same concert. In comparison, the stereo CD at 16-bit, 44.1kHz sounded like utter crap. The DVD had double the sampling rate, meaning high frequencies were significantly better represented. Therefore, even though the compression resulted in the loss of some data, it wasn't nearly as much as what is lost just by dropping the sample rate.

Moral of the story: don't just go looking for higher bit rates, since they just get you a more accurate representation of a crappy source. Look for higher sample rates.
 
Wow, that's a lot of educational and fascinating information guys. :) Thanks for sharing!


...But, um, I can't help but notice the distinct lack of any mention of an iPod touch in this thread. :eek:
 
Look for higher sample rates.

Actually, no. While high-quality recordings often have higher sampling rates, that doesn't in itself improve audio quality. Human hearing ends at about 20kHz (if you're young); higher frequencies can't usually be heard. According to the Nyquist–Shannon sampling theorem, a signal can be reconstructed perfectly as long as the sampling frequency is at least twice the highest frequency in the signal. That means that a standard audio CD at 44.1kHz can already contain frequencies you can't even hear.

However, a lot of DACs in computers don't even go that high, and very few headphones reach those frequencies at useful levels (even if they claim to).

Keeping all that in mind, it is completely possible that your DD5.1 track DID sound better, but most likely that had to do with mastering. Also, Dolby circuits tend to do some post-processing to the audio stream.

I can't help but notice the distinct lack of any mention of an iPod touch in this thread. :eek:

Portable devices usually don't have the greatest DAC, so you're less likely to be able to hear any difference between a well-encoded AAC file and a lossless one. Also, if you're on the road, there's usually other noise as well. Therefore it's "good enough" for most people (me included).
 
I find it boggling that I can't really notice any difference between my 128kbps AAC versions and a CD...

I suppose I'm blessed with bad ears.
 
The big issue is the "loudness war": most of the music industry is messing up the music we listen to. A good file will sound good as an MP3 and probably better as a larger (HD) file. Try downloading a file from Linn Records to find out what a good file should sound like; it's night and day to my ears.
There are a lot of articles about the loudness war on the web, but I have posted a link to a YouTube video that explains the problem quite clearly. There is also some software called Audacity that lets you look at the waveform of a file and see what condition it is in.
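If you'd rather get a number than eyeball the waveform, one very crude indicator is the crest factor (peak level versus RMS level); heavily limited "loudness war" masters tend to have a low one. A small sketch, assuming a 16-bit PCM WAV and a placeholder file name:

```python
"""Crude crest-factor check of a 16-bit PCM WAV (file name is a placeholder)."""
import wave
import numpy as np

with wave.open("track.wav", "rb") as w:
    raw = w.readframes(w.getnframes())
samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)  # assumes 16-bit PCM

rms = np.sqrt(np.mean(samples ** 2))
peak = np.max(np.abs(samples))
print(f"crest factor: {20 * np.log10(peak / rms):.1f} dB")  # lower = more squashed
```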

STOP the loudness war!

http://www.youtube.com/watch?v=3Gmex_4hreQ
 
Actually, no. While high-quality recordings often have higher sampling rates, that doesn't in itself improve audio quality. Human hearing ends at about 20kHz (if you're young); higher frequencies can't usually be heard. According to the Nyquist–Shannon sampling theorem, a signal can be reconstructed perfectly as long as the sampling frequency is at least twice the highest frequency in the signal. That means that a standard audio CD at 44.1kHz can already contain frequencies you can't even hear.

There is definitely a discernible difference between 44.1k and 192k. You are confusing audio frequency with sampling frequency. One has not much to do with the other. Sampling frequency is how many times an audio wave is sampled per second. Even a novice with healthy hearing can hear the difference between 44,000 and 192,000 samples per second. Much more clarity and accuracy. The 20k limit for human hearing refers to audio frequency, not sampling frequency, and has no bearing on this discussion.
 
Portable devices usually don't have the greatest DAC, so you're less likely to be able to hear any difference between a well-encoded AAC file and a lossless one. Also, if you're on the road, there's usually other noise as well. Therefore it's "good enough" for most people (me included).

...Right, but I'm just referring to the fact that this thread is in the iPod Touch forum.:p
 
There is definitely a discernible difference between 44.1k and 192k. You are confusing audio frequency with sampling frequency. One has not much to do with the other.

You are mistaken.
High-pitched tones have shorter soundwaves, thereby requiring a high sampling rate. This means that the two are directly linked.

There's a lot of good info on Wikipedia; look it up if you don't believe me.
Also, there's some excellent technical info on this page:
http://people.xiph.org/~xiphmont/demo/neil-young.html
 
So many ways sound can go wrong (or right).

Matching up the original media with the correct amplifier and then device for output (speakers, headphones etc.) can drive one to distraction.

On a mediocre system, it is hard for me to tell the difference between a "good" 256 AAC file and the CD it was taken from. On a better system, I absolutely can hear the difference.

For kicks, I took a CD and converted it to Apple Lossless; I also had an iTunes download of the track along with an HDTracks 96/24 version of the same song. To add to the mix I made a 320 MP3. My results:

On the iPhone with Apple's earphones - Apple Lossless and 256 AAC sounded similar other than volume. Lossless played back a bit quieter, and once levels were matched it was not easy to tell one from the other. MP3 was inferior. The 96/24 file won't play on the iPhone.

Using the Dirac app on iPhone - the difference became more noticeable - Lossless played best, 256 AAC came in second and MP3 was miserable.

Home system - middle-of-the-line AVR, GoldenEar Triton 7 speakers (small towers with no sub, ribbon tweeters akin to RAAL) - 96/24 was similar to CD and Apple Lossless, but there was a bit more "nuance" in some instruments and vocals, which is not easy to explain; then 256 AAC, and MP3 again came in last.

Last - AVR with Bowers & Wilkins P7 headphones - 96/24 had the fullest stage and way more nuance. CD and Apple Lossless sounded very good but not the same as the 96/24. The 256 AAC was very good but not great, and the MP3 sounded better on this headset than in the other setups above. I also tried this test with the on-ear Sennheiser Momentums, but due to poor ear fit I couldn't give each a fair listen, as I had to keep adjusting the earpieces.

In short - at least with this one song, the way it was recorded, the way it was transferred, the particular end product, the software I used for the conversions and the hardware I have all determine which version of the song I preferred and considered the more faithful rendition. I find that Apple Lossless is the best bet for my iPhone, while in most cases CD/Apple Lossless is good for home stereo playback via speakers, and 96/24 does sound better for most things (though that is also very dependent on the original 192/24 master it came from).

Final - I have the original "The Pretenders" on vinyl, CD and 96/24. The 96/24 is miserable: it is extremely flat sounding, so I made an Apple Lossless version, applied some equalization, and it sounds (to me) superior to the 96/24 FLAC version. So I guess it really does, as I said, depend on so many parts of the equation, where music can go very wrong or very right.
 
You are mistaken.
High-pitched tones have shorter soundwaves, thereby requiring a high sampling rate. This means that the two are directly linked.

There's a lot of good info on Wikipedia; look it up if you don't believe me.
Also, there's some excellent technical info on this page:
http://people.xiph.org/~xiphmont/demo/neil-young.html

Kris is right. Beyond twice the limit of human hearing (20kHz), the extra tones are for the dogs!
 
1. Many people can't tell the difference.
2. Many people don't care about the difference.
3. I can't tell the difference in the car, or if music comes from the MacBook speakers. Probably couldn't tell if it comes from my TV's speakers.
4. With good headphones, I cannot tell _what_ the difference is. Even if I can tell that one version sounds more enjoyable than another, I couldn't describe _why_. Many people assume that if they can't describe the difference, they have to say they can't hear one.
5. At 256 kbit AAC, I think the quality of your equipment still matters more than the compression, unless you move to very expensive gear.
6. I own a lot of audiobooks, and that's an area where 80 kbit HE-AAC is enough.

Most people don't have a setup where you can hear such differences. The people who are able to hear those differences can do so because they have very high-end setups that reveal those things.

You are not going to notice it if you listen through your MacBook Pro speakers or your car speakers, of course :)
 
[[ So, out of all the objective testing I've seen, it turns out that most people (emphasizing most) CANNOT tell the difference in anything above 256 VBR! The VBR (variable bit rate) is important though, because 256 at a constant bit rate IS more noticeable (but even then it still requires half decent gear). ]]

A question if you don't mind regarding the statement above.

When creating an mp3 file, is it preferable to use a constant (i.e., non-variable) bitrate @ 256?
Or, should I be using the variable bitrate when creating a 256 mp3 file?

I would -reason- that a constant bitrate would be preferable -- whereas making the bitrate "variable" would result in certain passages that were somewhat "less than" 256, and might be more "noticeable". But I could be completely wrong...

If I choose to encode at 320, should I be doing so at a constant bitrate?
 
The only reason not to use VBR MP3 files in my opinion would be compatibility issues. Some older MP3 players or software might have issues with these files.

As far as I know, VBR 256 really means that the encoder strives for an average of 256 kbps. By choosing VBR, you are giving the encoder the freedom to use much less than the target bit rate to encode parts where there is basically no difference in resulting quality (most extreme example: silence or near-silence at the beginning and end of the track), and use the space saved there for more complex parts of the music, where using more than the target bit rate produces a noticeable quality difference.
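For example, encoding the same source both ways with ffmpeg's libmp3lame (just one possible tool; file names are placeholders, and it assumes ffmpeg is installed) looks roughly like this:

```python
"""Sketch: CBR vs VBR MP3 encodes of the same source via ffmpeg/libmp3lame."""
import subprocess

SRC = "track.wav"

# Constant target bit rate: every second gets roughly 256 kbps, whether the
# passage is simple or complex.
subprocess.run(["ffmpeg", "-i", SRC, "-c:a", "libmp3lame",
                "-b:a", "256k", "track_cbr256.mp3"], check=True)

# Variable bit rate: -q:a 0 is LAME's highest VBR quality setting (V0); the
# encoder spends fewer bits on easy passages and more on hard ones, landing
# at a broadly similar average for typical music.
subprocess.run(["ffmpeg", "-i", SRC, "-c:a", "libmp3lame",
                "-q:a", "0", "track_vbr.mp3"], check=True)
```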
 
Actually, no. While high-quality recordings often have higher sampling rates, that doesn't in itself improve audio quality. Human hearing ends at about 20kHz (if you're young); higher frequencies can't usually be heard. According to the Nyquist–Shannon sampling theorem, a signal can be reconstructed perfectly as long as the sampling frequency is at least twice the highest frequency in the signal. That means that a standard audio CD at 44.1kHz can already contain frequencies you can't even hear.

However, a lot of DACs in Computers don't even go that high and very few headphones do at useful levels (even if they claim so).

Keeping all that in mind, it is completely possible that your DD5.1 track DID sound better, but most likely that had to do with mastering. Also, Dolby circuits tend to do some post-processing to the audio stream.

Don't try to tell me that human hearing ends at 20kHz when I already know mine tops out at 22kHz. Average human hearing tops out at 20kHz. However, human hearing in general tops out at 22kHz.

So, your sampling theorem:

http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem

Note figure 3 in the section on aliasing, where it says "The samples of several different sine waves can be identical, when at least one of them is at a frequency above half the sample rate."

Basically, just because we can't hear frequencies above 22kHz doesn't mean they aren't there, and since they are there, we don't actually get a perfect sample, and things won't come out right.

Therefore, your theorem, while true, isn't applicable, since there really is no maximum frequency in audio, so higher-frequency sampling will improve the situation.

It's a bit like the moiré pattern in images. Just because you can't make out the pixels on a Retina display doesn't mean there will never be a moiré pattern.
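For what it's worth, the sentence quoted from that figure is easy to check numerically; here's a tiny numpy sketch (the frequencies are just illustrative):

```python
"""A tone above half the sample rate leaves exactly the same samples behind
as an audible in-band tone (the aliasing described in the Wikipedia figure)."""
import numpy as np

fs = 44_100                        # CD sample rate, Hz
n = np.arange(64)                  # a handful of sample indices
f_inband = 14_100                  # audible tone, below fs/2
f_above = fs - f_inband            # 30,000 Hz, above fs/2 (inaudible)

inband = np.cos(2 * np.pi * f_inband * n / fs)
above = np.cos(2 * np.pi * f_above * n / fs)

print(np.allclose(inband, above))  # True: the samples are indistinguishable
```

Both tones leave identical fingerprints in the samples, which is exactly the situation the low-pass (anti-aliasing) filter in front of an ADC is there to prevent.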
 