Oh please, you gave up any right to complain about other people when you posted "Interesting tin foil hat view of the world with a touch of condescending aggression."

No, I didn't. No-one ever does. That's a bully's perspective you're applying. Kids learn to identify master suppression techniques in school these days. It's time you learned others can see them too.

No, I'm not implying you're a genuine bully. But you, very conveniently, failed to quote what the poster before had said even though you damn sure knew the poster was way out of line.

I'm no better than the average joe when it comes to replies in the forgiving style of Gandhi or Mandela, but what I wrote is actually true. It really was an interesting tin foil hat view of the world with a touch of condescending aggression.

Sorry about this encore, everyone. I know some persistent bugger will write replies that are at best half true trying to lure me into a destructive and unworthy discussion. I hoped some people were better than that. Well, my bad.
 
For all of you who think CD quality is worthless over an iTunes download here is a visual comparison which clearly shows clipping (loss of data) on a 256 file, and how the CD quality version does not experience this. For contrast I have included the original FLAC spectrum analysis.

To start, the lossless, 24-bit 96 kHz FLAC track.
08 - In the End [24-96].png


Followed by the 16-bit 44.1 kHz track.
08 Rush - In the End [16-44] 807kbps.png


And last but (very much so) least, the [I don't know how many bits] 48 kHz, 256 kbps track.
08 Rush - In the End [22-48] 256kbps.png


As compression increases (file size and bit rate reduction), so does the loss of data; this corresponds to clipping of the upper and lower ends of the audible spectrum, leaving you with a tinny sound as the extra data isn't being filled in. At the top of the spectrum on the 256 file you see a clear line where there is no more audio output; it simply stops. This also increases overall volume of the track and just simply makes it louder, but at the end of the day more vacuous.

Maybe I'm not preaching entirely correctly and someone with a bit more knowledge will back me up, but the spectra speak for themselves: 256 is inadequate, 320 is tolerable, and CD quality is what we should be asking for and wanting.
 
No, I'm not implying you're a genuine bully. But you, very conveniently, failed to quote what the poster before had said even though you damn sure knew the poster was way out of line.

The poster that you replied to made a *generalised* comment about audiophiles, and you took it as a personal attack, which it wasn't. What I know is that there is no way that the post you were responding to was more out of line than your response, and - ironically - because you don't want to listen to statements that disagree with your opinion, you start calling foul.
 
For all of you who think CD quality is worthless over an iTunes download here is a visual comparison which clearly shows clipping (loss of data) on a 256 file, and how the CD quality version does not experience this.
Clipping generally refers to the signal exceeding the maximum amplitude allowed by the encoding format. This causes bad distortion. But that's not what your graphs show.

BTW, perceptual audio compression on its own does not cause clipping.
As compression increases (file size and bit rate reduction), so does the loss of data
Perceptual audio compression of course means "loss of data"; that's the whole point. The question is whether the loss is audible or not. Fancy graphs will not answer that question.
this corresponds to clipping of the upper and lower ends of the audible spectrum, leaving you with a tinny sound as the extra data isn't being filled in. At the top of the spectrum on the 256 file you see a clear line where there is no more audio output; it simply stops.
Of course it stops; it's a band-limited signal. But you're talking about frequencies above 20 kHz, which are not audible.
This also increases overall volume of the track and just simply makes it louder, but at the end of the day more vacuous.
No, it doesn't. You are confusing clipping and band-limiting.
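
To make the distinction concrete, here's a minimal sketch of the two effects being mixed up here - hard clipping versus band-limiting - in Python with NumPy/SciPy (my choice of tools, not anything from the posts above):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 1000 * t)               # a clean 1 kHz tone

clipped = np.clip(x * 2.0, -1.0, 1.0)                 # clipping: peaks flattened -> harmonic distortion
sos = butter(8, 16000, btype="low", fs=fs, output="sos")
band_limited = sosfiltfilt(sos, x)                    # band-limiting: only removes content above ~16 kHz

mid = slice(fs // 4, 3 * fs // 4)                     # ignore filter edge transients
print(np.max(np.abs(clipped[mid] - 2 * x[mid])))      # large: the waveform shape has changed
print(np.max(np.abs(band_limited[mid] - x[mid])))     # tiny: a 1 kHz tone passes essentially untouched
```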
 
For all of you who think CD quality is worthless over an iTunes download here is a visual comparison which clearly shows clipping (loss of data) on a 256 file, and how the CD quality version does not experience this. For contrast I have included the original FLAC spectrum analysis.

I don't remember anyone saying that exactly. Of course lossy formats lose data. The question is: does it, IN THE REAL WORLD, make a difference? For the vast majority of consumers, at the data rates used by iTunes, it does not make a difference. On a train, in a car, at the gym, whilst cooking, etc. there's just too much background noise to tell. Can anyone tell the difference between an iTunes download and a CD-quality version? Maybe, and I'd love to see a proper study on that!

As the data rate goes up, the probability of someone being able to tell the difference goes down, but the file size goes up, so it's just a practical compromise.

To start, the lossless, 24-bit 96 kHz FLAC track.
As compression increases (file size and bit rate reduction), so does the loss of data; this corresponds to clipping of the upper and lower ends of the audible spectrum, leaving you with a tinny sound as the extra data isn't being filled in. At the top of the spectrum on the 256 file you see a clear line where there is no more audio output; it simply stops. This also increases overall volume of the track and just simply makes it louder, but at the end of the day more vacuous.

Maybe I'm not preaching entirely correctly and someone with a bit more knowledge will back me up, but the spectra speak for themselves: 256 is inadequate, 320 is tolerable, and CD quality is what we should be asking for and wanting.

Clipping is not the correct word to use here; it has a defined meaning. Band-limiting possibly describes what you are seeing, but the effects you are talking about are above the limit of human hearing anyway - you're just looking at a sharper cut-off. Spectrograms are not really the right tool to show data loss like this. I'm not sure what vacuous sounds like; surely there's no sound in a vacuum?

If people like pretty spectrograms, here's one I like, made from a 176.4 kHz 24-bit file from the hdtracks.co.uk sampler pack.
05-Valse-Caprice in A-flat Major.png


All that extra data to sample a bunch of ultrasonic spectrum WITH NOTHING IN IT - not even noise - which makes me suspect it's been band-limited somewhere in the past (maybe a lower sample-rate master). Your FLAC above at least has what looks like ultrasonic noise in it.
 
I'm not even sure exactly what I'm seeing there in the AAC file, but I know what I'm NOT seeing, and that is clipping. Data compression really should not cause "clipping", which is a result of the signal going over the maximum voltage level. But that's not what you see there. The COLOR code is the output level. The "height" of the graph is the FREQUENCY axis. That "line" across the top of the graph that "looks like" clipping is between -110dB and -115dB down (i.e. ultra-LOW levels; whatever it is, it's very VERY quiet, down in the noise floor). That's the opposite of "clipping", which would show as RED if it were happening (i.e. over 0dB; actually the graph only goes up to -20dB, so even red couldn't be assumed to be clipping, which is off the scale).

If anything, I would guess that line is some kind of noise aberration that is a result of the compression, BUT it's at a frequency that is pretty much inaudible to the human ear, let alone at -110dB (i.e. if you were playing audio at 120dB and destroying your hearing, that noise would only be at 10dB in the room, and that would be virtually inaudible over typical room noise even if it were in a frequency range you could easily hear). At 21kHz, it's 100% inaudible to humans, PERIOD. Thus, the idea that lossy compression is INAUDIBLE remains intact in my mind.

Now let's look at the 24/96 graph. What musical content do you see above 20kHz? Everything I see is broad spectrum NOISE (nothing but a field of dithered space colors ALL -100dB to -110dB down). In other words, there's little there but noise aberrations with some very light evidence of harmonics, but they're so low in volume relative to the signal (like 40dB lower) they would be hard to hear even in the audible spectrum. In other words, even if humans could hear to 40kHz, there's almost no musical content above 20kHz to hear. It's pretty much just room noise.

The ALAC graph looks perfect for band-width limited sound. It rolls off right as it approaches 22kHz with no odd frequency aberrations what-so-ever. If anything, to me, it proves CD sound does exactly what it's supposed to and records the way it's supposed to. The lossy AAC has some visible aberrations, but they're clearly inaudible (exactly what they're supposed to be). 24/96 has useless spectral noise content and not much else (all of which is inaudible anyway). Basically, I see nothing to get excited about.
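
For anyone who wants to read these plots themselves, here's a rough sketch of how a spectrogram like the ones above gets computed (Python with NumPy, SciPy and the soundfile package assumed; "track.wav" is a placeholder): the vertical axis is frequency and the colour is level in dB, which is why an artifact at -110dB looks dramatic on screen but is buried in the noise floor.

```python
import numpy as np
import soundfile as sf                    # assumed package for reading the WAV
from scipy.signal import spectrogram

x, fs = sf.read("track.wav")              # placeholder file name
if x.ndim > 1:
    x = x.mean(axis=1)                    # mix to mono for a single plot

f, t, Sxx = spectrogram(x, fs, nperseg=4096)
Sxx_db = 10 * np.log10(Sxx + 1e-20)       # power in dB: this is the colour scale in the plots above

# f is the frequency ("height") axis; ~20 kHz is the usual upper limit of hearing,
# and levels around -110 dB sit far down in the noise floor.
print(f.max(), Sxx_db.min(), Sxx_db.max())
```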
 
Maybe that's dither way, way up there.

File Under: Audio Engineering - So here's something kinda goofy that I'd like to share: when we (audio production) are providing masters or stems for CD or digital delivery, we have to put a special meter on the master bus and watch out for inter-sample clipping. Even though you may set your bus limiter for a ceiling of -0.5dBFS, there are inter-sample peaks that can still get through and clip the master, but they won't show on the master fader. It's just another little annoyance that I have to monitor, but paying attention to this stuff on my end will make for better-sounding stuff on the listener's end. Note in the image that my true peak is -0.19dBFS. The session is 24/44.1.
Yellow = Bus setting. Red = Inter-sample peak level.

Inter-sample-peaks.jpg
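
For the curious, inter-sample (true) peaks are typically estimated by oversampling the signal and taking the peak of the reconstructed waveform. A minimal sketch of that idea in Python/SciPy (an illustration, not the metering plugin in the screenshot):

```python
import numpy as np
from scipy.signal import resample_poly

def peak_dbfs(x):
    return 20 * np.log10(np.max(np.abs(x)))

def true_peak_dbfs(x, oversample=4):
    """Approximate the inter-sample ("true") peak by oversampling the signal
    and measuring the peak of the reconstructed waveform."""
    return peak_dbfs(resample_poly(x, oversample, 1))

# A tone at fs/4 whose actual peaks fall between samples: the sample peak
# reads about -3 dBFS, but the reconstructed waveform peaks near 0 dBFS.
fs = 44100
n = np.arange(fs)
x = 0.99 * np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)

print(peak_dbfs(x))        # ~ -3.1 dBFS (what a plain sample-peak meter shows)
print(true_peak_dbfs(x))   # ~ -0.1 dBFS (closer to what actually comes out of the DAC)
```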
 
Thus, the idea that lossy compression is INAUDIBLE remains intact in my mind.

Whoa, be careful there! Lossy compression most definitely CAN be audible; it just depends on how far you go..... some more pretty graphs, even though I don't really agree with their use. Audio files here. Short and simple: I can't tell the difference between FLAC and 320kbps, but I can between FLAC and 32kbps (and yes, I did some quick and dirty rounds with an ABX tester). So where is the crossover between audible and inaudible? That's the difficult question.

Sample music FLAC of original

Test.png



320kbps MP3

Test320.png


32kbps MP3

Test32.png
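
For anyone who wants to run the same kind of check, the blinding logic of a single ABX round is simple enough to script. A rough sketch in Python (playback is left as a commented placeholder; this isn't any particular ABX tool):

```python
import random

def abx_trials(n_trials=16):
    """Run forced-choice trials where X is secretly either clip A or clip B."""
    correct = 0
    for _ in range(n_trials):
        x_is_a = random.random() < 0.5
        # play_clip("A"); play_clip("B"); play_clip("X")   # placeholder: however you actually play the audio
        answer = input("Is X the same as A or B? ").strip().upper()
        if (answer == "A") == x_is_a:
            correct += 1
    return correct

# print(abx_trials())   # compare the score to chance (~8/16) before claiming you can hear a difference
```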
 
Whoa, be careful there! Lossy compression most definitely CAN be audible; it just depends on how far you go..... some more pretty graphs, even though I don't really agree with their use. Audio files here. Short and simple: I can't tell the difference between FLAC and 320kbps, but I can between FLAC and 32kbps (and yes, I did some quick and dirty rounds with an ABX tester). So where is the crossover between audible and inaudible? That's the difficult question.

I meant the goal of lossy compression is/was to be inaudible at a reasonable setting (obviously everything has its limits). Here I was specifically referring to the 256kbps example, which for AAC should be transparent (MP3 is more like 320kbps, as it is less efficient). I doubt you could easily tell whether there's an audible difference from a spectrogram, but you can tell some things about the data that is visible (like that so-called "clipping" artifact that was shown, which isn't clipping at all).
 
Whoa, be careful there! Lossy compression most definitely CAN be audible; it just depends on how far you go..... some more pretty graphs, even though I don't really agree with their use. Audio files here. Short and simple: I can't tell the difference between FLAC and 320kbps, but I can between FLAC and 32kbps (and yes, I did some quick and dirty rounds with an ABX tester). So where is the crossover between audible and inaudible? That's the difficult question.
The MPEG did formal listening tests (using double-blind methodology) to answer just this question back in the 90s. The test group consisted of ~30 audio professionals. Their conclusion based on the statistical results was that stereo AAC at 128 kbps has "indistinguishable quality" according to a definition by the EBU. This was likely why Apple initially chose that bitrate for the iTunes store.

If you search for "Report on the MPEG-2 AAC Stereo Verification Tests" you might find a copy of the report somewhere.
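
As an aside on how forced-choice listening results get scored: you can ask how unlikely a given score would be under pure guessing. A toy example in Python/SciPy (the trial counts here are made up, not figures from the MPEG report):

```python
from scipy.stats import binomtest

# One listener, 16 ABX trials, 12 correct: could this be chance?
result = binomtest(12, 16, p=0.5, alternative="greater")
print(result.pvalue)   # ~0.038 -> unlikely (but not impossible) to be pure guessing
```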
 
The MPEG did formal listening tests (using double-blind methodology) to answer just this question back in the 90s. The test group consisted of ~30 audio professionals. Their conclusion based on the statistical results was that stereo AAC at 128 kbps has "indistinguishable quality" according to a definition by the EBU. This was likely why Apple initially chose that bitrate for the iTunes store.

If you search for "Report on the MPEG-2 AAC Stereo Verification Tests" you might find a copy of the report somewhere.

The trouble with quoting "indistinguishable quality" is that it gives the impression of an absolute - i.e. it is indistinguishable, therefore you will not be able to distinguish it.

However, the report is more subtle than that - it did find that AAC was statistically distinguishable on some samples. But it was below the EBU threshold. So, in one sense it is statistically indistinguishable, but at the same time it absolutely was distinguishable.

Also, note the sharp difference between the profiles of AAC - while "main" performs statistically best, "LC" just scrapes into the EBU threshold, and "SSR" falls outside. Looking at codec comparisons, it seems that Apple (and probably most music services) are using low complexity profiles, so their encodings are going to be on the worse side of what is achievable with AAC.

And that's kind of always the problem with lossy compression - it's good, no doubting that it can get very close to the original, but tracks are different, encoding and decoding software might have slightly different implementations (you can't rule out that a particular combination of one encoder and a different decoder might give perceptibly worse results).

When making a storage / bandwidth trade off has value, then AAC provides excellent quality. But if storage isn't a problem (like my home network), and bandwidth isn't an issue (home network and broadband), why not have lossless?

The other question mark is just how transparent your listening system is. Quite rightly, we choose our systems subjectively, not objectively - we want to enjoy what we listen to. So whilst losing low energy sounds might be imperceptible in a controlled, neutral environment, our preferences may have put together a system that boosts those low energy sounds (e.g. where large parts of certain frequency ranges might be masked). Suddenly, it's much more apparent that those sounds have been lost from the source material during compression. So subjectively, there are lots of reasons why you might start to notice lossy compression.

Anyway, that report in full:

http://www.radiojackie.com/im/ISO N2006 MPEG-2 AAC Stereo.pdf
 

Thanks for digging that report up. It generally supports what I thought I was trying to get across! I suppose that brings us full circle back to the start of this thread, skipping all the side conversations about analogue etc.

I guess what those of us on the "trust the engineering" side are saying is that we'd prefer Apple to offer music lossless or uncompressed at 44.1/16 rather than in a compressed 96/24 format. That's fine for downloads, but I'm guessing for streaming lossy is still the most practical, where less compression at 44.1/16 would be preferable to more compression at 96/24.
 
However, the report is more subtle than that - it did find that AAC was statistically distinguishable on some samples. But it was below the EBU threshold. So, in one sense it is statistically indistinguishable, but at the same time it absolutely was distinguishable.

Also, note the sharp difference between the profiles of AAC - while "main" performs statistically best, "LC" just scrapes into the EBU threshold, and "SSR" falls outside. Looking at codec comparisons, it seems that Apple (and probably most music services) are using low complexity profiles, so their encodings are going to be on the worse side of what is achievable with AAC.

And that's kind of always the problem with lossy compression - it's good, no doubting that it can get very close to the original, but tracks are different, encoding and decoding software might have slightly different implementations (you can't rule out that a particular combination of one encoder and a different decoder might give perceptibly worse results).

I've always looked at 128kbps as the start of where it becomes transparent. A 128kbps AAC file can sound transparent if the absolute best encoders are used and care is taken, but 256kbps provides enough of a buffer that it should leave virtually no doubt. And doubt, as it were, was the reason it was moved to 256kbps on iTunes, and that doubt was based on MP3, which sounds pretty darn bad at 128kbps by comparison. Consumers were downloading massive amounts of 'free' MP3s from sources like Napster, and those MP3s were typically 128kbps (the "high quality" ones were typically 192kbps, which is still too low for MP3 but is the point where you need a reasonably good system to hear a difference), so MP3 got a really bad reputation among people with decent systems or discerning ears for sounding like crap. Low rates on lossy compression are akin to low rates on JPEG compression: they produce audible artifacts that are objectionable to most ears. So when AAC came along and claimed to be transparent at a rate where MP3 sounded like whistling crap, the psychological impact was "not possible" in a lot of minds and "too low". It doesn't even matter if it's true or not at that point; people are going to disparage it. 256kbps MP3 had a much better reputation (very hard to tell differences) and 320kbps was regarded as transparent for MP3, except by audio "snobs" for whom even lossless 16/44.1kHz was somehow not good enough.

The audio snobs (I hate to say audiophile, as not all "audiophiles" are ignorant or crazy, IMO; wanting great sound quality doesn't have to mean ignorance, after all) wanted 24/96 and then 24/192, and they based this on two things. One is that it's what is used in recording, so it must be "better" for playback (in reality it's used for headroom so you don't accidentally "clip" your signal, which is very easy to do if you're only recording at DAT or CD rates, and the higher sampling rate ensures even a cheap filter will work for band-limiting; you'd have to oversample anyway, and this effectively removes that step).

The other reason is this persistent (wrong) idea that digital sound is like pixels on a monitor: that square pixels are jagged stair-steps, that oversampling is a bit like anti-aliasing (smoothing edges, but still visible if you zoom in), and that increasing resolution would make those jaggies SO much smaller. I'm surprised they haven't concluded they need 64/256 or something by now, since most of these types are convinced these "stair steps" are the real problem with digital audio and that you need millions if not BILLIONS of samples per second to "smooth those steps out" (they draw graphs with sampling points showing a jagged line that reminds you of square pixels without anti-aliasing and proclaim THAT is why digital sounds like crap). The problem is that this is total nonsense! The reconstruction filter produces smooth sine waves in the output from the waveform samples; the sample points are mathematical reconstruction references, not stair steps, and thus there are no steps or gaps in digital audio. Audio signals are built from WAVE functions, not linear pixels and lines!
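
A small numerical sketch of that last point, for anyone curious (Python/NumPy assumed): the Whittaker-Shannon reconstruction that a DAC's filter approximates turns samples back into a smooth, band-limited waveform - no stair-steps anywhere.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: the smooth, band-limited waveform
    implied by the samples, evaluated at arbitrary times t."""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] * np.sinc(fs * t[:, None] - n[None, :]), axis=1)

fs = 44100
n = np.arange(256)
x = np.sin(2 * np.pi * 1000 * n / fs)            # a 1 kHz tone, sampled at 44.1 kHz

# Evaluate the reconstruction at points between the original sample instants,
# away from the edges of this short sample block.
t = np.linspace(100 / fs, 150 / fs, 2000)
err = sinc_reconstruct(x, fs, t) - np.sin(2 * np.pi * 1000 * t)
print(np.max(np.abs(err)))   # small; the residual comes from truncating the sinc sum, not from "steps"
```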

So why does 8-bit audio often sound like crap on something like your old Amiga computers? Isn't 8-bit audio flawed, and doesn't 16-bit audio smooth out those stair-steps, and that's why it's so much better sounding? Nope. There are a couple of reasons, and it's a combination of cheap/poorly made samplers (no oversampling) creating bandwidth aberrations, thus contributing to the idea of "bad digital sound", and very limited specs on the playback chips (poor signal-to-noise ratios, limited filtering options, and a lot of noise being picked up around the motherboard, from what I've read).

When making a storage / bandwidth trade off has value, then AAC provides excellent quality. But if storage isn't a problem (like my home network), and bandwidth isn't an issue (home network and broadband), why not have lossless?

Certainly on the recording end, lossless means you can go in and edit things again if needed. Remastering is possible. You'd never want a lossy master; you'd end up with a new, even lossier version of a lossy recording. But if it's the final consumer product, it really doesn't matter as long as it's transparent. People like the idea of not having to "worry" about whether it's transparent, and so they want lossless. Others have the mentality of wanting the same file the master has, so they hear what the mastering engineer hears (that's certainly the advertising gimmick behind DTS-HD Master Audio and Dolby TrueHD, which are ultimately just lossless compression around a lossy core, i.e. the codec restores the missing information whether it's audible or not). You "know" you have the best possible sound. Of course, in reality you have the sound the mastering engineer put out as the final product either way, which could be good or it could be crap. There's a false sense of security around it, IMO. A bad recording is still a bad recording even if it's 24/192 and lossless.

The other question mark is just how transparent your listening system is.

It's also how good your ears/brain are at hearing tiny differences. The problems come in when people think they hear things that aren't there, and that is why ABX-type double-blind testing is used in SERIOUS studies, not just conjecture and advertising. The human brain is far too easily fooled. The human body has been known to even heal itself on placebos. If all drug testing were like Stereophile methods for audio, we'd have a LOT of sugar pills on the market (although in a way we already do, with homeopathic-type stuff sold without any testing at all, carrying the "not approved by the FDA for any medical use" type warnings). People buy those like crazy too, without any idea whether they are poisoning themselves or wasting money on a sugar pill. But they BELIEVE in them. The problem with faith is it's just a hope something is true, not a fact that it is.
 
That's fine for downloads, but I'm guessing for streaming lossy is still the most practical, where less compression at 44.1/16 would be preferable to more compression at 96/24.

If you are streaming over 3G, or storing on a portable device where you don't want to take up too much space, then AAC makes sense.

My home broadband exceeds 150Mbps, and even at the low end, you have more bandwidth than you need for FLAC streaming.

Or put it another way - if people are streaming Netflix, etc. in HD, then streaming FLAC really isn't a problem.
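
Rough numbers behind that, for anyone who wants them (the ~60% FLAC ratio is just a typical assumption; it varies a lot by material):

```python
# Uncompressed stereo CD audio vs. a typical broadband connection
cd_bits_per_second = 44100 * 16 * 2                 # 1,411,200 bit/s of PCM
flac_bits_per_second = cd_bits_per_second * 0.6     # assumed ~60% lossless compression ratio

print(cd_bits_per_second / 1e6)      # ~1.4 Mbit/s
print(flac_bits_per_second / 1e6)    # ~0.85 Mbit/s -- a rounding error next to a 150 Mbit/s line
```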
 
If you are streaming over 3G, or storing on a portable device where you don't want to take up too much space, then AAC makes sense.

My home broadband exceeds 150Mbps, and even at the low end, you have more bandwidth than you need for FLAC streaming.

Or put it another way - if people are streaming Netflix, etc. in HD, then streaming FLAC really isn't a problem.

For music, however, you need a platform that can scale with bandwidth well, as people may be moving from WiFi to 3G to 4G. You either take the lowest acceptable bandwidth, safe in the knowledge that it will be fine on most connections and for most people, or you have to have a platform that dynamically adjusts (difficult to do if you have to switch decoders). A clunkier option is allowing customers to choose the quality, but then you might have to deal with more customer support calls... Compromises, compromises!
 
Guys, do you not notice that discussing this leads to endless circular arguments? Quite simply, this is a subjective matter, and you can't argue subjective stuff. It's like arguing my religion is more right than yours. There is no scientific way to define a clear answer to the question "what can a human hear" in terms of compression tricks, and the result of that is simply that some people will experience what they hear differently than others. It is irrelevant if those people only imagine they are hearing things differently or if they actually do. It remains a subjective thing, and arguing about it is pointless.

The facts are: Lossy compression is lossy and lossless compression is lossless. Beyond that, everyone has to choose for themselves what they prefer.
 
A 128kbps AAC file can sound transparent if the absolute best encoders are used and care is taken, but 256kbps provides enough of a buffer that it should leave virtually no doubt.

"Can" is a vague term. Look at the report that was linked to. None of the lossy codecs scored as being truly transparent - although AAC, 128kbps using the MAIN profile came pretty close.

But the majority of encodings appear to be using the Low Complexity profile. In the linked report, the LC encodings were significantly less transparent.

Yes, it's close. Close enough for many circumstances. But it is not transparent.

So why does 8-bit audio often sound like crap on something like your old Amiga computers? Isn't 8-bit audio flawed, and doesn't 16-bit audio smooth out those stair-steps, and that's why it's so much better sounding? Nope. There are a couple of reasons, and it's a combination of cheap/poorly made samplers (no oversampling) creating bandwidth aberrations, thus contributing to the idea of "bad digital sound", and very limited specs on the playback chips (poor signal-to-noise ratios, limited filtering options, and a lot of noise being picked up around the motherboard, from what I've read).

You've also missed out that 8-bit audio was also usually accompanied by a much lower sampling rate (theoretically, it didn't need to be, but in practice you had even more limited storage capacity back then, so having 8/22 instead of 16/44 means a quarter of the data, without compression - even without considering DAC capability), plus on something like an Amiga, you are doing multiple-channel mixing when you've already got limited dynamic range.
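
To put some numbers on the bit-depth side of that, here's a quick sketch (Python/NumPy, plain uniform quantization with no dither assumed): each bit buys roughly 6 dB of signal-to-noise ratio, which is a big part of why 8-bit playback is audibly noisy in a way 16-bit isn't.

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantization of a signal in [-1, 1] to the given bit depth (no dither)."""
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

fs = 44100
t = np.arange(fs) / fs
x = 0.99 * np.sin(2 * np.pi * 440 * t)        # a near-full-scale 440 Hz tone

for bits in (8, 16):
    err = quantize(x, bits) - x
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(bits, round(snr, 1))                # roughly 50 dB at 8-bit vs. roughly 98 dB at 16-bit
```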

But if it's the final consumer product, it really doesn't matter as long as it's transparent. People like the idea of not having to "worry" about whether it's transparent and so they want lossless. Others have the mentality of wanting the same file the master has so they hear what the mastering engineer hears.

In a theoretically ideal world, you want to have the mixed sound - as it was heard by the engineer - delivered exactly to your system. And any process, such as downsampling, has the possibility of affecting the sound in some way.

In reality, downsampling doesn't have a noticeable effect, but in some cases deliberate processing is applied - e.g. notable differences between 24-bit/96 releases / vinyl and CD counterparts, where the sound has been deliberately mangled, such as through compression.

Lossy compression is deliberately altering the sound - but using psychoacoustic algorithms to remove sounds that should be inaudible. It's been proven in the report above that lossy compression *is* noticeable, however slight, at least at 128kbps. Is any recording engineer / record label taking care to initially do the lossy compression and listen to it to ensure that is what they wanted, going back and tweaking the parameters of the compression if need be? Of course not.

I'm not anti-AAC. I use AAC on my mobile phone, with ear buds. There is a clear reason for streaming sites to continue to offer scalable services starting lossy compression at different rates. But more than anything, demanding lossless streaming is a clear indication to the recording industry that we don't want music to be "tampered" with. It's as much about getting the industry to change its practices and improve the production processes as a whole, cutting out compression, as it is about the final delivery format.

It's also how good your ears/brain are at hearing tiny differences.

Maybe, but that's not really the point. When you have lossy compression, the algorithm is removing sounds that are less apparent and you won't be able to hear. And that may well be true in an ideal listening room.

But the real world is not a specially configured listening room. It's not the sound engineer's studio and monitoring equipment.

The real world is odd shaped rooms, and a variety of equipment cobbled together based on all sorts of preferences and prejudices.

So whether a lossy format seems transparent in the real world is not just a matter of whether you can perceive the difference in an ideal listening room. It's whether it sounds transparent on your system, which may be skewed in a way that makes sounds you wouldn't normally hear in the ideal listening room audible.
For music, however, you need a platform that can scale with bandwidth well, as people may be moving from WiFi to 3G to 4G. You either take the lowest acceptable bandwidth, safe in the knowledge that it will be fine on most connections and for most people, or you have to have a platform that dynamically adjusts (difficult to do if you have to switch decoders). A clunkier option is allowing customers to choose the quality, but then you might have to deal with more customer support calls... Compromises, compromises!

Which is fine. I never said any streaming service should *get rid* of any format that they currently support. I'm just saying they should offer lossless *as well* - and possibly even charge more for it, like Tidal.
 
I guess this will be an even better reason in iOS to switch off the high-quality setting.
 
"Can" is a vague term. Look at the report that was linked to. None of the lossy codecs scored as being truly transparent - although AAC, 128kbps using the MAIN profile came pretty close.

But the majority of encodings appear to be using the Low Complexity profile. In the linked report, the LC encodings were significantly less transparent.

Yes, it's close. Close enough for many circumstances. But it is not transparent.

OK, I looked at your report. I said, for assured transparency, to use 256kbps for AAC and 320kbps for MP3, and that report didn't even test those bitrates! You are wasting my time replying with something that doesn't address the bitrates I cited (they're not in the graphs above either). "Maybe" 128kbps isn't 100% "transparent", but that is precisely why Apple upgraded to 256kbps. ALL my library is encoded at 256kbps, not 128kbps. The ONLY people I've seen claim 256kbps for AAC isn't enough are people who can't back up their claims and audiophile "quacks" who believe in non-existent digital stair-steps.

In a theoretically ideal world, you want to have the mixed sound - as it was heard by the engineer - delivered exactly to your system. And any process, such as downsampling, has the possibility of affecting the sound in some way.

You've just wandered into the land of fairies and magical creatures unless you are talking about downsampling below the CD limits where the differences might be audible. (You later talk about deliberate changes; that has nothing to do with downsampling itself).

And no, you do NOT want mixed sound by the engineer in many cases. You want something MUCH BETTER. I've said from the start that the PROBLEM of "bad" digital audio (and analog for that matter) is the POOR overly compressed mixes made by mastering engineers. I don't want THAT sound. I want GOOD sound. The solution to good digital audio is to replace bad masters with good ones. All this NONSENSE about AAC at 256kbps not being good enough and 24/96 rates is a lot of horse crap and THAT is the entire point I've been making.

In reality, downsampling doesn't have a noticeable effect, but in some cases deliberate processing is applied - e.g. notable differences between 24-bit/96 releases / vinyl and CD counterparts, where the sound has been deliberately mangled, such as through compression.

Then THAT is not a result of downsampling, but deliberate CHANGES that alter the sound. Why even bring up downsampling?

Lossy compression is deliberately altering the sound

Ah, I see. It's because lossy is evil. :rolleyes:

- but using psychoacoustic algorithms to remove sounds that should be inaudible. It's been proven in the report above that lossy compression *is* noticeable, however slight, at least at 128kbps. Is any recording engineer / record label taking care to initially do the lossy compression and listen to it to ensure that is what they wanted, going back and tweaking the parameters of the compression if need be? Of course not.

So use 256kbps not 128kbps. It's not that difficult.

I'm not anti-AAC. I use AAC on my mobile phone, with ear buds.

The mere fact you use earbuds (I would NEVER use those) tells me a lot about your knowledge and experience level. IF you wanted even "good" sound, you would NOT use earbuds. The transducer (speakers/headphones) is the NUMBER 1 cause of bad sound for any given person. How can you possibly talk about hearing what the mastering engineer hears on his workstation in one breath and then talk about listening to crappy earbuds in the next? The only way you'll EVER hear what the sound engineer heard on his console is to use the same room, hardware and speakers he used. ANYTHING ELSE will alter the sound in some way. And there's nothing magical about his setup either.

The RECORDING is what really matters, and it can be remastered in any number of ways (just compare the Dark Side of the Moon Alan Parsons mixes vs. the James Guthrie mixes vs. the stereo mix; now try down-mixing to 2-channel and compare to the 2-channel mix). They are VERY different, and the surround versions aren't automatically better, IMO, because they emphasize different instruments, which changes the sound a LOT.

I use 256kbps AAC at home and on my iPod and in my car. I've done extensive testing with my best sounding albums doing A/B tests between 256kbps AAC and ALAC and I could never detect a difference and if you've looked at my system I posted earlier, it's not exactly low-end.

If people WANT to use ALAC, go ahead. It won't hurt anything and it will give some people peace of mind regardless. I think Apple should be selling lossless files before 24/96 compressed, although without remastering it won't sound any different in either case. I personally keep a set of ALAC files of my CD collection on another drive (and a backup at another location) as a backup for my actual CDs and in case I ever want to tweak levels or something, but I don't typically use them for everyday use since it creates too much of a PITA in iTunes to maintain two sets of libraries or wait for it to convert every time I want to sync new music. I've recently moved to Kodi for some rooms, so I can easily use ALAC there, but it's really moot.
 

Google Music uses 320 kbit per second MP3 on their service as far as I know and I notice that I'm using that instead of my own lossless iTunes files within 15 minutes of listening to stuff. People have tested this with me multiple times. There is no doubt about it. The difference however isn't one that is clearly audible. It's more of a feeling or lack thereof. Whatever is missing in 320 kbps MP3s causes me to feel less emotion while listening to the same music. And that is the difference I notice after a while (roughly 10 to 15 minutes) and how I am able to differentiate. Some other people have reported similar findings, many others get "annoyed" while listening to music they like in lossy compressions. I think the brain does more processing than we think and even while we don't realize the changes lossy music makes consciously, we do on some level register them - and that's a difference. That being said I have never done any such testing on AAC with 256 kbps, it's entirely possible that its superior encoding techniques solve this as well.
 
Google Music uses 320 kbit per second MP3 on their service as far as I know and I notice that I'm using that instead of my own lossless iTunes files within 15 minutes of listening to stuff. People have tested this with me multiple times. There is no doubt about it. The difference however isn't one that is clearly audible. It's more of a feeling or lack thereof. Whatever is missing in 320 kbps MP3s causes me to feel less emotion while listening to the same music. And that is the difference I notice after a while (roughly 10 to 15 minutes) and how I am able to differentiate. Some other people have reported similar findings, many others get "annoyed" while listening to music they like in lossy compressions. I think the brain does more processing than we think and even while we don't realize the changes lossy music makes consciously, we do on some level register them - and that's a difference. That being said I have never done any such testing on AAC with 256 kbps, it's entirely possible that its superior encoding techniques solve this as well.

Your first sentence (highlighted part in bold) implies you switch to Google Music after listening to lossless. But I assume by your later statements you mean the opposite. In any case, a lot of music on iTunes (I've never used Google Music) is DIFFERENT from that on the CD version. Their "Mastered For iTunes" label will now clue you in when that's the case (although they may use the same master on a CD and still get the label), but a lot of older music may use a different master and not have the label, so you can never be too sure. I have bought songs from iTunes and later got the CD, and the iTunes versions are often much LOUDER (therefore more compressed) than the CD. There are often several different CD versions out there as well. So just "hearing a difference" (even if you can prove it) between two different sources you have no control over would invalidate the type of scientific testing some of us are talking about.

However, if what you imply is true, then you can take a CD and make your own 320kbps MP3 or 256kbps AAC and do an ABX double-blind test (which has no time constraints, so this idea that blind testing is no good because you don't notice right away is BS) and you should be able to get it right almost 100% of the time with no difficulty, since you would always "sense" that "something is wrong" with the compressed one. However, I have YET to see a single person EVER do this. Why? If it's so damn obvious and so damn easy to "sense", then why can't anyone EVER PROVE it? And THAT is why I say BS. I've tested myself and I can't hear or "sense" any such things, and I've been accused by a lot of people that know me of being WAY too picky about sound.
 
You are wasting my time replying with something that doesn't address the bitrates I cited (they're not in the graphs above either). "Maybe" 128kbps isn't 100% "transparent", but that is precisely why Apple upgraded to 256kbps.

No, you said:

"A 128kbps AAC file can sound transparent if the absolute best encoders are used and care is taken, but 256kbps provides enough of a buffer that it should leave virtually no doubt."

You are just changing your argument because you won't admit to overstating your case.


You've just wandered into the land of fairies and magical creatures unless you are talking about downsampling below the CD limits where the differences might be audible. (You later talk about deliberate changes; that has nothing to do with downsampling itself).

And I also said that downsampling doesn't cause any issue, but as it is an automated process that occurs after the mixing and isn't directly monitored, some people might want to have precisely the audio stream that was monitored, with zero processing applied.

And no, you do NOT want mixed sound by the engineer in many cases. You want something MUCH BETTER. I've said from the start that the PROBLEM of "bad" digital audio (and analog for that matter) is the POOR overly compressed mixes made by mastering engineers.

Which is why you want what is heard during *mixing*, not during *mastering*. The person mixing it generally isn't compressing the sound, which is why vinyl and high-res files get offered at the full dynamic range, and the CD ends up sounding like overblown crap.

All this NONSENSE about AAC at 256kbps not being good enough and 24/96 rates is a lot of horse crap and THAT is the entire point I've been making.

24/96 is nonsense because it can be *empirically proven* that it offers no advantage over 16/44.1.
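
The back-of-envelope arithmetic usually cited for that (a sketch; the ~20 kHz hearing limit is the standard assumption):

```python
import math

print(44100 / 2)                    # Nyquist frequency: 22050 Hz, above the ~20 kHz limit of hearing
print(20 * math.log10(2 ** 16))     # ~96 dB of dynamic range at 16-bit
print(20 * math.log10(2 ** 24))     # ~144 dB at 24-bit, beyond any real-world playback chain
```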

Lossy compression does - by its name - lose data, even at the higher bit rates. Whether that has an effect is not empirical, but subjective.

And because what matters is the real world use of the audio, not an arbitrary (even nominally "perfect") listening condition, there are factors to that which go beyond those taken into account in industry tests.

The mere fact you use earbuds (I would NEVER use those) tells me a lot about your knowledge and experience level.

No it doesn't. I use earbuds when I'm listening to music / podcasts on my phone, because I'm listening that way when I am out in public. Trying to obtain very high fidelity when on a train, or plane, is pointless. So I use earbuds *for convenience* - because, shockingly, that's more important in those conditions.

At home, I don't use earbuds. I use a system where I can tell the difference between the highest quality stream on Spotify compared to my lossless rips, but not a lossless stream from Tidal.

Are there other factors at play, beyond lossy compression? Possibly. But it's all fairly irrelevant. There is no good reason for streaming services to not offer lossless streaming as the high-end of their product - it doesn't take away from the people that are happy to use lossy compression, we have the bandwidth to cope with it, and people are prepared to pay for it.

The only way you'll EVER hear what the sound engineer heard on his console is to use the same room, hardware and speakers he used. ANYTHING ELSE will alter the sound in some way. And there's nothing magical about his setup either.

Which is PRECISELY WHAT I WAS SAYING. Your own system, your own environment will change the sound.

I could never detect a difference and if you've looked at my system I posted earlier, it's not exactly low-end.

Yes, well I did detect a difference between 256kbps OGG (Spotify) and CD, but not lossless Tidal and CD. That's using a Sonos, a NAD direct digital amp and B&W 805SEs. That's not exactly a low end system either. I'm happy to admit there may be other factors at play, but also like I said, there is no good reason for streaming sites to not be offering an option of lossless CD quality for home use. It's not just good for listeners, but good for the industry as these options can attract a pricing premium at negligible marginal cost.

If people WANT to use ALAC, go ahead. It won't hurt anything and it will give some people peace of mind regardless. I think Apple should be selling lossless files before 24/96 compressed, although without remastering it won't sound any different in either case. I personally keep a set of ALAC files of my CD collection on another drive (and a backup at another location) as a backup for my actual CDs and in case I ever want to tweak levels or something, but I don't typically use them for everyday use since it creates too much of a PITA in iTunes to maintain two sets of libraries or wait for it to convert every time I want to sync new music. I've recently moved to Kodi for some rooms, so I can easily use ALAC there, but it's really moot.

Which is, again, PRECISELY WHAT I SAID. And precisely what I've done with ripping ALAC to a NAS. The difference is a Sonos can just index and play the ALAC directly from the NAS share, and I don't maintain an iTunes library. So, if I want to transfer files to the phone, I can just drag and drop them, and it will transcode to AAC on the fly.

Seriously, you need to take a deep breath and stop trying to have an argument with everyone.
 
Your first sentence (highlighted part in bold) implies you switch to Google Music after listening to lossless. But I assume by your later statements you mean the opposite.

Sorry I wasn't very clear. I have my own music collection, ripped from CDs as ALAC in iTunes and comparing that to the Google service that allows you to upload your music and stream it. Sometimes I listen to Google and sometimes I listen to iTunes. I usually notice that I "forgot" to switch from Google to iTunes after I've listened for a while because of the feeling that something is missing, but never the other way around. This suggests to me that I notice something about their MP3 quality.

However, if what you imply is true, then you can take a CD and make your own 320kbps MP3 or 256kbps AAC and do an ABX double-blind test (which has no time constraints, so this idea that blind testing is no good because you don't notice right away is BS) and you should be able to get it right almost 100% of the time with no difficulty, since you would always "sense" that "something is wrong" with the compressed one.

I've done this with friends by having headphones on and them switching on music via iTunes / Google randomly and having me listen and I was able to tell with a more than 80% reliability which one it was.

I think the problem that prevents people from demonstrating such feats is that in order to create proof like that you need to make a big ruckus. You either have to go to some place to be tested in an alien environment or (and this is already way harder to get going) have them come to you to test you in your known environment. In both cases the test subjects are pretty riled up from all the stuff happening, and such "emotional" things cannot be tested well in these circumstances. This also shows, however, how tiny the difference can be. It might only be noticeable to some people, or only affect some people and not others. It goes to show how close the codecs are to providing an equal experience with such a reduced data rate in any case.
 
No, you said:

"A 128kbps AAC file can sound transparent if the absolute best encoders are used and care is taken, but 256kbps provides enough of a buffer that it should leave virtually no doubt."

You are just changing your argument because you won't admit to overstating your case.

Do you know the grammatical difference between the words "CAN" and "DOES"???? Holy fracking hell. Let's make a giant volcano out of nothing. I said a lot of other things too, but after harping on it, let's ignore all that and boil it down to one sentence.

And I also said that downsampling doesn't cause any issue, but as it is an automated process that occurs after the mixing and isn't directly monitored, some people might want to have precisely the audio stream that was monitored, with zero processing applied.

And that says to me you believe in Voodoo Magic. You don't NEED the 24/96 master. Most of the people that want to "have" it want it only because they believe it will sound better. 100% psychological nonsense.

I've got the 24/96 master of Amused To Death. I've got the 16/44 version. I think most people that have heard that album will agree it's one of the best recorded rock albums of all time and it's VERY dynamic. I still hear NO DIFFERENCE between the 24/96 version and its 16/44 down-sampled CD.

Which is why you want what is heard during *mixing*, not during *mastering*. The person mixing it generally isn't compressing the sound, which is why vinyl and high-res files get offered at the full dynamic range, and the CD ends up sounding like overblown crap.

And who is selling THAT? I agree that a fully uncompressed master would be awesome in many cases, and I've been arguing from the start that fixing the masters is the key to better sound. But attacking the distribution medium requires proof, and I don't get ANY proof regarding 16/44 CDs versus 24/96 masters. All I get is supposition and speculation. So now it's not CDs, it's AAC. But where's the study done on 256kbps AAC vs. the CD lossless version or even the 24/96 master? I haven't seen that posted yet. Instead, I see more "I can hear it" or now "I can SENSE it", the latter with claims that it's SO BAD that they have to switch within 15 minutes or they'll go insane! Yet when asked to PROVE that claim, all you get is SILENCE, because it's 100% BS NONSENSE. People have some psychological issue listening to something when they "know what it is" already and believe there to be something wrong with it. Hide that from them and they suddenly don't know a damn thing. That requires mental health care to fix, not a new format.

24/96 is nonsense because it can be *empirically proven* that it offers no advantage over 16/44.1.

OK. But you won't get a lot of agreement on that by 24/96 fans.

Lossy compression does - by its name - lose data, even at the higher bit rates.

It's not subjective. It's inherently PROVABLE by empirical testing! Find someone that claims to hear or "sense" a difference so bad they have to switch to lossless within 15 minutes, and test that person with a proper double-blind ABX test. If they can show that they can hear what they claim beyond a statistical "guess", then I'll gladly agree there's a problem with the format or bit-rate or whatever. But ALL I see is people arguing that they CAN hear it, but they don't want to prove it. Oh well. The burden of proof is on the person making the extraordinary claim.

And because what matters is the real world use of the audio, not an arbitrary (even nominally "perfect") listening condition, there are factors to that which go beyond those taken into account in industry tests.

More abstract "Voodoo" talk. Any controlled condition can test whether something is audible to someone or not.

No it doesn't. I use earbuds when I'm listening to music / podcasts on my phone, because I'm listening that way when I am out in public. Trying to obtain very high fidelity when on a train, or plane, is pointless. So I use earbuds *for convenience* - because, shockingly, that's more important in those conditions.

Excuses, excuses. There are other headphones available than fracking earbuds - ones that fit in the ear and isolate you to a large degree from the environment. I sometimes use noise-reducing JVC headphones at work around industrial machinery when I'm stuck in one location for a long period. To say I can't hear the difference between an earbud and a high-quality headphone in a noisy environment is pretty extreme given the low quality of earbuds. Noise cancellation improves the experience as well. Noise cancelling would be bad in a quiet environment since it can introduce aberrations of its own, but those are minor compared to the noise of machinery or a jet engine.

At home, I don't use earbuds. I use a system where I can tell the difference between the highest quality stream on Spotify compared to my lossless rips, but not a lossless stream from Tidal.

I don't use Spotify either. I think artists should be paid for their work and the streaming model is set up to benefit the Music Industry not the artists.

Which is PRECISELY WHAT I WAS SAYING. Your own system, your own environment will change the sound.

No, you're saying that you need to recreate the studio environment and that requires having the studio master. I'm saying unless you can recreate EVERY studio, that's going to run into more problems than just picking out typical studio speakers. Did you know some of the most common speakers in history to be used in recording studios were Yamaha NS10s and small-driver Auratones? Honestly, I don't want my home system to sound like those speakers, even if it is what the artist originally heard of his own recording. ;)

I'm happy to admit there may be other factors at play, but also like I said, there is no good reason for streaming sites to not be offering an option of lossless CD quality for home use. It's not just good for listeners, but good for the industry as these options can attract a pricing premium at negligible marginal cost.

I said long ago that there's no reason iTunes or other companies can't use lossless quality these days. A lossless CD is far smaller than a compressed 1080p movie. Even a lossless 24/96 album is smaller than a DVD. The reasons for not selling them online come down to it costing the company more (individual bandwidth is no big deal, but they can save when it's millions of people streaming) and a general apathy of the typical music listener, who thinks earbuds sound awesome.

Which is, again, PRECISELY WHAT I SAID. And precisely what I've done with ripping ALAC to a NAS. The difference is a Sonos can just index and play the ALAC directly from the NAS share, and I don't maintain an iTunes library. So, if I want to transfer files to the phone, I can just drag and drop them, and it will transcode to AAC on the fly.

I have everything set up to work in iTunes AND something like Kodi. But I have digital only files too and you have to put them somewhere. Go to library mode and it gets confusing, even in Kodi. You get multiple results and need labeling to specify what is what. I can easily scan my ALAC library from Kodi since they are all in a single folder, but other MP3 or AAC only files won't be there and I have to remember what's what again (I have over 8000 songs so it's not always easy to remember everything). Transcoding "on the fly" takes FOREVER if you have say 16GB of music files to transcode every time you change your USB stick for the car. Sorry, I don't have that kind of patience.

Seriously, you need to take a deep breath and stop trying to have an argument with everyone.

Give me a break. If you don't want to argue, then don't reply. You are arguing too. I'm simply sick of audiophile claims that come up over the years about nonsense like 'stair steps' and "all compression is evil" when it gets really absurd really fast.
Sorry I wasn't very clear. I have my own music collection, ripped from CDs as ALAC in iTunes and comparing that to the Google service that allows you to upload your music and stream it.

So Google takes whatever you send it and turns it into an MP3, then?

Sometimes I listen to Google and sometimes I listen to iTunes. I usually notice that I "forgot" to switch from Google to iTunes after I've listened for a while because of the feeling that something is missing, but never the other way around. This suggests to me that I notice something about their MP3 quality.

So you're saying Google sounds better or worse than iTunes? It's still not clear to me. Forgetting to switch from Google to iTunes implies you didn't notice anything.

I've done this with friends by having headphones on and them switching on music via iTunes / Google randomly and having me listen and I was able to tell with a more than 80% reliability which one it was.

The only thing you have to be really REALLY careful about in blind testing is that the levels are as close as possible to each other. Even a tiny volume difference can lead to someone picking (usually the louder one) as "better" or "different". I'm not saying that was the case, but it can easily happen.
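
A rough sketch of that level-matching step (Python, with the soundfile package as an assumption and placeholder file names), so a blind comparison isn't decided by a fraction of a dB of loudness:

```python
import numpy as np
import soundfile as sf   # assumed package for reading/writing audio files

def match_rms(ref_path, test_path, out_path):
    """Scale the test clip so its RMS level matches the reference clip."""
    ref, fs_ref = sf.read(ref_path)
    test, fs_test = sf.read(test_path)
    gain = np.sqrt(np.mean(ref ** 2) / np.mean(test ** 2))
    sf.write(out_path, test * gain, fs_test)
    return 20 * np.log10(gain)       # how much the test clip was adjusted, in dB

# e.g. match_rms("original_rip.wav", "decoded_mp3.wav", "decoded_mp3_matched.wav")
```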
 
So Google takes whatever you send it and turns it into an MP3, then?

Yeah, it's basically a service like Apple's iTunes Match.

So you're saying Google sounds better or worse than iTunes? It's still not clear to me. Forgetting to switch from Google to iTunes implies you didn't notice anything.

It's because when I'm not at home I listen to my music using Google's service and then come home and continue to listen here for a while before I notice I forgot to switch to iTunes. iTunes is better. The feeling I get that Google has "stuff missing" is fleeting and depends on both my own mood and the mood of the songs I listen to. But it always happens this way. I never listen to iTunes and get the feeling I might still be on Google.

The only thing you have to be really REALLY careful about in blind testing is that the levels are as close as possible to each other. Even a tiny volume difference can lead to someone picking (usually the louder one) as "better" or "different". I'm not saying that was the case, but it can easily happen.

That's true. It's possible that iTunes at max and Google at max in the browser result in a slightly different volume and it might affect my judgement.
 