I think the problem that prevents people from demonstrating such feats is that in order to create proof like that you need to make a big ruckus. You either have to go to some place to be tested in an alien environment or (and this is already way harder to get going) have them come to you to test you in your known environment. In both cases the test subjects are pretty riled up from all the stuff happening, and such "emotional" things cannot be tested well in these circumstances. This also shows, however, how tiny the difference can be. It might only be noticeable to some people, or only affect some people and not others. In any case, it goes to show how close the codecs are to providing an equal experience with such a reduced data rate.

Actually you can do a reasonable test yourself. As MagnusVonMagnum suggests, create some pairs of files, starting with an uncompressed rip from the CD. Create an MP3 using your choice of encoder. Download the ABX testing tool from here (available for Mac/Win/Linux). You really should pop both files quickly into an editor like Audacity to check there are no obvious level differences (if the encoder does its job properly there won't be). There are links to more instructions and videos on the linked Lacinato site.

I really would suggest you give this a go, and try it with other encodings as well. You will probably be surprised (as I have been) at how far you can reduce the encoded data rate before you can reliably tell the difference.
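If you'd rather script the preparation than click through it, here's a rough sketch of the same idea (ffmpeg plus the Python numpy and soundfile packages assumed; the filenames are made up). It round-trips a rip through 256kbps AAC - the same approach works for MP3 - and prints peak/RMS levels so you can confirm there's no obvious level difference before you start ABXing:

```python
# Rough sketch only: prepare a level-checked ABX test pair.
# Assumes ffmpeg is installed, plus numpy and pysoundfile; filenames are hypothetical.
import subprocess
import numpy as np
import soundfile as sf

SRC = "track.wav"          # hypothetical uncompressed rip from the CD
AAC = "track_256.m4a"      # lossy candidate
DEC = "track_256.wav"      # the AAC decoded back to PCM for comparison

# Encode to 256 kbps AAC with ffmpeg, then decode back to WAV.
subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:a", "aac", "-b:a", "256k", AAC], check=True)
subprocess.run(["ffmpeg", "-y", "-i", AAC, DEC], check=True)

def levels(path):
    data, _ = sf.read(path)
    peak = 20 * np.log10(np.max(np.abs(data)) + 1e-12)
    rms = 20 * np.log10(np.sqrt(np.mean(np.square(data))) + 1e-12)
    return peak, rms

for path in (SRC, DEC):
    peak, rms = levels(path)
    print(f"{path}: peak {peak:.2f} dBFS, RMS {rms:.2f} dBFS")

# If the RMS levels differ noticeably, level-match before ABX testing, otherwise
# the louder file tends to "win" for reasons that have nothing to do with the codec.
```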

That being said, I would still prefer Apple to provide a lossless format for downloads. iTunes has always allowed files to be transcoded down to a lower format automatically, so it wouldn't matter for loading onto devices. For streaming, I can understand why sticking with lower-bandwidth lossy files may be preferable.
 
Do you know the grammatical difference between the words "CAN" and "DOES"? Holy fracking hell. Let's make a giant volcano out of nothing.

The implication of your statement is that the only reason a 128kbps file might not be transparent is that it has been produced incorrectly.

And that says to me you believe in Voodoo Magic. You don't NEED the 24/96 master.

Again, THAT IS PRECISELY WHAT I HAVE SAID. Nobody needs 24/96 for playback. But you gave a bunch of incorrect reasons why "audiophiles" clamour for higher resolution formats, and I'm giving you a different perspective on why they might be asking for it.

I did not, EVER, say that it was necessary, or preferable.

So now it's not CDs, it's AAC. But where's the study done on 256kbps AAC vs. the CD lossless version or even the 24/96 master? I haven't seen that posted yet.

Proof works both ways - where is the proof that there is NO perceptible difference between 256kbps AAC vs CD? You certainly haven't provided that - you've just made a claim that you can't hear a difference, which in itself carries exactly the same weight as the claims of those telling you that they can hear a difference.

The only documented studies of lossy formats against lossless show that there IS a perceptible difference. What's more, the difference in AAC is *greater* when using a low complexity profile, which is what the vendors appear to use.

Now, those studies were not done at 256kbps. So there is an element of doubt as to whether 256kbps would be imperceptible. But the balance of the available evidence suggests that it might be perceptible.

Instead, I see more "I can hear it" or now "I can SENSE it", the latter with claims that it's SO BAD that they have to switch within 15 minutes or they'll go insane!

Again, that was not what was said.

Yet when asked to PROVE that claim, all you get is SILENCE because it's 100% BS NONSENSE.

Show me the PROOF that there is no perceptible difference in a *lossy* format then, and not just YOUR *SUBJECTIVE* experience.

OK. But you won't get a lot of agreement on that by 24/96 fans.

They can believe what they want, but the *data* actually shows that there is no difference in the files that would be audible. Any perceived difference in listening must be down to another reason, but that doesn't exclude people from hearing a difference.

But stating that the data proves there is no significant difference between 24/96 and 16/44 also proves that I don't, as you claim, "believe in Voodoo Magic". Yet somehow you don't see fit to retract such bogus claims?

It's not subjective. It's inherently PROVABLE by empirical testing!

If the test involves listening, then it can only be subjective.

More abstract "Voodoo" talk. Any controlled condition can test whether something is audible to someone or not.

It can only tell you whether something is audible under those conditions, and then possibly only to the people involved.

If there were no differences in equipment, then you wouldn't have expensive speakers, with a custom crossover and a fancy vinyl rig that you aligned yourself. The fact that you have put time and money into setting up that system proves that you believe it makes a difference.

So can you categorically prove that, in every instance where people couldn't hear a difference between CD and AAC, it is impossible the system / environment was masking the part of the CD sound that was removed in the AAC?

Can you categorically prove that nobody conducted the test on a system that created a spike in sound energy at a particular frequency range, which is attenuated in the AAC, and so the difference is more apparent on that system?

And to be clear, I'm NOT saying that this does happen - I'm just highlighting that we *know* that differences can exist in the testing condition, that can be audible, and so could make a difference in the results of a test based on listening.

Excuses excuses. There are other headphones available than fracking earbuds that fit in the ear and isolate you to a large degree from the environment. I sometimes use noise-reducing JVC headphones at work around industrial machinery when I'm stuck in one location for a long period. To say I can't hear the difference between an earbud and a high quality headphone in a noisy environment is pretty extreme given the low quality of earbuds. Noise cancellation improves the experience as well. Noise cancelling would be bad in a quiet environment since it can introduce aberrations of its own, but those are minor compared to the noise of machinery or a jet engine.

Please stop. That is a silly argument. I know I can get "better" headphones. I know I can isolate myself more (in fact, I use BOSE NC earbuds ONLY on jets, for the reason of blocking out some of the noise - sometimes even without playing any audio!).

I never said I can't hear a difference between earbuds / headphones. Of COURSE I can. That's not the point. I use wireless beats2 earbuds because they have good enough audio, have a good interface with the phone, and I don't get tied up in a lead when I'm walking.

It's a personal preference, it's entirely my right to make, and I'm not making any claims about it. What the hell is your problem?

I don't use Spotify either. I think artists should be paid for their work and the streaming model is set up to benefit the Music Industry not the artists.

Excuses, excuses. The Music Industry is set up to benefit the music industry. By the time you take the retailer, distributor, shipping, advertising, publisher costs, publisher profits, etc. from the cost of a CD, there isn't a lot left for artists there, either.

I think the industry is taking artists for a ride based on their pre-streaming contracts. Because they can. Beyond that, I've not seen a good comparison over time of streaming revenue vs CD revenue.

It makes sense that people would buy an album, and then listen to it a number of times, generating no additional revenue beyond the initial purchase. So you will see that early CD sales revenue will be higher than streaming revenue, but CD revenue will drop off faster than streaming revenue as people continue to listen.

So - over the lifetime of an artist - does streaming revenue match up to album sales? Imo, the jury is still out. Right now, it's the artists that sell large numbers of CDs on launch that are making the most noise about it - for obvious reasons if they are only looking at the short term.

No, you're saying that you need to recreate the studio environment and that requires having the studio master.

Nope. What I'm saying is that the mixed, pre-master audio is the closest representation of what was intended; and since a studio can't shape the sound for every system it might be played on, and there is no "reference system" for studios to shape the sound to, the best solution is to give us the "purest" sound from the mixing, even if we then apply DSP to shape the sound to our preference in our listening environment.

(Note that unlike the more "purist" audiophiles, I'm not averse to well-implemented, targeted manipulation of the audio file, especially if it can originate and be processed in the digital realm. I've used systems including room correction algorithms. After all, however transparent we "think" our systems are, the components and the environment they are placed in do impart a characteristic on the audio.)

Give me a break. If you don't want to argue, then don't reply. You are arguing too.

My point is you are not reading entire posts, and thinking through what they are really saying before replying. You are jumping straight in, dividing things up, taking them out of context, and trying really hard to find something to argue with.

I'm mostly pointing out that I HAVEN'T been saying the things you claim I have.

I'm simply sick of the audiophile claims that have come up over the years about nonsense like 'stair steps' and "all compression is evil"; it gets really absurd really fast.

Which, if you bother to read back through my posts properly, you'll see is what I've been doing as well. I've been advocating against higher than CD quality formats. I absolutely don't deny that lossy compressed files have their uses, and even use streaming services with lossy formats at times.

And I'm certainly not suggesting that anyone drops lossy compression formats - just that they offer lossless CD quality as well.

I just have a preference, when using a high quality playback system, for lossless CD quality: when bandwidth and storage are not an issue, there simply isn't any good reason not to. Even if AAC has imperceptible loss most of the time, it *is* a lossy format (and it isn't used by the majority of services, who tend to go with inferior formats). Even one perceptible loss in 100 tracks isn't really acceptable when there is no overriding reason for lossy to be the only available format.

But it's imperative that services offer lossless, 16-bit/44.1kHz audio before they start fussing with anything involving higher bit depths / frequencies - whether that uses a lossy or lossless encoding.
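For a rough sense of the bandwidth and storage gap being talked about, here's the back-of-the-envelope arithmetic (generic figures, not numbers from any particular service):

```latex
% 16/44.1 stereo PCM (CD quality), uncompressed:
\[
44\,100\ \tfrac{\text{samples}}{\text{s}} \times 16\ \text{bits} \times 2\ \text{channels}
  = 1\,411\,200\ \tfrac{\text{bits}}{\text{s}} \approx 1411\ \text{kbps} \approx 10.6\ \text{MB/min}
\]
% A 256 kbps lossy stream:
\[
256\ \text{kbps} \approx 1.9\ \text{MB/min}
\]
% Lossless compression (ALAC/FLAC) typically lands somewhere between the two,
% so offering lossless as an option costs a few times the data of a 256 kbps AAC stream.
```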
 
The trouble with quoting "indistinguishable quality" is that it gives the impression of an absolute - i.e. it is indistinguishable, therefore you will not be able to distinguish it.

However, the report is more subtle than that - it did find that AAC was statistically distinguishable on some samples. But it was below the EBU threshold. So, in one sense it is statistically indistinguishable, but at the same time it absolutely was distinguishable.
You are just redefining "indistinguishable". For lossy systems, you have to define such terms statistically, since you will never be able to prove that absolutely no one can distinguish it on any sample. ;) Of course that doesn't mean that a codec can absolutely never be distinguished.

For all known lossy codecs there exist "killer samples" that cause artifacts that are easily audible even to untrained ears. There are also rare cases of people with certain types of hearing damage that cause them to hear artifacts of lossy audio encoding more clearly than "normal" people, since the perceptual models don't work well for them. But these are obviously exceptions. More commonly, someone who is trained to know what to listen for can also hear artifacts more often than a layman.
Also, note the sharp difference between the profiles of AAC - while "main" performs statistically best, "LC" just scrapes into the EBU threshold, and "SSR" falls outside. Looking at codec comparisons, it seems that Apple (and probably most music services) are using low complexity profiles, so their encodings are going to be on the worse side of what is achievable with AAC.
There is a lot more to a codec than just the profile. Implementations have gone through huge improvements in the almost 20 years since those listening tests. I'm pretty sure that 99% of the readers of this forum (including me) will not be able to distinguish the vast majority of well-encoded AAC 128kbps files from the lossless originals in a double blind test. 256 kbps should be almost impossible even for experts under ideal conditions.
That being said, I would still prefer Apple to provide a lossless format for downloads. iTunes has always allowed files to be transcoded down to a lower format automatically, so it wouldn't matter for loading onto devices.
This is actually one (and IMO the only) good argument in favor of lossless files in the consumer space. If you make a lossy encode of an already lossily encoded file, there is a chance of so-called tandem losses, where artifacts add up and become more easily audible than a direct encode to the same format. So if you think that you may have a need to re-encode files later, then making lossless encodings is a good idea.
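If you want to hear the tandem-loss effect rather than take it on trust, a rough sketch along these lines (ffmpeg assumed, filenames made up) will generate successive AAC generations of the same clip that you can then ABX against each other:

```python
# Rough sketch: produce several AAC "generations" of the same clip so you can
# A/B a first-generation encode against a later one and hear tandem losses build up.
# Assumes ffmpeg is installed; "clip.wav" is a hypothetical lossless source.
import subprocess

def aac_roundtrip(src_wav, out_wav, bitrate="256k"):
    """One generation: encode src_wav to AAC at the given bitrate, then decode back to PCM."""
    subprocess.run(["ffmpeg", "-y", "-i", src_wav, "-c:a", "aac", "-b:a", bitrate, "tmp.m4a"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", "tmp.m4a", out_wav], check=True)

current = "clip.wav"
for generation in range(1, 6):
    out = f"clip_gen{generation}.wav"
    aac_roundtrip(current, out)
    current = out

# Now ABX clip_gen1.wav against clip_gen5.wav: a direct encode vs. the same
# bitrate after five encode/decode cycles. Artifacts that were inaudible in
# generation 1 often become audible after a few generations.
```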
 
The implication of your statement is that the only reason a 128kbps file might not be transparent is that it has been produced incorrectly.

Gee, I thought I said 128kbps is the starting point where transparency begins. Yeah...let me see. Yup, that's what I said. "Begin" means it starts to be transparent. It doesn't mean it's 100% foolproof. That's why I then said that to be sure of transparency, bump it to 256kbps to remove almost any doubt, except that from the doubting Thomas crowd for whom 24/192 lossless probably isn't good enough.

Now I know you want to read more into what I said to win some kind of point, but the simple fact is I haven't done a lot of listening at 128kbps AAC because I didn't buy anything from the iTunes store before they removed encryption and I've always encoded my own CDs at 256kbps. My statements about 128kbps were based upon what I read back in the late '90s on the rec.audio.high-end newsgroup, where I talked with at least one of those responsible for its development, and he had arguments with the same types that want 24/96 today, etc. It was unreal how much work went into making sure AAC was a great format and better than MP3. Their goal was indeed to get as close to transparent as possible, but I don't recall for certain whether it was 128 or 192 that was being used at the time (192kbps was the most common rate for "high quality" MP3s back then; 320kbps didn't become really common until much later, with some 256 thrown in, and of course most of those MP3 sources were pretty questionable to begin with, i.e. Napster and such, done by ordinary people with God knows what encoder).

What I do know is that 256kbps sounds effectively transparent to me (i.e. I've never been able to detect a difference, and I did a lot of comparisons for a while there before setting the ALAC library aside out of sheer frustration with iTunes' inability to maintain two separate libraries with "ease"). A simple "hide" option for AAC vs ALAC duplicates at a click would make it night and day better, IMO, but Apple doesn't seem to take feedback well.

Proof works both ways - where is the proof that there is NO perceptible difference between 256kbps AAC vs CD?

It's very simple why it doesn't work both ways. You can't really prove a negative. In other words, I can't truly prove that there's no possible difference under any possible circumstance. At best, I could show a trend (it's like trying to prove God doesn't exist). But I CAN prove that I CAN hear a difference if I actually CAN do it (CAN CAN CAN or CCC recordings ;)). So on those grounds, I maintain that the burden of proof is on the one making the positive claim that they CAN hear a difference. ABX testing makes that simple for someone to prove (or, alternately, fail to prove the claim).

But if someone wants to prove to me there IS a difference to back up extraordinary claims that lossy sucks or 24/96 sounds better, I want to see proof and it's easy to provide that proof (create samples and download an ABX testing program; it will test you automatically for as long as needed).
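For anyone who hasn't used one, the logic of an ABX program is very simple - this bare-bones sketch (playback stubbed out; a real tool like the one linked earlier handles the audio switching for you) shows what is actually being tested:

```python
# Bare-bones sketch of the ABX procedure itself. Playback is deliberately stubbed
# out; a real ABX tool plays A, B and X for you and lets you switch freely.
import random

def ask_listener(trial):
    """Hypothetical stand-in for the listening/answer step."""
    response = input(f"Trial {trial}: is X the A file? [y/n] ")
    return response.strip().lower().startswith("y")

def abx_session(trials=16):
    correct = 0
    for t in range(1, trials + 1):
        x_is_a = random.choice([True, False])   # X is secretly either A or B
        answer_is_a = ask_listener(trial=t)     # listener commits to "X is A" or "X is B"
        if answer_is_a == x_is_a:
            correct += 1
    print(f"{correct}/{trials} correct")
    return correct

if __name__ == "__main__":
    abx_session()
```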
 
Gee, I thought I said 128kbps is the starting point for transparency to begin. Yeah...let me see. Yup, that's what I said.

No, you said "A 128kbps AAC file can sound transparent if the absolute best encoders are used and care is taken, but 256kbps provides enough of a buffer that it should leave virtually no doubt."

You haven't said that every 128kbps AAC will sound transparent, but you didn't allow for any reason for it not being transparent other than a problem with the encoding, which should be "fixable" by changing the encoder.

It was unreal how much work went into making sure AAC was a great format and better than MP3.

I don't doubt the work that went into AAC. And I favour it out of all of the lossy compressions that I've heard. But that's a different question.

The EBU tests concluded that 128kbps was *statistically* indistinguishable, given the criteria they laid out. And for the purposes of the EBU - considering the use of compressed audio streams in broadcasting, and the generally less-than-optimal listening environments it would be used in - I'm absolutely fine with saying that AAC is good enough for those purposes (and broadcasting bandwidth is at a premium).

But when you look in detail at those tests, they are not totally indistinguishable. Even where AAC was determined to be statistically indistinguishable, and closest to the uncompressed format, it still "failed" two of the samples. And crucially, out of all the samples they used, the two it "failed" on were human speech and a commercial rock music track.

If you want to use 128kbps, then fine. There are many circumstances where it is good enough. It's better than other 128kbps formats.

But the EBU tests are proof that 128kbps AAC is *not* transparent. Not just tests that individuals claim to have carried out, but actual documented, published proof. But you still claim there is no proof.

What I do know is that 256kbps sounds effectively transparent to me (i.e. I've never been able to detect a difference, and I did a lot of comparisons for a while there before setting the ALAC library aside out of sheer frustration with iTunes' inability to maintain two separate libraries with "ease"). A simple "hide" option for AAC vs ALAC duplicates at a click would make it night and day better, IMO, but Apple doesn't seem to take feedback well.

That's absolutely fine, it's an opinion as to what is suitable for you. Hell, as I said, there are circumstances where I use 256kbps, because I do need to save space, and the difference is not significant when I'm using lower quality equipment in a noisy environment.

I want to see proof and it's easy to provide that proof (create samples and download an ABX testing program; it will test you automatically for as long as needed).

This morning, I took one of my own ALAC files from my library, and the ABX software linked above, and generated an uncompressed PCM (because the software doesn't support ALAC), and the 256kbps AAC (using iTunes), and ran some tests.

Doing the AB*X test - listening to a randomly chosen sample and choosing whether it was the AAC or PCM - I was hopeless. Completely random results.

But when I changed to doing a shootout - randomising the samples, comparing the two and choosing the "best" one - I only chose the AAC file ONCE. All of the other times, I chose the uncompressed, lossless file.

And that was using 256kbps AAC.
 
No, you said "A 128kbps AAC file can sound transparent if the absolute best encoders are used and care is taken, but 256kbps provides enough of a buffer that it should leave virtually no doubt."

You haven't said that every 128kbps AAC will sound transparent, but you didn't allow for any reason for it not being transparent other than a problem with the encoding, which should be "fixable" by changing the encoder.

Ah, so let's twist my words to mean I love 128kbps (even though I've NEVER used it myself) and run this dialogue and its contents into the ground about four more times. That makes for good conversation. ;)

The EBU tests concluded that 128kbps was *statistically* indistinguishable, given the criteria they laid out.

So the difference between absolute and statistical gets me 5 pages of nit-picking over the words I chose to convey that transparency starts to become possible at 128kbps but is almost assured at 256kbps. Ok. That's two.

And for the purposes of the EBU - considering the use of compressed audio streams in broadcasting, and the generally less-than-optimal listening environments it would be used in - I'm absolutely fine with saying that AAC is good enough for those purposes (and broadcasting bandwidth is at a premium).

Actually, the far more efficient AAC-HE was designed for low-bandwidth radio. It's crap, but it's far better sounding than many of the alternatives, which means it's silver-lined crap.

But when you look in detail at those tests, they are not totally indistinguishable. Even where AAC was

There's three and about the 5th repeat of saying the same thing I've already read once more as if my memory has failed and I need to be reminded of where the bathroom is. :D

If you want to use 128kbps, then fine. There are many circumstances where it is good enough. It's better than other 128kbps formats.

Ah, there's four, and a repeat of the idea from someone's hindquarters that I use 128kbps when I've explicitly said that I DON'T use it whatsoever here and haven't given even the SLIGHTEST impression that I do, based on a comment that 128kbps was where transparency becomes possible (borne out by testing, not my personal use).

But the EBU tests are proof that 128kbps AAC is *not* transparent.

Ah, there's number five in this thread alone. We are officially run into the underground kingdom of dirt now and working our way to China. :(

Not just tests that individuals claim to have carried out, but actual documented, published proof. But you still claim there is no proof.

I believe I was talking about 256kbps when it came to proof. You like to keep bringing up 128kbps based on a twisting of meaning to win an argument that has long since gone to the dogs. Walls of text be damned. Let's keep on digging. Digging in the dirt, to find the place I got hurt.....

That's absolutely fine, it's an opinion as to what is suitable for you.

There's six and a repeat of the impression I listen to 128kbps again. Almost to China now....

Hell, as I said, there are circumstances where I use 256kbps, because I do need to save space, and the difference is not significant when I'm using lower quality equipment in a noisy environment.

And a repeat to remind us you actually find the use of earbuds acceptable (I do not in any circumstance because they sound like garbage all on their own; they don't need the help of a lossy format) but harp on AAC as being too inferior to use at home.

This morning, I took one of my own ALAC files from my library, and the ABX software linked above, and generated an uncompressed PCM (because the software doesn't support ALAC), and the 256kbps AAC (using iTunes), and ran some tests.

Doing the AB*X test - listening to a randomly chosen sample and choosing whether it was the AAC or PCM - I was hopeless. Completely random results.

But when I changed to doing a shootout - randomising the samples, comparing the two and choosing the "best" one - I only chose the AAC file ONCE. All of the other times, I chose the uncompressed, lossless file.

And that was using 256kbps AAC.

So you chose a random test and utterly failed, and then chose a different random test and this time it was obvious, eh?

I'm not the type to often believe in coincidental type events being totally random, but how many total passes of this test did you do? Someone eventually even wins the Powerball jackpot, after all, and that's about one in 300 million odds to guess correctly. If I flip a coin 10 times, it should statistically be heads 5 times and tails 5 times over a large enough sample. If I do it once, it's entirely possible that I could get heads nine times and tails only once. Worse yet, the random generator in an AppleTV is so bad that it seems to pick the same damn photos over and over every time I run slideshows in "random" mode.

Am I saying that you didn't hear the difference? No. Are two test trials, one of which produced random "guessing" results, enough to draw a definitive conclusion? No. Have I any way of knowing whether there were flaws in the test, or even of being 100% sure it took place and you're not just posturing on the Internet to "win" an argument? No, I can't be certain of most things, not even that I'm not living in a variation of The Matrix when it comes right down to it. ;)
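The coin-flip question has a straightforward answer, by the way: it's just the binomial distribution. A quick sketch (standard library only, Python 3.8+) of the odds of scoring at least k out of n ABX trials by pure guessing:

```python
# Chance of getting at least k of n ABX trials right by pure guessing (p = 0.5).
from math import comb

def p_by_chance(k, n):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

for k, n in [(9, 10), (11, 12), (12, 12)]:
    print(f"{k}/{n} or better by guessing: {p_by_chance(k, n):.4f}")
# 9/10  ~ 0.0107  -> unlikely, but one person could still fluke it once
# 11/12 ~ 0.0032
# 12/12 ~ 0.0002  -> hard to explain as a lucky streak
```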
 
This morning, I took one of my own ALAC files from my library, and the ABX software linked above, and generated an uncompressed PCM (because the software doesn't support ALAC), and the 256kbps AAC (using iTunes), and ran some tests.

Doing the AB*X test - listening to a randomly chosen sample and choosing whether it was the AAC or PCM - I was hopeless. Completely random results.

But when I changed to doing a shootout - randomising the samples, comparing the two and choosing the "best" one - I only chose the AAC file ONCE. All of the other times, I chose the uncompressed, lossless file.

And that was using 256kbps AAC.

That's odd. How many rounds did you do of each? The sample has to be big enough to be viable.

Out of interest, you say you were hopeless in the ABX, but got it right in the shootout. Did YOU feel that you could tell the difference when doing the shootout but not the ABX? Just interested; my bet is that if you came back and did it again you'd get different results, or if you increased the number of times you did the test it would even out.

I gave that software as an example. I cannot verify how it works internally, so there's always a very slim possibility that one module is somehow different. That's why I described the test as reasonable, it wouldn't pass true scientific review.
 
Ah, so let's twist my words to mean I love 128kbps (even though I've NEVER used it myself) and run this dialogue and its contents into the ground about four more times.
...
nit-picking over the words I chose to convey that transparency starts to become possible at 128kbps but is almost assured at 256kbps.

So, to be clear, are you saying that 128kbps is NOT transparent, and that 256kbps (or 192) AAC is a necessity, and not merely a "buffer"? And that therefore making statements about 128kbps based on a 20 year old discussion might be a little misguided?

And a repeat to remind us you actually find the use of earbuds acceptable (I do not in any circumstance because they sound like garbage all on their own; they don't need the help of a lossy format) but harp on AAC as being too inferior to use at home.

And still you try to discredit me over how I listen to audio (mostly spoken word podcasts, not music, btw), when I'm not at home and pulling around a 10 grand hi-fi system on a cart isn't really an option, as if it has any relevance? Well, I guess if you want to look foolish, don't let me stop you.

I'm not the type to often believe in coincidental type events being totally random, but how many total passes of this test did you do?

About a dozen passes of each test - I've actually got work to do as well, not just spending all day sitting around doing tests which you are clearly never going to believe anyway.
That's odd. How many rounds did you do of each? The sample has to be big enough to be viable.

About a dozen of each test. Obviously there is always more testing that you can do, but enough to doubt that it was random.

Out of interest, you say you were hopeless in the ABX, but got it right in the shootout. Did YOU feel that you could tell the difference when doing the shootout but not the ABX?

It was possibly more a flaw in the way I was doing the AB*X - bearing in mind that I was figuring out the interface. Note that in this test, A/B are constant, and X is randomly chosen (rather than having an X constant and A/B randomly assigned). So I was mostly just listening to X and trying to decide whether it was A or B, without actually comparing it to A or B.

In other words, 256kbps AAC was "good enough" in the sense that I couldn't easily tell it was a compressed file by listening to it in isolation, but side-by-side, I could detect a difference. But that was also using the best headphones that I had available next to my Mac - not using my actual hi fi, which is where I don't like listening to non-lossless streaming sources.

It should be noted though, that only Apple offer AAC for streaming - mostly the others are inferior mp3 or OGG formats. And Apple Music is still lacking in integration with network players. Otherwise, I might be prepared to switch to Apple - although I would still be prepared to pay a reasonable amount more for lossless.
 
So, to be clear, are you saying that 128kbps is NOT transparent, and that 256kbps (or 192) AAC is a necessity, and not merely a "buffer"?

No, I'm NOT saying that. I'm saying you are accusing me of arguing and yet you are running a comment meant to be my opinion based on the things I read about 128kbps into the fracking GROUND. That's all I was ever saying about that bitrate specifically. Frankly, your persistence at running it into the ground trying to force some kind of "win" about an opinion is starting to piss me off. I will not be replying to you anymore.

And that therefore making statements about 128kbps based on a 20 year old discussion might be a little misguided?

I find it far more likely you have an agenda of your own to push here to justify using ALAC or whatever lossless format and that it is simply unacceptable in your mind to use compression except when using crappy earbuds, which you don't seem to mind despite their crappy nature, a disparity I personally find telling about how much you actually care about audio quality. But you do seem to want to feel superior about your choice at home. Frankly, I don't really care. I have no interest in running a scientific study on whether 128kbps was "really and truly" transparent or just "statistically transparent." To me, there's very little difference, as statistics describe the overall ability of the general public to do something. Since I don't listen at 128kbps, I don't care whether it is or isn't transparent. I use 256kbps and Apple's encoder for CDs. I use XLD for everything else.

And still you try to discredit me over how I listen to audio (mostly spoken word podcasts, not music, btw), when I'm not at home and pulling around a 10 grand hi-fi system on a cart isn't really an option, as if it has any relevance? Well, I guess if you want to look foolish, don't let me stop you.

I'm sorry; I didn't realize I was the Grand Inquisitor for the Board of Internet Reputations and my job was to "discredit" you. I simply find your attacks on my opinions absurd and most of the claims about "high-end" audio or "audiophiles" to be extremely dubious if not outright fraud. The fact you seem to agree on one aspect (24/96) that goes contrary to the audiophile agenda yet find high bit-rate compression to be unacceptable for any serious listening strikes me as a bit odd, but you are free to believe whatever you want, of course. If you don't like AAC at home, don't use it. Why are you trying to convert me to your belief? I think "most" people who despise compression would prefer to be on the "safe" side and figure they might be missing something if they listen to AAC. I simply could not hear a difference on even my best sounding albums at 256kbps, so I don't really give a crap about it. I would think my position would be clear by now, but you keep acting like I didn't read your previous posts and keep trying to paint me as trying to prove 128kbps is 110% transparent, when I simply gave my opinion on it based on published studies, one of which you are now trying to bash me over the head with over the distinctions between "transparent", "statistically transparent", some kind of "not absolutely transparent", and every room that isn't the studio it was made in, etc. Frankly, I'd find it amusing if you weren't so adamant that it somehow matters that you differ in opinion.

About a dozen passes of each test - I've actually got work to do as well, not just spending all day sitting around doing tests which you are clearly never going to believe anyway.

About a dozen of each test. Obviously there is always more testing that you can do, but enough to doubt that it was random.

And I said you can flip a coin 10 times and get 9 out of 10 heads sooner or later. In order to actually prove a statistical difference in any scientific study you need more than one test by one person. But ah, we're back to "statistical" versus "absolute" again. Maybe you are the only one on the planet that can hear a difference at 256kbps, at which point I would recommend you avoid the iTunes store like the plague right now and given your low opinion of SACD and the like, stick with CDs and rip them yourself to ALAC or FLAC or whatever floats your boat.

It was possibly more a flaw in the way I was doing the AB*X - bearing in mind that I was figuring out the interface.

Of course. You will no doubt pass it 90%+ in all future trials should you choose to attempt them. I'm now convinced that AAC just plain sucks and will switch back to my ALAC library.
 
No, I'm NOT saying that.

So what are you saying? I'm just trying to clarify what in your opinion is necessary for transparency. Are you encoding to 256kbps "just to be safe" (e.g. a buffer), or do you believe you would perceive a loss at lower bitrates?

I read about 128kbps into the fracking GROUND. That's all I was ever saying about that bitrate specifically. Frankly, your persistence at running it into the ground trying to force some kind of "win" about an opinion is starting to piss me off.

You're the one that stated it was designed to be transparent at 128kbps (or was it 192, you're not quite sure). And you keep talking about proof while dismissing everything that is presented. I'm just seeing if there is anything you will accept as proof.

Here's an OBJECTIVE test - albeit rather limited. For the example chosen, this post shows how much audio information is lost in various different codecs / bitrates.

http://forums.anandtech.com/showthread.php?t=2168530

As I've accepted all along, AAC is the best performing of all the codecs. However, at 176kbps it has lost more information than a 227kbps MP3. (And it's not simply a case of that's what happens at lower bitrates - the AAC at 228kbps still retains more information than the OGG at 268kbps, for example).

Although subjectively, how that translates into an audible difference is uncertain (and yes, more important).
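Another objective check anyone can run themselves is a crude null test: decode the lossy file, line it up with the lossless original, subtract, and look at what's left. A rough sketch only (numpy and pysoundfile assumed, filenames hypothetical, both files assumed to be at least a few seconds long); note it measures leftover energy and says nothing about audibility:

```python
# Rough sketch of a "null test": subtract the decoded lossy file from the lossless
# original and see how much energy remains. Crude energy measure only - it ignores
# psychoacoustics entirely. Filenames are hypothetical; sample rates must match.
import numpy as np
import soundfile as sf

orig, rate = sf.read("track_lossless.wav")
lossy, rate2 = sf.read("track_aac_decoded.wav")   # the AAC decoded back to WAV
assert rate == rate2, "sample rates must match"

# Fold to mono to keep the alignment search simple.
orig = orig.mean(axis=1) if orig.ndim > 1 else orig
lossy = lossy.mean(axis=1) if lossy.ndim > 1 else lossy

# Codecs usually introduce a small fixed delay (priming samples), so search for
# the offset that best lines the two files up over a one-second window.
window = rate
ref = orig[4096:4096 + window]
best_offset, best_err = 0, np.inf
for d in range(-4096, 4097):
    cand = lossy[4096 + d : 4096 + d + window]
    err = np.mean((ref - cand) ** 2)
    if err < best_err:
        best_offset, best_err = d, err

# Apply the offset, trim to a common length, and measure the residual.
if best_offset >= 0:
    orig_a, lossy_a = orig, lossy[best_offset:]
else:
    orig_a, lossy_a = orig[-best_offset:], lossy
n = min(len(orig_a), len(lossy_a))
residual = orig_a[:n] - lossy_a[:n]

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

print(f"best alignment offset: {best_offset} samples")
print(f"signal RMS:   {rms_db(orig_a[:n]):6.1f} dBFS")
print(f"residual RMS: {rms_db(residual):6.1f} dBFS")
# The further the residual sits below the signal, the less the codec changed -
# but whether what it changed is audible is still a question for your ears.
```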

I find it far more likely you have an agenda of your own to push here to justify using ALAC or whatever lossless format and that it is simply unacceptable in your mind to use compression

Not at all. I have used lossy streaming even on my main system, prior to any service offering lossless streaming. All I've said is that I can hear a difference, and that I want to see all streaming services offer lossless CD quality *as an option*, and prior to any fussing over higher resolution formats.

I simply find your attacks on my opinions absurd and most of the claims about "high-end" audio or "audiophiles" to be extremely dubious if not outright fraud.

Except that isn't happening. Nobody is attacking your opinion. You can listen to whatever format you want.

You set out to discredit and attack everyone who claims that they can hear a difference between AAC and CD. Rather than respecting my experience, you've made an accusation of fraud. I'm defending my opinion.

The fact you seem to agree on one aspect (24/96) that goes contrary to the audiophile agenda yet find high bit-rate compression to be unacceptable for any serious listening strikes me as a bit odd, but you are free to believe whatever you want, of course.

I find that I can hear a difference during serious listening when I use a lossy streaming service, compared to my lossless CD rips. I don't hear that difference on lossless streaming services.

I don't think it is entirely unacceptable, as a streaming service provides a solution to a specific problem, but I think it is unacceptable to use a lossy format when there is no benefit to doing so over a lossless format.

Note also that most streaming services don't use AAC - so their lossy compression is more noticeable. If Apple Music were supported on more devices, I would likely use it over e.g. Spotify for sound quality (though I haven't tested it specifically), but while I can choose a lossless streaming service with a decent library for a reasonable cost, I will always take that in preference.

Why are you trying to convert me to your belief?

If you mean your choice to use AAC, I'm not - asking you to accept that some people have heard a difference between formats doesn't change its suitability for you.

I'm only trying to change your insistence that everybody who doesn't agree with you that a 256kbps AAC is indistinguishable from CD is wrong.

In order to actually prove a statistical difference in any scientific study you need more than one test by one person.

Doesn't matter what is true for a number of people. I'm the only person that listens to my music on my system in my house. You've been berating everybody on this forum that doesn't completely share your AAC opinion to do an ABX test - well, I've done it, and your experience is NOT my experience.

Demanding lossless CD quality when I am doing serious listening is right for me.

Objectively, we need 16-bit / 44.1 kHz sampling for a convincing sound.
Objectively, we don't need anything more.

Subjectively we might get away with a psychoacoustic compression that retains the dynamic and frequency range, but loses (subjectively) inaudible details.
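For reference, the "objectively" in that summary rests on two textbook results, stated here as a reminder rather than a new claim:

```latex
% Sampling: a 44.1 kHz rate captures content up to the Nyquist frequency,
\[
f_{\max} = \frac{f_s}{2} = \frac{44.1\ \text{kHz}}{2} = 22.05\ \text{kHz},
\]
% which sits above the commonly cited ~20 kHz upper limit of human hearing.
% Quantisation: each bit of depth adds about 6 dB of dynamic range,
\[
\text{SNR} \approx 6.02\,N + 1.76\ \text{dB} \approx 98\ \text{dB for } N = 16,
\]
% more dynamic range than almost any domestic listening room can actually use.
```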
 
It was possibly more a flaw in the way I was doing the AB*X - bearing in mind that I was figuring out the interface. Note that in this test, A/B are constant, and X is randomly chosen (rather than having an X constant and A/B randomly assigned). So I was mostly just listening to X and trying to decide whether it was A or B, without actually comparing it to A or B.

In other words, 256kbps AAC was "good enough" in the sense that I couldn't easily tell it was a compressed file by listening to it in isolation, but side-by-side, I could detect a difference. But that was also using the best headphones that I had available next to my Mac - not using my actual hi fi, which is where I don't like listening to non-lossless streaming sources.

Of course, this is quite a good point that I was not thinking of. Switching from one to another will allow you to hear smaller differences than just picking one in isolation. It's worth noting that you can switch between A, B, and X during the test if you wanted to try again, although this was not obvious to me to begin with as the interface is not perfect.

This shows up another aspect of defining what inaudible means... It can be argued that the vast majority of people will never A/B two formats, so is it a relevant test? Certainly from a business point of view, for Apple etc., I doubt such a corner case is in any way interesting.
 
This shows up another aspect of defining what inaudible means... It can be argued that the vast majority of people will never A/B two formats, so is it a relevant test? Certainly from a business point of view, for Apple etc., I doubt such a corner case is in any way interesting.

Depends on what question you are trying to answer.

Ultimately, we know there are badly produced albums out there; ones where you can hear that it doesn't sound good and you know it is not a limitation of CDs. (And when it occurs, it's a bigger difference than AAC vs CD).

If I didn't know what you were doing, and played back an AAC on a system I wasn't familiar with, I *might* think the system isn't providing the best sound rather than think you are playing an AAC.

If I didn't know and you played an AAC on my system, I *might* think that the production is a bit off, rather than pick out that it is an AAC.

For me, it will always be a trade off of cost benefit. If there is a difference that I can hear when played side-by-side, but can't immediately pick out in isolation, I'm going to want the "better" version, as long as the cost isn't obscene.
 
Sorry I wasn't very clear. I have my own music collection, ripped from CDs as ALAC in iTunes, and I'm comparing that to the Google service that allows you to upload your music and stream it. Sometimes I listen to Google and sometimes I listen to iTunes. I usually notice that I "forgot" to switch from Google to iTunes after I've listened for a while, because of the feeling that something is missing, but never the other way around. This suggests to me that I notice something about their MP3 quality.



I've done this with friends by having headphones on and them switching on music via iTunes / Google randomly and having me listen, and I was able to tell with more than 80% reliability which one it was.

I think the problem that prevents people from demonstrating such feats is that in order to create proof like that you need to make a big ruckus. You either have to go to some place to be tested in an alien environment or (and this is already way harder to get going) have them come to you to test you in your known environment. In both cases the test subjects are pretty riled up from all the stuff happening, and such "emotional" things cannot be tested well in these circumstances. This also shows, however, how tiny the difference can be. It might only be noticeable to some people, or only affect some people and not others. In any case, it goes to show how close the codecs are to providing an equal experience with such a reduced data rate.

What you say rings true to me.

I prefer iTunes to YouTube quality. It's an obvious difference in sound to me. It's not just clarity; the whole sound of iTunes is much richer and more expansive. YouTube sounds boxed in and dull.
Yeah, it's basically a service like Apple's iTunes Match.



It's because when I'm not at home I listen to my music using Google's service and then come home and continue to listen here for a while before I notice I forgot to switch to iTunes. iTunes is better. The feeling I get that Google has "stuff missing" is fleeting and depends on both my own mood and the mood of the songs I listen to. But it always happens this way. I never listen to iTunes and get the feeling I might still be on Google.



That's true. It's possible that iTunes at max and Google at max in the browser result in a slightly different volume and it might affect my judgement.

I think you're being too apologetic.

I don't think it's a volume thing with the Google/iTunes difference; it's the quality of the sound.

I have one Mastered for iTunes classical album (orchestral Elgar) and I am unimpressed by it. It's one of the most boxed-in sounding albums I have. Such a shame. Maybe it's just how it was recorded, but it makes me gravitate to CDs these days. I do have some excellent-sounding albums from iTunes, though, even classical. They tend to be the ones recorded recently. CDs are normally much cheaper for whole albums, though, so for classical, CDs are better. For pop songs, iTunes fits the bill.
 
Depends on what question you are trying to answer.

Ultimately, we know there are badly produced albums out there; ones where you can hear that it doesn't sound good and you know it is not a limitation of CDs. (And when it occurs, it's a bigger difference than AAC vs CD).

If I didn't know what you were doing, and played back an AAC on a system I wasn't familiar with, I *might* think the system isn't providing the best sound rather than think you are playing an AAC.

If I didn't know and you played an AAC on my system, I *might* think that the production is a bit off, rather than pick out that it is an AAC.

For me, it will always be a trade off of cost benefit. If there is a difference that I can hear when played side-by-side, but can't immediately pick out in isolation, I'm going to want the "better" version, as long as the cost isn't obscene.

Exactly my point. Apple/Google/etc. are going to be asking the question from the position of how good it needs to be to support their business plan (bearing in mind there are extra costs to them from the extra data, about a data size factor of 6.5). You and I have a different set of criteria. So in effect "inaudible" is defined differently depending on which side of the transaction you stand on. I'd take lossless over lossy downloads for the serial encoding benefits, you want it for the theoretical sound quality difference, others don't much care. I wouldn't pay much more, you sound like you might. How many would? Would it be worth Apple's while to offer it? Time will tell I guess!
 
What you say rings true to me.

I prefer iTunes to YouTube quality. It's an obvious difference in sound to me. It's not just clarity; the whole sound of iTunes is much richer and more expansive. YouTube sounds boxed in and dull.

I think you're being too apologetic.

I don't think it's a volume thing with the Google/iTunes difference; it's the quality of the sound.

I have one Mastered for iTunes classical album (orchestral Elgar) and I am unimpressed by it. It's one of the most boxed-in sounding albums I have. Such a shame. Maybe it's just how it was recorded, but it makes me gravitate to CDs these days. I do have some excellent-sounding albums from iTunes, though, even classical. They tend to be the ones recorded recently. CDs are normally much cheaper for whole albums, though, so for classical, CDs are better. For pop songs, iTunes fits the bill.


Oh totally, but I bought Valentina Lisitsa's Decca release on iTunes. I'll never do that again, I'll just buy a CD, though classical tends to be way more expensive. I don't buy a lot of music, but when I do, I'll stay with 16/44.1 AIFF until I can get 24/44.1 or better AIFF. Then I roll my own AAC or Apple Lossless for my iDisposables.

The droplet with the tools is pretty great, but I've tweaked my droplet to encode at 320kbps for sending one-offs for my bandmates to "sign-off", etc. What do you think about this idea of putting up samples of different encodings against each other (inverted against the uncompressed for comparison)? Or do you think that no one here really gives a Sith?
 
Oh totally, but I bought Valentina Lisitsa's Decca release on iTunes. I'll never do that again, I'll just buy a CD, though classical tends to be way more expensive. I don't buy a lot of music, but when I do, I'll stay with 16/44.1 AIFF until I can get 24/44.1 or better AIFF. Then I roll my own AAC or Apple Lossless for my iDisposables.

The droplet with the tools is pretty great, but I've tweaked my droplet to encode at 320kbps for sending one-offs for my bandmates to "sign-off", etc. What do you think about this idea of putting up samples of different encodings against each other (inverted against the uncompressed for comparison)? Or do you think that no one here really gives a Sith?

Putting up comparisons sounds like a good idea.

The problem is getting the same volume, as louder generally always sounds better.

I'm surprised you find classical CDs a lot more expensive. In England, I don't find that. For instance, the latest set of Suzuki Bach cantatas are for sale at £7.99 on iTunes or £120 for all 15; on CD, they are about £60.
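On the volume point: a quick way to check and correct a level difference before comparing two versions. This is only a sketch (numpy and pysoundfile assumed, filenames hypothetical); RMS matching is cruder than proper loudness matching, but it removes the obvious "louder sounds better" bias:

```python
# Rough sketch of RMS level-matching two files before an A/B comparison.
# Assumes numpy and pysoundfile; the filenames are hypothetical.
import numpy as np
import soundfile as sf

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

a, rate_a = sf.read("version_a.wav")
b, rate_b = sf.read("version_b.wav")

gap = rms_db(a) - rms_db(b)
print(f"A is {gap:+.2f} dB louder than B (RMS)")

# Scale B to match A's RMS level, then write both out for the comparison.
b_matched = b * 10 ** (gap / 20)
if np.max(np.abs(b_matched)) > 1.0:          # avoid clipping after the gain change
    scale = 1.0 / np.max(np.abs(b_matched))
    b_matched *= scale
    a = a * scale                            # keep the pair matched relative to each other
sf.write("version_a_matched.wav", a, rate_a)
sf.write("version_b_matched.wav", b_matched, rate_b)
```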
 
Remember this supposedly "unloved" case?

Now imagine a more modern version designed for the iPhone 7 with a "thicker" bottom that accommodates a 3.5 mm headphone jack (with full iPhone in-line controller support) driven by a built-in headphone amplifier and a DAC that could even decode the much-rumored higher-quality Apple Lossless format. That way, you avoid the unsightly dongle for older headphones and get a lot more usable time per charge with the iPhone 7.
 
Putting up comparisons sounds like a good idea.

The problem is getting the same volume, as louder generally always sounds better.

I'm surprised you find classical CDs a lot more expensive. In England, I don't find that. For instance, the latest set of Suzuki Bach cantatas are for sale at £7.99 on iTunes or £120 for all 15; on CD, they are about £60.

I was trying this last night with the AAC Roundtrip plugin. The A/B test for 320kbps/44.1 CBR AAC vs. the 24/44.1 .WAV master was really interesting. Maybe I could just put up "difference files" on Soundcloud - all things being equal, everything there gets converted to MP3. But I could do a "null test" and show what happens. Not sure I should get Dropbox involved with this, but I could if people wanted clean WAV files. Let me know.

Oh man! Did you get this back when it was more available? http://www.amazon.com/Great-Pianists-2-Twentieth-Century/dp/B00002EITT/ref=oosr#customerReviews
 
Remember this supposedly "unloved" case?

Now imagine a more modern version designed for the iPhone 7 with a "thicker" bottom that accommodates a 3.5 mm headphone jack (with full iPhone in-line controller support) driven by a built-in headphone amplifier and a DAC that could even decode the much-rumored higher-quality Apple Lossless format. That way, you avoid the unsightly dongle for older headphones and get a lot more usable time per charge with the iPhone 7.

There are two problems with that case:

1) It makes the phone taller, and the larger-screen Plus is already at the limit of what is acceptable for the height of a phone (anything more is going to stick out of my pocket).

So it would mean taking a smaller phone size to accommodate the case.

2) The hump - seriously, it's just a stupid, ugly, unergonomic bit of design.

I don't mind having a case that makes the phone thicker to add battery life, but it should be flat / tapered to the edges, rather than a bump in the middle. Even if that means part of the space is unused.
 