Agreed. But...


Starting with a lossless file instead of a lossy one is a good thing, particularly in the era of Bluetooth headphones.


Regardless of whether we can hear the difference or not, this was inevitable, as internet connections and infrastructure keep improving over the years. Lossy audio was always a compromise solution. It was never here to stay.


I don't understand why there's an argument about this. A higher-quality source is always a good thing.

Amazon, Spotify, Apple, Deezer, Tidal... all of them will leave lossy audio behind completely at some point.
Exactly. I don’t understand the seemingly relentless effort by so-called objectivists to keep everything RIGHT on the edge of undetectable, especially in cases where getting something better doesn’t cost anything. Even when it DOES cost something to get the tiny extra, it doesn’t cost you anything that I buy it. If Apple removed the option for lossy, maybe I could understand the resentment, but in this case there are literally no losers. People are debating for the sake of debating (so am I, so that’s more a matter of fact than a complaint…)
 
I think the more noticeable difference will be Atmos support, which IS supported by all of their devices.
Unless they are doing something that isn’t clear in the communication (which is a possibility), actually no. Atmos music is not the same as spatial audio in video playback. Atmos in iTunes means music in surround. In headphones this is downmixed to stereo. In principle it doesn’t matter whether you do this on the iPhone or in the headphones. The spatial feature for video with head tracking doesn’t make sense for an audio-only source.

Only if the headphones are doing some “fake surround” stereo based on the Atmos data, something that is not being done on the iPhone (although it could just as well be), will there be an actual difference. I suspect not, since Atmos is officially supported on all AirPods.

In other words, “spatial audio” in iTunes is for playing music in surround systems. Not for headphones (but I am willing to be proven wrong)
 
Might as well also announce lossless video that nothing supports
Lossless video would actually make a significant difference… although the bitrates needed are “a little” prohibitive :)

Video is a great example of where you end up if you keep chasing “undetectable” quality deteriorations. Every time a new codec with lower bitrates shows up, you get a demonstration of how you can get “the same” quality at a lower bitrate. You will never, ever, see someone demonstrating better quality at the same bitrate. This is because, as stated above, in blind tests the differences have to be surprisingly large to be noticeable, so a test like that is not very impressive. So you turn the argument around so that undetectable counts as good.

This is exactly what objectivists do in the audiophile discussions.
 
Keep in mind, the difference is factually there whether you can spot it in a blind test or not. A blind test is a poor - actually, scientifically unusable - tool for confirming “no difference”. A difference has to be very large, waaay past the point of “relevance”, for people to reliably call it out in a blind test. Meaning, there are many factually meaningful differences that you will not be able to confirm in blind tests, yet are relevant because, as I state above, degradations are cumulative.

In other words, blind tests do not and cannot provide proof that you can’t hear the difference. Saying otherwise is misusing and misunderstanding how blind tests work.
Nobody is really claiming there is no difference. What the results of a multitude of tests show us is that people can’t tell the difference. How is a blind test not scientific? Blind tests are used all the time in scientific research, with researchers attempting to use blind tests wherever possible. The entire point of statistical analysis is to test whether there is any meaningful difference, however small, between a treatment and a non-treatment (in this case compressing vs. not compressing). If the difference is imperceptible, then there is no meaningful difference.
 
In other words, blind tests do not and cannot provide proof that you can’t hear the difference. Saying otherwise is misusing and misunderstanding how blind tests work.

The ABX method is explicitly designed to test whether there are discernible differences between two inputs and it's considered valid if performed properly.

The main issue is that often it's just not performed properly, not in the method itself.
 
The ABX method is explicitly designed to test whether there are discernible differences between two inputs and it's considered valid if performed properly.

The main issue is that often it's just not performed properly, not in the method itself.
I mean, in audio it only tests whether you can discern if two audio sources are different - and because you are not listening to them simultaneously (impossible) you can't qualitatively tell which one you prefer. In other words, it is a poor test for audio with minuscule differences and an even worse one for figuring out whether those minuscule differences amount to a more subjectively pleasurable listening experience over time.

As a scientifically minded audio engineer, my opinion is that ABX testing of minuscule variants of lossy/lossless audio is fairly pointless. It's not how you listen to music you want to listen to over a period of time, and your focus on trying to pick out differences overrides any subtle improvements you'd appreciate by just listening for enjoyment, whether with intent or defocused. A simple phase shift between the lossless and lossy versions can bring any differences into hearing range and prove to you that there is indeed a difference. Qualitatively that might not matter to you when just listening, if oblivious to the source. But it can absolutely make a difference to your enjoyment, even if that effect is purely placebo. Your mind is a wonderful thing, and perception can be affected by even the smallest difference.
 
Video is a great example of where you end up if you keep chasing “undetectable” quality deteriorations.

No video codec in common use claims such a thing, though: the goal there is to achieve low enough bitrates without sacrificing too much fidelity, but that fidelity is being sacrificed is a given. AFAIK no such codec claims "undetectable quality deteriorations", and many have well-known drawbacks in the way they operate which make some patterns easily detectable.

The difference is that with video typically the required bitrate to achieve undetectable fidelity is so high that compromises are a necessity, but that's not the case with audio where 16 bits at 40kHz are theoretically enough to cover the whole range of human hearing capability. 16 bit can cover the dynamic range between a mosquito and a jackhammer; 40kHz is able to perfectly sample the 20kHz sound frequency range.
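Those two figures check out with the standard back-of-the-envelope formulas. A minimal sketch, assuming only the well-known ~6 dB-per-bit rule for linear PCM and the Nyquist sampling limit:

```python
# Theoretical dynamic range of linear PCM: roughly 6.02 dB per bit, plus 1.76 dB
bits = 16
dynamic_range_db = 6.02 * bits + 1.76  # ~98 dB, mosquito-to-jackhammer territory

# Nyquist: a sample rate of fs captures frequencies up to fs / 2 without loss
sample_rate_hz = 40_000
max_frequency_hz = sample_rate_hz / 2  # 20 kHz, the upper limit of human hearing

print(f"{bits}-bit PCM dynamic range: {dynamic_range_db:.1f} dB")
print(f"{sample_rate_hz} Hz sampling captures frequencies up to {max_frequency_hz / 1000:.0f} kHz")
```

Which is why CD-quality 16-bit/44.1 kHz already sits slightly above the theoretical requirement.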
 
A simple phase shift between the lossless and lossy versions can bring any differences into hearing range and prove to you that there is indeed a difference. Qualitatively that might not matter to you when just listening, if oblivious to the source. But it can absolutely make a difference to your enjoyment, even if that effect is purely placebo. Your mind is a wonderful thing, and perception can be affected by even the smallest difference.
That's exactly the argumentation every "I can hear the difference clearly like night and day" golden ear switches to after failing a scientific blind test, lol.
 
Exactly. I don’t understand the seemingly relentless effort by so-called objectivists to keep everything RIGHT on the edge of undetectable, especially in cases where getting something better doesn’t cost anything. Even when it DOES cost something to get the tiny extra, it doesn’t cost you anything that I buy it. If Apple removed the option for lossy, maybe I could understand the resentment, but in this case there are literally no losers. People are debating for the sake of debating (so am I, so that’s more a matter of fact than a complaint…)

Can I quote you - citing your name (nick), of course - and this post, in a discussion we are having about this on Reddit, please?

Great post.
 
It's funny how often the people here who insist that they hear the difference like night and day are simply asked to do this test, and not ONE gets back with feedback on it. Seems to have its reason ... 😉
Misread post. Carry on.
 
I mean, in audio it only tests whether you can discern if two audio sources are different - and because you are not listening to them simultaneously (impossible) you can't qualitatively tell which one you prefer. In other words, it is a poor test for audio with minuscule differences and an even worse one for figuring out whether those minuscule differences amount to a more subjectively pleasurable listening experience over time.

The need for short samples is a well known caveat which needs to be taken into account when performing the test properly, but AFAIK when performed properly there is no credible study proving it leads to flawed results.

"Qualitatively telling which one is preferred" is outside the scope of the methodology: the methodology is designed to verify whether there are discernible differences, not whether one sample is "more pleasing" than another.

The question is more: if there are no discernible differences between the samples, why would one sample be "more pleasing" than the other? Theoretically there could be differences detected only subconsciously, but AFAIK there is no credible research supporting such a hypothesis.

But it can absolutely make a difference to your enjoyment, even if that effect is purely placebo. Your mind is a wonderful thing, and perception can be effected by even the smallest difference.

I do agree that psychological aspects can play a big role, but that's actually why ABX tests are so important: they allow us to figure out whether there is an objective, perceivable difference or not. Again, the goal of the ABX methodology is not to figure out what is "more enjoyable", it's to figure out whether there are objective discernible differences.
 
Unless they are doing something that isn’t clear in the communication (which is a possibility), actually no. Atmos music is not the same as spatial audio in video playback. Atmos in iTunes means music in surround. In headphones this is downmixed to stereo. In principle it doesn’t matter whether you do this on the iPhone or in the headphones. The spatial feature for video with head tracking doesn’t make sense for an audio-only source.

Only if the headphones are doing some “fake surround” stereo based on the Atmos data, something that is not being done on the iPhone (although it could just as well be), will there be an actual difference. I suspect not, since Atmos is officially supported on all AirPods.

In other words, “spatial audio” in iTunes is for playing music in surround systems. Not for headphones (but I am willing to be proven wrong)

The head tracking is not a pre-requisite.

By default, Apple Music will automatically play Dolby Atmos tracks on all AirPods and Beats headphones with an H1 or W1 chip.

Apple spatial audio takes 5.1, 7.1 and Dolby Atmos signals and applies directional audio filters, adjusting the frequencies that each ear hears so that sounds can be placed virtually anywhere in 3D space. Sounds will appear to be coming from in front of you, from the sides, the rear and even above. The idea is to recreate the audio experience of a cinema.


Apparently the stereo signal is not spiced up, but they actually process and play back a surround sound track using a series of techniques. When you play material in stereo, spatial audio is not used.

Must be similar to how a stereo pair of HomePods can now play Dolby 5.1, 7.1, and Atmos content from an Apple TV 4K (which will also support Dolby Atmos in Apple Music).
 
Streaming is not like traditional radio or linear TV in real time. Streaming actually downloads the title and plays it back while buffering, AFAIK. Apple Music pre-loads the whole title after playback has started.

It might just add additional cost or throttling when on a mobile network with data cap.
Yet during buffering, and depending on connection quality, the bitrate can fluctuate sharply. Now consider the data usage difference between MP3/AAC and lossless (e.g. ALAC/FLAC/AIFF), which is roughly five times the size (average bitrates of 256/320 kbps vs. 1411 kbps): you'll either be waiting a bit before anything plays while a download actually completes, or dealing with a surprising amount of data usage.
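To put rough numbers on that, here's a quick sketch; the 256 kbps and 1411 kbps averages are the figures quoted above:

```python
def megabytes_per_minute(bitrate_kbps: float) -> float:
    """Convert an average audio bitrate in kilobits per second to MB per minute."""
    bits_per_minute = bitrate_kbps * 1000 * 60
    return bits_per_minute / 8 / 1_000_000  # bits -> bytes -> megabytes

aac_mb = megabytes_per_minute(256)        # ~1.9 MB per minute of audio
lossless_mb = megabytes_per_minute(1411)  # ~10.6 MB per minute of audio

print(f"256 kbps AAC: {aac_mb:.1f} MB/min")
print(f"1411 kbps lossless: {lossless_mb:.1f} MB/min ({lossless_mb / aac_mb:.1f}x the data)")
```

So a typical 3-4 minute track goes from roughly 6-8 MB to 30-40 MB, which is where the data-cap concern comes from.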
 
I can tell the difference between the qualities in these formats. Check out your hearing if you can’t.
That’s not a very nice thing to say. Maybe you have an amp and decent enough speakers to hear the difference. I only managed to tell the difference recently with my new setup.
 
Keep in mind, the difference is factually there whether you can spot it in a blind test or not. A blind test is a poor - actually, scientifically unusable - tool for confirming “no difference”. A difference has to be very large, waaay past the point of “relevance”, for people to reliably call it out in a blind test. Meaning, there are many factually meaningful differences that you will not be able to confirm in blind tests, yet are relevant because, as I state above, degradations are cumulative.

In other words, blind tests do not and cannot provide proof that you can’t hear the difference. Saying otherwise is misusing and misunderstanding how blind tests work.

So what kind of test would be correct then? Or are you saying human hearing is outside scientific testing?

The blind test is just listening to music, just that the listener doesn't know what he or she is listening to. If people can't consistently and reliably (significantly better than pure chance) hear the difference, the difference either isn't there or it doesn't matter.
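For what it's worth, "significantly better than pure chance" is usually checked with a one-sided binomial test. A minimal sketch (the 16-round session size is just an example, not any official protocol):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting `correct` or more right out of `trials` ABX
    rounds by pure guessing (chance of 0.5 per round)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# With 16 rounds, you need 12 or more correct to get below p = 0.05
print(f"12/16 correct: p = {abx_p_value(12, 16):.3f}")  # ~0.038, significant
print(f"10/16 correct: p = {abx_p_value(10, 16):.3f}")  # ~0.227, consistent with guessing
```

Which is also why a handful of lucky guesses in a short session proves nothing either way; more rounds tighten the test.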
 
Yet during buffering, and depending on connection quality, the bitrate can fluctuate sharply. Now consider the data usage difference between MP3/AAC and lossless (e.g. ALAC/FLAC/AIFF), which is roughly five times the size (average bitrates of 256/320 kbps vs. 1411 kbps): you'll either be waiting a bit before anything plays while a download actually completes, or dealing with a surprising amount of data usage.

All the streaming services support offline playback. I almost never literally stream the music over the network.
 
I have Bose QC 35 II.
The kit included a wire for a wired connection.
If I listen to music via the wired connection, will it also "Not Be Completely Lossless"?
Yes, I also have the Lightning to 3.5 mm headphone adapter which was in the box a long, long time ago
Your Bose are different than the AirPods as they actually have an analog input.
 
It's been explained that Apple never intended to announce this yet.
It's totally rushed, as is evident from the fact that none of their devices are ready to support it.
The explanation was that it's entirely due to the trial that's been going on, which ended with Tim Cook on the stand a few days ago. Apple released this news as a small part of how they wish to be perceived over the many points brought up during this trial.
 