Short version: you need a sample rate at least twice the highest frequency you want to capture. Humans can hear from about 20 Hz to 20,000 Hz.
On top of that, only children can hear all the way up to 20 kHz. Once someone hits their mid-teens, they can't hear above 18 kHz or so. By the time you're 30, you're at 15-16 kHz, and that's if you've taken good care of your hearing.
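The arithmetic above can be sketched in a few lines of plain Python; the hearing-limit figures are just the round numbers from the post:

```python
# Nyquist criterion: to capture content up to f_max without aliasing,
# the sample rate must be at least 2 * f_max.
def min_sample_rate(f_max_hz: float) -> float:
    """Minimum sample rate for a signal band-limited to f_max_hz."""
    return 2.0 * f_max_hz

print(min_sample_rate(20_000))  # 40000.0 -- CD's 44.1 kHz clears this with headroom
print(min_sample_rate(18_000))  # 36000.0 -- a typical mid-teens upper limit
print(min_sample_rate(15_000))  # 30000.0 -- common by age 30
```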
 
I can tell the difference between the qualities of these formats. Get your hearing checked if you can't.
Yeah, I know my hearing is just OK. I can't even hear high-pitched noises, so I know lossless quality would be useless even on my computer. Oh well, it just means I save more money, plus I save my ears the pain of not knowing the difference, lol.
 
Makes sense. I don't understand why they discontinued the HomePod, though.
Maybe they’ve ‘discontinued’ it early so people don’t get annoyed when the next version comes along right after they’ve purchased the ‘old’ one. It’s less of an issue for other devices, as the product cycles are quite predictable.

I want a HomePod/Airport combo…
 
The lack of hardware support for hi-res makes me think Apple have been forced to rush their service out before it was originally planned, to counter Spotify. It’s unlike them not to have the hardware and software in sync at launch.
 
“Apple Music's standard audio and lossless audio will be ‘virtually indistinguishable’.”

Is that right? Why the big fuss then, seems like a huge waste of resources the whole thing, even me following the saga…

It is right. The maths just works that way: you can completely recreate the analogue waveform from the digital samples at the sample rates Apple Music and Spotify use. There’s some argument about whether that waveform is a true representation, given the other information removed by specific codecs like AAC, but really it’s very, very, very hard* to reliably and repeatedly tell the difference.

*Probably impossible

The fuss is because, despite this, users still clamour for a lossless service because they see it as superior. I guess you could say that AAC and MP3 were born in a bandwidth-constrained environment, and if we now have the available bandwidth then why not take advantage of it, but it’s not particularly efficient.

Regardless, audiophiles want to get as close to the true sound as possible. The reality, however, is that there are so many components in the recording chain, from microphone selection and positioning, to the quality of cables used, the quality of the consoles and engineers, and the studio environment itself, that it’s very difficult to say what “true” is. With acoustic instruments, even variations in ambient atmospheric pressure would produce a different sound.

Having said that, AAC and MP3 are a compromise, and removing them gets you closer to that truth, even if ultimately it’s unachievable.
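To see concretely why the sample rate matters, here is a small numpy sketch (my own toy example, not anything from Apple's pipeline) of what goes wrong when a tone exceeds half the sample rate: it aliases back down into the audible band.

```python
import numpy as np

fs = 44_100              # CD-style sample rate
f_tone = 25_000          # above the Nyquist limit of 22 050 Hz
n = np.arange(fs)        # exactly one second of samples
x = np.sin(2 * np.pi * f_tone * n / fs)

# Find the dominant frequency of the sampled signal via FFT.
# For a 1-second signal the rfft bins are spaced exactly 1 Hz apart.
spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum)
print(peak_hz)           # 19100, i.e. 44100 - 25000: the tone folded back down
```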
 
This entire rollout makes no sense to me. They’re making a big deal out of it while at the same time admitting it is “virtually indistinguishable” from the lossy format, and most of their devices don’t even support it. It just seems very stupid. I know there’s zero chance I could ever tell the difference.

I think Apple is out of control without any real leadership. Even if HomePod sales were in the toilet, why would they discontinue it when they were about to launch hi-res audio? How can one group release $550 headphones, and another group release an Apple TV updated for the first time in four years, all of it incompatible with their new music format?

And then those other groups are covering by saying you can't tell the difference anyway. So one Apple group is saying another group's big, exciting release is "virtually indistinguishable" from the old one.

All the pieces of Apple are doing their own inconsistent things while there is no actual leadership. Meanwhile Tim Crook is too busy carrying on his virtue signalling social justice work while sucking up to China. He's got way too much on his plate to worry about unimportant stuff like what's going on at Apple. I wonder if he even knows what Apple is about to release before he's handed his script at the launch event.
 
Yeah, I know my hearing is just OK. I can't even hear high-pitched noises, so I know lossless quality would be useless even on my computer. Oh well, it just means I save more money, plus I save my ears the pain of not knowing the difference, lol.

The vast majority of people are unable to discern AAC 256 (Apple's standard format) from lossless. Those who claim they can should do a proper ABX test to figure out whether they actually can. It is unlikely, but possible, since the lossy format does transform the audio signal in potentially perceptible ways.

Nobody can perceive the difference between 44.1 kHz and anything higher, so "Hi-Resolution" audio with higher sample rates (e.g. 96-192 kHz) is completely useless for playback purposes. At worst, it can actually be detrimental, since most audio setups are not designed to handle the ultrasonic part of the signal properly, potentially causing audible distortion during playback.
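For anyone who wants to run the ABX test mentioned above, scoring it is just a binomial tail against coin-flip guessing. A minimal sketch (the 12-of-16 figure is only an example, not any official threshold):

```python
import math

# ABX scoring sketch: in each trial the listener hears A, B, and an
# unknown X, and must say whether X was A or B. Pure guessing is a fair coin.
def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of >= `correct` hits by pure guessing."""
    return sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 right is hard to do by chance; 9 of 16 is entirely unremarkable.
print(round(abx_p_value(12, 16), 3))   # 0.038
print(round(abx_p_value(9, 16), 3))    # 0.402
```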
 
It is right. The maths just works that way: you can completely recreate the analogue waveform from the digital samples at the sample rates Apple Music and Spotify use. There’s some argument about whether that waveform is a true representation, given the other information removed by specific codecs like AAC, but really it’s very, very, very hard* to reliably and repeatedly tell the difference.

*Probably impossible

As someone who has studied Numerical Analysis at the graduate level and, in the early DVD days, directed hardware engineers in implementing the iDCT algorithms needed for mp3 decoding in silicon, the only thing I can think of to say to that is:

"Huh?"

Any digital representation of an analog signal is an approximation; you can never recreate the original analog exactly from a digital sample. And as for removing information with DCTs like AAC and mp3 use... the coefficients of the polynomial function from the PCM audio take up exactly the same space as the cosine coefficients the algorithm generates. You don't save any space with the algorithm until you start zeroing some of the coefficients of the high-frequency cosines.

Also, it is not a purely mathematical algorithm. They use psychoacoustical modelling of human hearing to remove nuances that people are very bad at hearing.

And there have always been people who say nobody can tell the difference. I remember being told that 128kbps mp3 was impossible to tell from a CD. That's very much not true, and the AAC used in Apple Music sounds far better than 128kbps mp3. Personally, I can't tell 256kbps AAC from a CD, at least on the equipment I have, but my ears are older now, and that doesn't mean there isn't a difference that a decent number of people can tell.

I remember audio tapes which sound like crap by modern standards being marketed as indistinguishable from live.
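The point above about the transform itself saving nothing can be sketched with a toy DCT-II (this illustrates the principle only; it is not the actual AAC/MP3 filterbank):

```python
import numpy as np

def dct_ii(x):
    """Unnormalized DCT-II: N samples in, N coefficients out."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

def idct_ii(X):
    """Exact inverse of dct_ii above."""
    N = len(X)
    k = np.arange(N)
    weight = np.where(k == 0, 1.0, 2.0) / N
    return np.array([np.sum(weight * X * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for n in range(N)])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)        # 64 PCM-style samples
X = dct_ii(x)                      # 64 coefficients: no space saved yet
assert np.allclose(idct_ii(X), x)  # perfect round trip while all 64 are kept

X[48:] = 0                         # zero the top quarter of the cosine coefficients
approx = idct_ii(X)                # now lossy: this is where the savings come from
```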
 
If you listen to rock music, the intentional distortion always sounds like muddled garbage when compressed; lossless is actually listenable.
 
Any digital representation of an analog signal is an approximation; you can never recreate the original analog exactly from a digital sample.

The devil is in the details. The point is not that you can exactly recreate the entirety of the original analog signal: it is that you can perfectly capture any signal limited to a given frequency band, namely up to half the sampling rate used to capture it. Emphasis mine:

The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are band-limited to a given bandwidth, such that no actual information is lost in the sampling process.

The theorem is used in conjunction with the known limitations of human hearing, e.g. being unable to hear anything above 20 kHz, to perfectly capture the entirety of an analogue signal within the frequency range perceivable by a human.
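The reconstruction the theorem promises can be sketched directly with Whittaker-Shannon (sinc) interpolation; the parameters here are arbitrary illustration values:

```python
import numpy as np

def reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: evaluate the band-limited signal at time t."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t - n))  # np.sinc is sin(pi x)/(pi x)

fs = 8_000                  # sample rate, comfortably above 2 x 1 kHz
f = 1_000                   # a 1 kHz tone, well below the Nyquist limit
n = np.arange(512)
samples = np.sin(2 * np.pi * f * n / fs)

t = 256.5 / fs              # a point exactly between two samples
error = abs(reconstruct(samples, fs, t) - np.sin(2 * np.pi * f * t))
print(error < 1e-2)         # True: the in-between value is recovered almost exactly
```

(With an infinite run of samples the reconstruction would be exact; the tiny residual here comes only from truncating to 512 samples.)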
 
I read a MacRumors article earlier that said lossless is not supported over Lightning, at least on AirPods Max.
I saw this article but I think I missed this line or thought it meant something else:
“Apple's ‌AirPods Max‌ headphones are equipped with a Lightning port, but it is limited to analog sources and will not natively support digital audio formats in wired mode.”

I thought Lightning was a digital-only connection, but it appears that in the APM's case, it's analog-only. I don't quite understand, but that's what this line seems to say.
Supposedly, the APM also doesn't just pass this analog audio signal along to its speakers: it first converts it back into digital, then back into analog for final output to its speakers. Why would it do that? Why wouldn't it either accept digital audio or at least just pass the analog audio along?
I'm pretty confused.
 
When Amazon and Spotify jumped on lossless, Apple management decided they had to follow. The marketing department did what it does best and sold it as the next best thing ever, even while engineering pointed out nobody would hear the difference and they had no supporting hardware. As a result, it blew up in their faces, as everyone believed the marketing spin over what the experts said.

So now they have to scramble with support documents and fuzzy language stating it’s great but indistinguishable, and that all their devices produce exceptional quality even when it’s not quite lossless, …

Next month this will all die down, when people realise they really can’t hear any difference and notice their device storage is full, because ALAC files are huge.
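For a sense of scale on the storage point, some back-of-the-envelope arithmetic with typical stream parameters (ALAC usually compresses PCM by roughly 40-50%, so it still dwarfs AAC):

```python
# Convert a constant bitrate into megabytes per minute of audio.
def mb_per_minute(bits_per_second: float) -> float:
    return bits_per_second * 60 / 8 / 1_000_000

aac = mb_per_minute(256_000)                 # AAC at 256 kbit/s
pcm_cd = mb_per_minute(44_100 * 16 * 2)      # CD-quality PCM: 44.1 kHz, 16-bit, stereo
pcm_hires = mb_per_minute(192_000 * 24 * 2)  # "hi-res" PCM: 192 kHz, 24-bit, stereo
print(round(aac, 1), round(pcm_cd, 1), round(pcm_hires, 1))  # 1.9 10.6 69.1
```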
So you're saying Apple is a follower, not an innovator. Lol.
Of course it’s the market and not Apple. Lol
 
As someone who has studied Numerical Analysis at the graduate level and, in the early DVD days, directed hardware engineers in implementing the iDCT algorithms needed for mp3 decoding in silicon, the only thing I can think of to say to that is:

"Huh?"

Any digital representation of an analog signal is an approximation; you can never recreate the original analog exactly from a digital sample. And as for removing information with DCTs like AAC and mp3 use... the coefficients of the polynomial function from the PCM audio take up exactly the same space as the cosine coefficients the algorithm generates. You don't save any space with the algorithm until you start zeroing some of the coefficients of the high-frequency cosines.

Also, it is not a purely mathematical algorithm. They use psychoacoustical modelling of human hearing to remove nuances that people are very bad at hearing.

And there have always been people who say nobody can tell the difference. I remember being told that 128kbps mp3 was impossible to tell from a CD. That's very much not true, and the AAC used in Apple Music sounds far better than 128kbps mp3. Personally, I can't tell 256kbps AAC from a CD, at least on the equipment I have, but my ears are older now, and that doesn't mean there isn't a difference that a decent number of people can tell.

I remember audio tapes which sound like crap by modern standards being marketed as indistinguishable from live.

Hmm, OK, maybe you know more about the maths side than me, then. But I studied computing at undergraduate level and music to an advanced level, and I thought that Nyquist-Shannon proved you could reconstruct the original waveform from a discrete sample given a sufficient sample rate. This was in a networking context, though, so maybe I’m missing some detail?

“Also, it is not a purely mathematical algorithm. They use psychoacoustical modelling of human hearing to remove nuances that people are very bad at hearing”.

Yeah, I agree, hence why I mentioned information being removed by the codecs; some of what’s removed isn’t just hard to hear, it’s impossible. But my larger point is that it’s ultimately futile anyway, because you first have to define what “true” is. If you define that as the master, then 256kbps AAC at 44.1 kHz is good enough that hardly anyone can reliably and repeatedly tell the difference. But even if you could tell the difference, are you really listening to what the musicians actually sounded like? Almost certainly not, because there’s a bunch of other stuff between you and the musicians that you can’t take out of the equation, from microphones to the decisions of recording engineers.
 
So you're saying Apple is a follower, not an innovator. Lol.
Of course it’s the market and not Apple. Lol
That’s not at all what I was saying. But in this case: yes. For years they were betting on AAC; they removed the headphone jack and went all-in on wireless over Bluetooth. So it makes zero sense to offer lossless when you look strictly at their current hardware portfolio. Hence the backlash.

If they had a whole new pro line of speakers, headphones, and a hi-res DAC ready to launch alongside AM Lossless, it would have been a completely different story.
 
Hmm, OK, maybe you know more about the maths side than me, then. But I studied computing at undergraduate level and music to an advanced level, and I thought that Nyquist-Shannon proved you could reconstruct the original waveform from a discrete sample given a sufficient sample rate. This was in a networking context, though, so maybe I’m missing some detail?

“Also, it is not a purely mathematical algorithm. They use psychoacoustical modelling of human hearing to remove nuances that people are very bad at hearing”.

Yeah, I agree, hence why I mentioned information being removed by the codecs; some of what’s removed isn’t just hard to hear, it’s impossible. But my larger point is that it’s ultimately futile anyway, because you first have to define what “true” is. If you define that as the master, then 256kbps AAC at 44.1 kHz is good enough that hardly anyone can reliably and repeatedly tell the difference. But even if you could tell the difference, are you really listening to what the musicians actually sounded like? Almost certainly not, because there’s a bunch of other stuff between you and the musicians that you can’t take out of the equation, from microphones to the decisions of recording engineers.
It’s not like the “Wall of Sound” actually faithfully reproduced what the instruments and vocalists sounded like either. That isn’t the goal of the final product; it is only the goal of the pickups. The final product is whatever is produced as the last step before leaving the “hands” of the content creators.

The goal of reproduction is only to try to replicate that final output to the best approximation possible that people will pay for.

So I guess that if “lossless” is something that people are excited to pay for, then it is worth the effort.
 