Head-tracking I don't like at all. I'm on the iOS 15 developer beta 1, so I have head-tracking available, and it might improve in later betas, but right now head-tracking on my AirPods Max for music and audio content doesn't make much sense to me.

Luckily it's not anchored to my iPhone when the screen is locked. That would be BAD, as my phone is usually lying somewhere else, so being anchored to it wouldn't work at all.

But something still lets the AirPods Max know how I'm tilting my head. If I swivel my head to the right, the audio shifts to the left channel; if I swivel to the left, the opposite happens. I suppose this makes "sense": if I were listening to speakers or at a concert and moved my head, the audio around the anchor point would behave the same way. But headphones are far more effective, so the effect is ten times more noticeable. If I'm blasting music on a surround system or listening at a concert, the sound waves are bouncing all over the room, so even if I tilt my head the audio doesn't flip from roughly 50/50 to 80% in one ear and 20% in the other. But that is exactly what happens when head-tracking tries to be clever on my AirPods Max.
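To put numbers on that left/right shift: it behaves roughly like a pan law driven by head yaw. Here's a minimal sketch of that idea (not Apple's actual renderer, which does full HRTF processing; the 90° clamp and the constant-power pan law are my own illustrative assumptions):

```python
import math

def yaw_to_gains(yaw_deg):
    """Map head yaw (degrees, positive = head turned right) to left/right
    channel gains with a constant-power pan law. Turning right shifts the
    fixed virtual source toward the left ear, as described above.
    Illustrative only; a real head-tracked renderer uses HRTFs."""
    yaw = max(-90.0, min(90.0, yaw_deg))   # clamp to a quarter turn
    pan = -yaw / 90.0                      # virtual source: -1 = left, +1 = right
    angle = math.pi / 4.0 * (1.0 + pan)    # 0..pi/2 pan angle
    return math.cos(angle), math.sin(angle)

# Facing forward: equal power in both ears.
# yaw_to_gains(0)  -> (0.707..., 0.707...)
# Head turned fully right: audio collapses into the left channel.
# yaw_to_gains(90) -> (1.0, 0.0)
```

Even a crude model like this shows why the effect is so stark on headphones: the gains swing over the full range with nothing like a room's reflections to soften them.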

And to be frank, why would I want this effect at all? I listen to music while moving around my apartment or while walking outside. I will swivel and tilt my head all over the place, and I see no reason why this should affect the sound on my headphones. Is the idea that when you sit down to really listen closely to a Dolby Atmos track, you might want to move your head to pinpoint various audio sources, and head-tracking is supposed to enhance that? So if I notice something interesting going on to the right in the audio spectrum, I could orient my head to listen more closely to that portion of the audio? I can't see any reason why head-tracking makes sense for music.

Luckily the accessibility option to disable "Follow iPhone" seems to also disable the built-in head-tracking on the AirPods Max. So it's possible to turn this off.
 
I think Dolby Atmos has great potential. But for it to really work, the artist and the studio have to put time and effort into making it worthwhile. And I'm doubtful this will happen in the long run unless Dolby Atmos and Spatial Audio become the norm across the industry. Why would artists and studios spend time and effort on this if they can toss out flat music that could basically be played back in mono instead of stereo without any noticeable difference, and still hit the top lists and get tons of plays?

I'm sure some artists and studios care a lot more about soundstage, clever tricks and technology than others, and will use this to their advantage to create something really awesome and new with Dolby Atmos. But I fear most will just keep pumping out music that barely utilises the stereo perspective and call it a day. Sure, they might put out a "Dolby Atmos" version as well, but with minimum effort, and in most cases it will just sound less balanced overall than simply playing the track in stereo or even mono.

So far the list of tracks that feel enhanced by Dolby Atmos is rather small. And some genres don't really benefit from a wider soundstage all that much. I love wide soundstages; that's why I love my Sennheiser HD800S. But some genres benefit more from a wider soundstage than others, and some barely benefit at all. Genres like hip-hop and metal/rock work very well with a more V-shaped sound where the soundstage is narrower and more in your head, while genres like jazz, classical and anything built around an orchestra tend to benefit a lot from a greater soundstage. Some orchestral tracks that aren't even available in Dolby Atmos sound better to me when I enable "Spatialise Stereo" on my AirPods Max, which basically takes the stereo version and tries to upmix it toward Dolby Atmos. The major drawback is that the vocals become less pronounced, but everything else tends to sound better to me even when Dolby Atmos is simply emulated from a stereo track. For music with an orchestra, that is.


EDIT:

I have to agree with Apple that for some listeners Dolby Atmos will feel less natural simply because it's different. Tracks that really utilise Dolby Atmos have many things going on, making me focus less on the music. But I feel this is more about me not being used to the format and what it offers, not about it being worse.
 
The effect of head tracked stereo spatialization is that of removing your headphones and listening to perfectly placed high end stereo speakers in front of you. It's quite shocking to flip between the two modes.
 
Music is a soundfield, though, isn’t it? I mean, even if musicians are playing in front of you, they’re not all playing from the same point on stage; there’s audio coming from centre, left and right and rebounding off the surfaces around you. Artists and mixers sometimes even prefer one studio over another because the room’s acoustics enhance the audio, so it would appear that the soundfield is important; it just hasn’t usually had an opportunity to be captured and delivered to consumers.
Well yes. And for some live music, such as jazz in clubs or chamber music played in the rooms it was actually intended for, you may well want to capture that ambience.
The strange thing is that while binaural recording does a wonderful job of that, it may still not be what you want.
Oh, God. I’ll try not to write an article.
Sound is part of our localisation system and works with our eyes to help us not get eaten as small, non-ferocious mammals. We are good at localisation to the sides, which helps us turn our vision toward a sound, determine its source and track it with our eyes. We’re actually not that great at determining the origin of sounds that are broadly in front of us (which makes sense, as there is little in the way of phase and occlusion cues to help us). It’s also part of the reason we can watch movies without the sound actually coming from the actors’ mouths: the brain lets the visual data dominate the interpretation of the whole. Even though this can lead to phenomena such as perceptual front-to-back inversion for binaural recordings, the most obvious result when recording music binaurally is that, compared to traditionally miked setups, there is ”too much room”, and you hear the music without the aid of visual cues to help you separate the sound sources.
Example: I recorded a trad-jazz band for a friend of mine who plays banjo. He was super disappointed that he basically couldn’t be heard by the audience! Which was true in a way, but not quite how you perceive it as a spectator, since seeing him play helps your brain separate and fill in the fragmented, weak sound cues that are there. Remove the visuals, and he effectively disappeared.
”Correct” is not necessarily ”good”, when it comes to music recording. Balancing the instruments, and (depending on what and how you record) placing them requires skill to do well.
Unfortunately, music appropriately recorded for stereo loudspeaker reproduction typically just doesn’t contain the information our minds need for anything other than ”in front” localisation. That’s arguably even good, because then we don’t have a lot of recorded ambience shooting at us from the front (the wrong direction) and, adding insult to injury, mismatching the real space our eyes see when playing the music, which adds its own set of reflections and colouring.
Balance is a tricky thing.

I strongly recommend anyone curious about sound and recording to try binaural recording. The best way to try it is to buy a couple of omnidirectional mics that you can attach to glasses close to your ears. This will get the important stuff right: distance between the ears, head occlusion, shoulder reflections, the sonic character of flesh (eugh! 😀) and so on. Don’t worry about the colouring of your outer ear and ear canal that recording with a dummy head would capture, because 1) it is added anyway as you play it back on circumaural headphones (and simulated by earphones) and 2) it is highly individual, so trying to capture it with anything but the listener’s own ears will just introduce artifacts. (And there are no appropriate playback devices for such recordings, so you would have to try to reverse the colouring anyway, or the sound will have passed through TWO sets of ears before it reaches your eardrum.) Binaural recording is super interesting and drives home a lot of points about how our senses actually work. I recommend it for experimentation for everyone with an interest in audio recording and reproduction!

God dammit. I wrote an article on my phone. 😀
 
I’m probably in the minority among the posters here when I say I like the Spatialize Stereo better than the plain old stereo. Not everything sounds better and definitely not a fan of the anchor point for music but I’m enjoying the virtualization better for most stereo music content.

I haven’t been around MacRumors very long, but one thing I noticed quickly: if you like something Apple makes, you’re probably in the minority here. In my experience, don’t take the complaints here as anything more than complaints. (To be fair: 1. it inevitably and eventually happens to most chat boards, just the natural internet progression. 2. A fair number of new accounts pop in just to push their anti-Apple view, and/or they’re doing their job.)
 
The effect of head tracked stereo spatialization is that of removing your headphones and listening to perfectly placed high end stereo speakers in front of you. It's quite shocking to flip between the two modes.
This is what an ideal crossfeed circuit would achieve. :)
Actually, it’s not such a challenge to achieve good crossfeed in this digital day and age. I’ll point out a couple of things, though: it will change the perceived frequency response of the recording, even if you don’t tinker with the response of the original channel, because you are adding some of the other channel with frequency-dependent attenuation and time delay.
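For the curious, those ingredients (opposite-channel bleed with attenuation, delay and head-shadow filtering) can be sketched in a few lines. The specific numbers here (-6 dB, 0.3 ms, a 700 Hz one-pole low-pass) are illustrative guesses on my part, not any particular circuit’s values:

```python
import math

def crossfeed(left, right, sr=44100, delay_ms=0.3, atten_db=-6.0, cutoff_hz=700.0):
    """Toy crossfeed: feed each channel into the opposite ear, low-pass
    filtered (head shadow), attenuated and delayed (interaural time
    difference). Parameter values are illustrative, not a real design."""
    d = int(sr * delay_ms / 1000.0)                # interaural delay in samples
    g = 10.0 ** (atten_db / 20.0)                  # dB -> linear gain
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)  # one-pole low-pass coefficient

    def bleed(src):
        out, state = [], 0.0
        for x in src:
            state = (1.0 - a) * x + a * state      # head-shadow low-pass
            out.append(g * state)
        return [0.0] * d + out[:len(src) - d]      # delayed, same length

    lb, rb = bleed(left), bleed(right)
    out_l = [x + y for x, y in zip(left, rb)]      # left ear hears some right
    out_r = [x + y for x, y in zip(right, lb)]     # right ear hears some left
    return out_l, out_r
```

You can see in the summed output exactly why the perceived frequency response changes: the bleed adds mostly low-frequency energy from the other channel on top of the untouched original.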

Arguably this is entirely appropriate as the original mix was supposed to be played back on speakers, but may nevertheless be perceived as "wrong" if you are used to the unprocessed signature.

Also, while you can make a good approximation of the sound being reproduced by two point sources hovering in space, in reality the sound of loudspeakers is strongly affected by the acoustics of the room they play in. Taking room acoustics out of the equation is arguably one of the main benefits of headphone playback under any circumstances; I’m just pointing out that we are not doing a full environmental simulation. (We typically don’t add room reinforcement of bass, for instance, nor, blessedly, standing waves.)

Headphones sounding bass-weak even when measuring objectively flat is partly caused by these effects: with loudspeakers, both ears would typically pick up the bass from both channels with little attenuation, and it would get room reinforcement as well. Nor, of course, do headphones have the physical, felt component of bass perception. The Harman curve for desired headphone response tries to broadly take this into account, but fiddling with that response could yield undesirable results. An advantage Apple has, of course, is that they know the response of the target headphones.

At the end of the day, most of us have been used since childhood to music on headphones sounding a certain way, and we’ve come to accept that as "right" even though objectively it isn’t. Anything that changes those traits is likely to face an uphill battle for acceptance.
 
At the end of the day, most of us have been used since childhood to music on headphones sounding a certain way, and we’ve come to accept that as "right" even though objectively it isn’t. Anything that changes those traits is likely to face an uphill battle for acceptance.
Just another example of Apple skating to where the puck will be. Any 10 year old listening to music on Apple Music today is going to hear a lot of songs (and more over time) in this NEW way of presenting music and will likely always think of ”stereo” as being a little flat. :)
 
I'm new to this feature. What I don't understand is that when I'm watching a movie with Spatial Audio on, the Spatial Audio button has an animation on it, while when I'm listening to music the Spatial Audio label isn't animated. Is this normal? Also, I can't hear any difference when I turn Spatial Audio on or off while music is playing through my AirPods Pro. I set Dolby Atmos to always on 😶
 
Am I the only one who finds that stereo sounds better than spatial audio? I played a song, then went into the music settings and switched it between “off” and “always on” as the track was playing and found stereo to be so much better. I’m using AirPods Pro btw.
I agree. It's bass-heavy, and I certainly don't get the feel of being in the middle of it all. I'm using Powerbeats wireless and a Sonos Arc 5.1.2 via Apple TV.
 
I'm new to this feature. What I don't understand is that when I'm watching a movie with Spatial Audio on, the Spatial Audio button has an animation on it, while when I'm listening to music the Spatial Audio label isn't animated. Is this normal? Also, I can't hear any difference when I turn Spatial Audio on or off while music is playing through my AirPods Pro. I set Dolby Atmos to always on 😶
Headtracking vs static
 
Not impressed so far. If you watch Captain Marvel, Nirvana's Come As You Are sounds fantastic in Spatial Audio in the movie. Nothing I have heard in Apple Music sounds anywhere close to that.
 
After getting my hands on the public beta… spatial audio has gotten better, imo… maybe it's the head tracking? That wasn't there before, but I'm definitely digging it loads more! Obviously I knew it would get better as people learn how to mix for it, etc… but yeah, weird how it just sounds so much better.
 
After getting my hands on the public beta… spatial audio has gotten better, imo… maybe it's the head tracking? That wasn't there before, but I'm definitely digging it loads more! Obviously I knew it would get better as people learn how to mix for it, etc… but yeah, weird how it just sounds so much better.
Yes, spatial audio is so much better and more impressive in iOS 15.
 
As a music producer who spends a lot of time and effort mixing to get just the right stereo sound field and 'wall of sound,' putting this kind of crap on top of one of my tracks would make me cry. lol. Please don't dishonor the work of many audio engineers and technicians, mastering experts, etc. by applying this junk to their tracks.
It's like saying people shouldn't be able to use mods on games they purchased because the devs decided how the game is supposed to be played. Don't feel bad; it's a good thing that we have options. A lot of people will still appreciate your work, and at the same time, whether you see Spatial Audio as a gimmick or not, let's just let people enjoy what they want.
 
As a music producer who spends a lot of time and effort mixing to get just the right stereo sound field and 'wall of sound,' putting this kind of crap on top of one of my tracks would make me cry. lol. Please don't dishonor the work of many audio engineers and technicians, mastering experts, etc. by applying this junk to their tracks.
What about people who will listen to your tracks with their default $9 headphones, or the crappy default stereo in their car, or whatever? I mean, a *lot* of people will never listen to your music on the right equipment, so isn't that "denaturing" your work in a way?
When I talk about headphones with some people, including my wife, it never crosses their mind to spend good money on good equipment for listening to music…
If some people are *so* sensitive to the difference between lossy and lossless that it's a huge deal for them, what about the difference made by the crappy default speakers millions of people use every day?
 
Spatialize stereo just sounds worse to me on the AirPods Max, so I'll be keeping this setting off. I also don't see the benefits of this feature.
 