
paulchiu
macrumors 6502 · Original poster · Feb 26, 2009 · nyc
Just took three whole hours to upgrade to iOS 16 on my 12 Pro Max.
The home screen is nice and all, but now, after scanning both my ears with this 3D ear mapping for spatial audio, the real thrill begins.

The effect with a pair of AirPods Pro is noticeable. The soundstage is wider, with more clarity within the music. There's less congestion, and the music appears outside the head, beyond both ears. Listen to something like Vogue and you'll know what I mean.
 
It's the definition of gimmick.

A well-recorded and well-produced stereo recording gives you spatial awareness of the audio. All this function (and functions like it) does is try to add information that simply isn't there.

It also makes anything that is a fairly proper stereo recording sound really weird, since it tries to enhance the spatialness (for lack of a better word) of a recording that is already spatial.

Most studio music recordings don't have that much spatialness to begin with anyway; they are mixed with levels moved between the right and left channels to get the sound the producer wants (and you will never be able to recreate that without exactly the same speakers and room layout as the producer).
 
Does this have any effect on non-Dolby Atmos tracks at all?

Yes. I tried it on Blackpink's latest track, Pink Venom, which isn't in Atmos. The placement of the members from center to left or right as I rotate my body or head is spot-on.
 
It's the definition of gimmick. …
That's not true. Soundstage in over-ear headphones varies drastically between brands/models due to differences in headphone shape, ear shape, openness of the housing, etc. With an earbud, you can't really replicate that without some audio-processing magic. That's what this is about.
 
It's the definition of gimmick. …
Ok. Regardless of your explanation, respectfully, it still sounds awesome to me and completely blows away left/right stereo sound. Period.
 
It's the definition of gimmick. …
Spatial Audio is NOT for music alone. It's a great tool for watching movies with surround soundtracks, like what we get from Apple TV+.
 
I don't understand how a scan of the exterior of your ears can improve the sound of an "in-ear" headphone.

Can anyone ELI5?

I’m skeptical.
I wanted to write a short explanation, but it quickly became a wall of text... I'm not very knowledgeable about this, but I'll explain it the way I understand it. I'm sure someone will correct me if I say something wrong on the internet anyway.

Every human has a unique ear shape. Sound waves that reach your ears bounce off the shape of your outer ears before reaching your eardrums, and the frequency content of a sound wave changes as it bounces around your ear. Even the size and shape of your head and other factors play a role here. This means every person experiences music in a slightly different way.

Spatial awareness is also affected by the shape of the ears. The reason we can tell if a sound is coming from above or below is that sound waves bounce in our ears in a particular way depending on the direction. Headphones (especially in-ear) usually bypass the shape of your ears and deliver audio directly to the eardrums, so the frequency response will be slightly off. Stereo audio usually conveys horizontal spatial awareness by having a different volume for each ear, but that's far from perfect.

It's worth mentioning that binaural microphones exist. These are basically two microphones placed inside the ears of a dummy head with the generic shape of a person. Sounds recorded with these bounce inside the artificial ears, which means that even with normal headphones you'll be able to hear spatial audio. These microphones are used for the ASMR recordings you can find online.

Apple introduced Spatial Audio as a way to emulate how sound is received by generic human ears. Sound sources can be placed in a virtual space, and a head-related transfer function (HRTF) is used to compensate for headphones bypassing the shape of our ears. Apple even added head tracking to AirPods so these audio sources stay where they are when you move your head.

The problem with using binaural microphones or spatial audio with a generic HRTF is that your ears are unique. By letting your iPhone's TrueDepth camera scan your head and ears, Apple can create a personalized HRTF just for you. Spatial audio will simply be more accurate than before, and you'll be able to hear sound closer to how it would be if you had speakers playing all around you.

My favorite way of testing that this works is by enabling head tracked spatialized stereo and turning my head around to see if I can track where the sound is coming from in 3D.
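The direction cues described above can be sketched in a few lines. This is a toy model, not Apple's implementation: a real HRTF applies a measured, frequency-dependent filter per ear, while this sketch only computes the two broadband cues an HRTF generalizes (interaural time and level difference), plus the head-tracking step of re-rendering a source at its room position minus the current head yaw. The constants and formulas (Woodworth's spherical-head approximation, constant-power panning) are standard textbook approximations, not anything from this thread.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
HEAD_RADIUS = 0.0875     # m, rough average human head radius

def itd_seconds(azimuth_deg: float) -> float:
    """Interaural time difference for a source at the given azimuth
    (0 = straight ahead, +90 = hard right), using Woodworth's
    spherical-head approximation."""
    az = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))

def ild_gains(azimuth_deg: float) -> tuple[float, float]:
    """Very crude interaural level difference via constant-power
    panning. A real HRTF applies a different frequency response per
    ear instead of a single broadband gain."""
    pan = math.sin(math.radians(azimuth_deg))   # -1 left .. +1 right
    angle = (pan + 1.0) * math.pi / 4.0         # 0 .. pi/2
    return math.cos(angle), math.sin(angle)     # (left, right)

def relative_azimuth(source_az_deg: float, head_yaw_deg: float) -> float:
    """Head tracking: keep the source fixed in the room by rendering
    it at its world azimuth minus the current head yaw, wrapped to
    (-180, 180]."""
    return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
```

For example, a source straight ahead gives zero ITD and equal ear gains; turn your head 90 degrees to the right and `relative_azimuth(0, 90)` becomes -90, so the renderer moves the source to your left, which is exactly the "sound stays put while I turn my head" test described above.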
 
I feel like I'm going deaf, or I just have no "ear" for it, lol, or I expect too much from all the marketing "wows".

I own AirPods Pro and the Sony WF-1000XM4. I also have a TIDAL HiFi Plus subscription, and I've tried "Atmos" and "360 Reality Audio", and I don't hear any difference compared to basic "Spotify" quality. It certainly does not sound like the sound is coming from everywhere, or like "standing at a concert", as people describe it online.
 
I feel like I am getting deaf or I just have no "ear" for it… 
You're not alone. I'm kind of tone deaf or not receptive to better technology in sound. 😆
 
That's not true. Soundstage in over-ear headphones varies drastically between brands/models…
So does this (scanning your ears) have an impact on music only if you’re using spatial audio? It’d be nice if it improved the soundstage of music on my AirPods Max.
 
I logged in to write a reply before noticing @GubbyMan pretty much covered everything - really nice post.

One thing worth knowing if you're in the Apple Music ecosystem is that these algorithms are not static and have changed a few times - the HRTF stuff, I imagine, will also iterate a bit. If you wrote off Spatial Audio a year or more ago but still have access to it, I suggest switching it on now and then (or using Atmos on a Mac if you have the hardware for it).

Apple's renderer has gotten way, way better than it was at launch. I even like using "Spatialize Stereo" for podcasts now, which is not something I ever expected when I first heard it butcher music ~15 months ago. Turning off that head-tracking nonsense is important though, unless you're seated watching a movie, IMO.

Producers are also getting their hands around mixing for Atmos, and there is a lot more subtlety. I think Apple will get to a point where Spatial Audio is almost always "as good or better" than stereo, for headphone listening at least, based on the improvements I've seen over the last year. I just wish the classical-music app would launch, because sitting in front of a real orchestra is magical and not something most people get the chance to experience - it'll be a great use case for this technology if they pull it off correctly.
 
I had problems setting it up for my right ear until I found (on the support site) that I should hold the phone at about 45 degrees to my front and turn my head to the left. It was the opposite for my left ear. I had been holding the phone at 90 degrees, trying to scan my ear directly, and that didn't work. It says it's now set up, and I'll try it later.
 
I have an iPhone X (patiently waiting for my 14 Pro Max delivery today), and it took several minutes to scan my ears.

The first two times I tried, it took more than a minute just to complete the processing, and the Settings app shut down abruptly. I had to restart the phone before it somehow worked. The hardest part was scanning my ears, because I had to hold the phone steady for at least 10 seconds at each of the three angles for it to succeed. Impossible to do without a mirror.

I've seen the process happen on an iPhone 12, and it was at least 10 times faster.
 
My only major gripe with using Spatial Audio in Apple Music is the major discrepancy in volume between Atmos and non-Atmos tracks.

It forces me to use Sound Check, which has the consequence of making my bass-heavy tracks drastically less impactful due to the normalization effect.

Otherwise, there's a noticeable difference in the way the audio sounds after doing the ear scan. Atmos tracks themselves have come a long way since their debut; it's just the volume level that bugs me to this day.
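The loudness-normalization trade-off described above is easy to see in a sketch. To be clear, this is an illustration of replay-gain-style normalization in general, not Apple's actual Sound Check algorithm: the -16 dB target and the simple RMS loudness measure here are assumptions for the example, and Apple's real measurement differs. The point it shows is that a loud Atmos master gets turned down toward the target, which is exactly what eats the headroom-driven impact of bass-heavy tracks.

```python
import math

TARGET_DB = -16.0  # assumed reference loudness target (illustrative only)

def rms_dbfs(samples: list[float]) -> float:
    """RMS level of a block of samples, in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms)

def normalization_gain_db(track_loudness_db: float) -> float:
    """Gain needed to bring a track to the target loudness: loud
    masters get turned down, quiet ones turned up."""
    return TARGET_DB - track_loudness_db

def apply_gain(samples: list[float], gain_db: float) -> list[float]:
    """Scale samples by a gain expressed in dB."""
    g = 10.0 ** (gain_db / 20.0)
    return [s * g for s in samples]
```

A track measuring -10 dB would get a -6 dB correction, so every peak in it, bass hits included, comes out at roughly half its original amplitude.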
 