I’m going to preface this by admitting that I’ve never owned (or even worn) a pair of AirPods Max. I know that sort of invalidates my argument, but I want to lay out my reasoning.
Before I do, I think it’s safe to assume the new models dropped support for wired audio input. There is no USB-C dongle/ADC, and Apple is clear in the marketing material that the USB-C port is for power only. Contrast that with the Beats Studio Pro (which I had for a bit): Apple accurately markets those as having a built-in DAC, and when plugged in, the Beats show up as a class-compliant audio interface at 48kHz (more on that later).
The first sign with the original AirPods Max is that the 3.5mm adapter is an ADC. That would be, at a guess, a 512-sample buffer at 48kHz, which is probably fine for normal listening. The alternative is the Lightning input, but this is where I veer off into speculation. I believe the AirPods Max compress the sample buffer to AAC in real time in the right earcup and beam that data over to the left earcup. I think this because:
A) The teardown photos lead me to believe that the headband only transfers power between the earcups.
B) Apple never marketed the AirPods Max as lossless (like they do with the Beats Studio Pro).
C) It makes sense from an engineering standpoint to do the wireless transfer / synchronization between earcups since the other AirPods already do that.
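To put that ADC buffer guess in perspective, here’s the quick math (the 512-sample figure is my assumption, not a published spec):

```python
# Rough ADC buffer latency estimate. Both numbers are my guesses, not Apple specs.
SAMPLE_RATE_HZ = 48_000
BUFFER_SAMPLES = 512

latency_ms = BUFFER_SAMPLES / SAMPLE_RATE_HZ * 1000
print(f"One buffer of latency: {latency_ms:.2f} ms")  # ~10.67 ms
```

Around 10 ms is unnoticeable for listening, but it starts to matter if you’re monitoring while recording.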
I returned the Beats Studio Pro because you can’t do head-tracked Spatial Audio over the wired connection, and that was why I got them. Plus I felt the earcups were too small. I firmly believe head-tracked Spatial Audio is the future, and I’d been hoping for years that this iteration would finally be a way to produce music natively in the format. Specifically, I was hopeful these would support the low-latency lossless protocol from the Vision Pro / USB-C AirPods Pro, which I enjoy. I’ve tested both Lightning and USB-C AirPods in that setup and can confirm the new protocol is very fast. From what I’ve gathered, the audio isn’t compressed or packetized - just a scrambled bitstream at a peculiar 20-bit / 48kHz. (I suspect it’s a 24-bit signal with 4 bits used for error correction or positional data.) It’s very possible this protocol is extremely short-range and therefore exclusive to Vision Pro. I’ve even tried taking the Vision Pro off while using the AirPods to see if the signal would break, but it’s hard to do because you have to cover the lenses to keep it from turning off. It would be way easier to just have someone else wear the AirPods and walk away.
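For scale, here’s the raw payload math on that rumored 20-bit / 48kHz stream (the bit depths are speculation on my part, as above):

```python
# Back-of-the-envelope PCM bitrates for the speculative Vision Pro audio stream.
def pcm_bitrate_kbps(bit_depth, sample_rate_hz, channels=2):
    """Raw PCM payload rate in kbps, ignoring framing/error-correction overhead."""
    return bit_depth * sample_rate_hz * channels / 1000

print(pcm_bitrate_kbps(20, 48_000))  # 1920.0 kbps if only 20 bits carry audio
print(pcm_bitrate_kbps(24, 48_000))  # 2304.0 kbps if all 24 bits carry audio
```

Either way it’s roughly an order of magnitude more than 256 kbps AAC, which fits the idea of an uncompressed bitstream rather than a codec.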
Sorry for the rant. In writing it, I’m realizing I’m taking this stuff way too seriously. I just think today showed that producing music in head-tracked Spatial / Atmos is farther away than I’d hoped. Maybe you’ve tried that Waves Nx clip. I might go that route in the interim.
Thanks for the reply. I’ll take this further because I think it may be interesting for you and anyone else who happens to read it; the first part will be about Spatial, then I’ll circle back to the cable.
I was tangentially involved in getting Apple Music’s Spatial Audio algorithm fixed immediately after launch, working back and forth with a producer involved with Taylor, so I’m also …enthusiastic… about the potential, to say the least. I didn’t get paid for this; I just care. A lot.
Apple initially used their own interpolation (metaphorically speaking) of Atmos when Apple Music Spatial Audio launched, but this was either curbed heavily or rolled back entirely - I’m not sure which, because it’s still somewhat of a black box. They’re certainly doing something with the HRTF ear measurements, etc. Now when you mix in Atmos or Spatial Audio you can get predictable results, which was not initially the case. Many producers and artists took a checkbox approach and either farmed out their conversion or had automated tools do it, which is why some of the first playlists sounded like absolute garbage.
There’s an ongoing problem with Apple Music, though: you can only submit two versions of a track to Apple via Connect, which isn’t enough. Even though Atmos folds down into Stereo well, it isn’t quite as good as a dedicated Stereo mix for, e.g., component systems, some vehicles, etc. - especially because loudness levels are not normalized and there is significantly more headroom in Atmos vs. Stereo, so the LUFS don’t match. If you don’t have the “Sound Check” option enabled to level this out, it’s incredibly jarring: a quiet Atmos track will swap to a regular Stereo one next in the playlist and blow your ears apart.
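To illustrate the mismatch, here’s a toy calculation with made-up LUFS numbers (the -16 LUFS figure is the commonly cited Sound Check target, not something I’ve verified with Apple):

```python
# Illustrative only: the gain Sound Check would need to apply to bring two
# differently-loud masters to the same target. All numbers are hypothetical.
TARGET_LUFS = -16.0  # commonly cited Apple Music normalization target

def gain_to_target_db(measured_lufs, target=TARGET_LUFS):
    """dB offset needed to move a track from its measured loudness to the target."""
    return target - measured_lufs

atmos_fold_down = -18.0  # hypothetical quiet Atmos-derived mix
stereo_master = -9.0     # hypothetical loud stereo master
print(gain_to_target_db(atmos_fold_down))  # 2.0  (boost)
print(gain_to_target_db(stereo_master))    # -7.0 (cut)
```

With Sound Check off, that hypothetical 9 dB gap between consecutive tracks is exactly the “blow your ears apart” scenario.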
Since Apple only allows two mixes for submission, you’re forced to submit Spatial / Atmos and Stereo. But there is a third option, Binaural, that would benefit all headphone users. I’m not sure if Apple is purposefully limiting the options to push adoption of Spatial Audio or if they just don’t want the confusion of a third choice, but it should be there. I try to make this known whenever I can because producers want it, musicians want it, and the public would want it if they knew it existed. Binaural can arguably provide better imaging in some instances vs. Spatial Audio, too, but we aren’t given that choice in this ecosystem because artists and labels cannot submit those tracks. As a result, almost nothing is mixed in Binaural, even though it’s one of the distinct formats Dolby supports in their Atmos tooling.
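For anyone curious what makes Binaural distinct, here’s a deliberately toy sketch of the core idea. Real binaural mixes convolve each source with measured HRIRs; this just fakes the interaural time and level differences (ITD/ILD) that make a sound seem to come from one side:

```python
# Toy binaural "panning" of a mono source toward the right ear.
# Not a real renderer - just the ITD/ILD intuition, with made-up values.
def binauralize(mono, itd_samples=30, ild_gain=0.6):
    """Return (left, right): the left channel arrives later and quieter."""
    right = list(mono) + [0.0] * itd_samples          # pad to equal length
    left = [0.0] * itd_samples + [s * ild_gain for s in mono]
    return left, right

left, right = binauralize([1.0, 0.5, -0.5])
print(len(left) == len(right))  # True
```

A proper HRIR convolution additionally encodes the pinna and head filtering that gives front/back and elevation cues, which is why it can image better than a generic fold-down.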
=====
Anyway, about the wired Lightning connector: yes, there is an ADC involved, which can introduce latency, but in my experience there is virtually none - I’d guess it’s on the order of a couple of milliseconds. With high-speed interconnects you can even downsample with effectively zero latency; one of my pro audio interfaces does this over optical, and I use it to control studio monitors because it has a higher-quality DAC than my main interface.
You may be right about the cable between the earcups - I haven’t seen evidence either way - but there is an H1 chip in each ear that handles the connection and keeps them precisely in sync. It’s possible you got a bad pair or ran into bugs, or simply that the old AirPods Max were better at this than the Beats, but they did support 24-bit / 48kHz audio over the cable. For all intents and purposes, assuming no lossy conversion is happening, that is effectively lossless quality - though not high-res lossless - which is fine, because most music out there isn’t even mastered at higher rates anyway. There are exceptions, and you can tell the difference if you have extremely high-end equipment and excellent ears, but it’s a very small gain for a ton of expense.
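For scale, here are the raw PCM data rates involved (simple arithmetic, not measurements):

```python
# Raw stereo PCM data rates: "lossless" vs "high-res lossless" territory.
def pcm_mbps(bit_depth, sample_rate_hz, channels=2):
    """Uncompressed PCM rate in Mbps for the given format."""
    return bit_depth * sample_rate_hz * channels / 1_000_000

print(pcm_mbps(24, 48_000))   # 2.304 Mbps - 24-bit/48kHz over the cable
print(pcm_mbps(24, 192_000))  # 9.216 Mbps - 24-bit/192kHz high-res
```

The 4x jump to high-res is a lot of extra bandwidth for differences that, as noted above, very few setups and ears can resolve.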
TL;DR: they’re good enough for Stereo despite the multiple conversion steps. There’s a fun article comparing DACs in which Apple’s little 3.5mm dongle that plugs straight into an iPhone beat out DACs costing hundreds - and I think in one case even thousands - of dollars. Here’s one I found with a quick Google: https://www.audioreviews.org/apple-audio-adapter-review/ but there are others that precisely measure the distortion, etc. Apple usually knows what they’re doing with audio, and a lot of ex-B&O engineers work there.
I 100% agree that the H2 chip should have gone in the AirPods Max, and I was planning to upgrade to get it. Since they didn’t do that, and don’t mention whether the USB-C version even supports USB-C audio, this seems like it might actually be a downgrade across the board - outside of maybe better battery life, if they removed some of the circuitry that handled the conversion when you used the cable. It’s truly baffling.
Hopefully they’ll ship a software update to support USB-C audio, which may have the bandwidth for Atmos / Spatial Audio and lossless. Since the 3.5mm jack is literally wired for Stereo, there’s no way to use it and still get more than two channels, unfortunately. This isn’t very clear in their literature either, so I don’t blame you for expecting more and not getting what you thought you were.
Hopefully we get a third generation in a year or two that solves all of this. As it is, I don’t know how any producer could travel with the new AirPods Max and use them for critical work - and that could even include podcasting, depending on how sensitive they are to latency. Weird, weird decision.
=====
On topic: this does somewhat relate to USB-C, so I’d ask the mods to keep this up if they see it. I think it’s pertinent and useful information.