You’re wrong. Latency isn’t just a wireless problem, and it’s not true that wireless can’t be low latency. Sound itself takes roughly 1ms to travel a foot through air, and modern wireless systems already achieve under 10ms.
Right, the best wireless systems for musicians right now are around 2.4ms, which is low enough. I think Apple’s engineers can achieve this with some proprietary work.
As you said, standing a few feet from the speaker will introduce latency. Personally, when playing guitar I need my entire signal chain to stay under 12ms, ideally under 8ms.
I can notice the difference between 1-2ms and 4-5ms when using headphones. It’s kind of surreal because you never hear sound represented that way, and it feels like it arrives before you’re done striking the note. A rare few standalone guitar amp sims, run through a high-quality 192kHz interface, can get latency down around 1.6ms. Mixwave Benson can do this, and I recommend any guitarist with the equipment try it out sometime because it’s wild.
I don’t suggest getting used to it because everything will probably start to feel slow if you start playing that way a lot, but it’s nice for tracking very fast parts.
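If you want to sanity-check those numbers yourself, here’s a minimal back-of-the-envelope sketch in Python. The 64-frame buffer and the ~0.5ms of converter delay are illustrative assumptions, not measurements of any particular interface:

```python
# Rough latency budget for a guitar monitoring chain.
# Buffer size and converter delay below are illustrative assumptions.

SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20C
FEET_TO_M = 0.3048

def acoustic_latency_ms(distance_ft: float) -> float:
    """Time for sound to travel from a speaker to your ears."""
    return (distance_ft * FEET_TO_M) / SPEED_OF_SOUND_M_S * 1000.0

def buffer_latency_ms(frames: int, sample_rate_hz: int) -> float:
    """One buffer's worth of audio at a given sample rate."""
    return frames / sample_rate_hz * 1000.0

# ~0.89ms per foot -- the "1ms per foot" rule of thumb.
print(f"1 ft of air: {acoustic_latency_ms(1):.2f} ms")

# Why high sample rates help: a fixed-size buffer shrinks in time
# as the rate goes up.
for rate in (44_100, 96_000, 192_000):
    print(f"64 frames @ {rate} Hz: {buffer_latency_ms(64, rate):.2f} ms")

# Toy round trip: input buffer + output buffer + an assumed ~0.5ms of
# AD/DA converter delay. With small buffers at 192kHz this lands in
# the 1-2ms range, consistent with the 1.6ms figure above.
total = 2 * buffer_latency_ms(64, 192_000) + 0.5
print(f"Round-trip estimate: {total:.2f} ms")
```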
…
The “lossless sucks” arguments are hilarious; nobody should comment on them unless they’ve been to an audiologist or they’re in their early teens. Most people do not protect their hearing and have a substantial amount of loss, and even then it depends on the entire audio chain reproducing the signal faithfully, and on the source material. I did blind ABX tests of high-res audio and can hear it, but it took a hell of an audio system and I was using studio monitors that cost more than most Macs. It’s marginal at best.
The difference between lossy and lossless, though, is very noticeable, likely even on the AirPods Max, which punch above their weight in the closed-back class of headphones, particularly once you measure your ears and apply a personalized HRTF. If you can’t hear it, enjoy the smaller file sizes and bandwidth savings. It’s mostly a big deal for the artists, producers, and engineers who need to do this critical work for a living, and then sadly have that work translate to an iPhone speaker in many cases.
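To put the file size and bandwidth savings in rough numbers, a quick sketch: 256kbps is Apple’s documented AAC tier, while the lossless figures are ballpark averages I’m assuming here, since ALAC is variable-rate and depends heavily on the material:

```python
# Rough file-size comparison for a 4-minute track at typical bitrates.
# The lossless rates are ballpark averages, not exact figures.

TRACK_SECONDS = 4 * 60

tiers_kbps = {
    "AAC 256 (lossy)": 256,
    "ALAC 16/44.1 (~avg)": 1000,
    "ALAC 24/192 hi-res (~avg)": 5000,
}

for name, kbps in tiers_kbps.items():
    megabytes = kbps * 1000 / 8 * TRACK_SECONDS / 1_000_000
    print(f"{name}: ~{megabytes:.0f} MB")
```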
This is why using Apple’s renderer to check mixes is a big deal: you want to test exactly what you’re getting out. I had discussions with real Grammy-winning pros about this back when Spatial Audio launched, when Apple was really screwing things up with their renderer by applying a second pass that was wildly different from Dolby’s. Thankfully they moved quickly and corrected it, but I had to point them in the right direction.
That first batch of Spatial Audio mixes, done mostly through slapdash automation, was… yikes. But now it’s really good for the most part with head tracking disabled, and you get a ton more headroom with Atmos/Spatial mixes vs. Stereo, which probably makes the lossless vs. lossy question even more important: there’s more information physically present, and lossy compression would eat into the nice headroom the Dolby spec provides.
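To put that headroom claim in rough numbers, a minimal sketch: I’m assuming Dolby’s Atmos music delivery spec (integrated loudness capped around -18 LUFS with a -1 dBTP true-peak ceiling) against a typically loud modern stereo master around -9 LUFS; the stereo figure is a ballpark that varies by genre and era:

```python
# Headroom comparison: distance between the true-peak ceiling and the
# integrated loudness level. Both loudness figures are ballpark values.

def headroom_db(integrated_lufs: float, ceiling_dbtp: float = -1.0) -> float:
    """dB between the peak ceiling and the average (integrated) level."""
    return ceiling_dbtp - integrated_lufs

print(f"Loud stereo master (-9 LUFS):  ~{headroom_db(-9.0):.0f} dB of headroom")
print(f"Dolby Atmos spec  (-18 LUFS):  ~{headroom_db(-18.0):.0f} dB of headroom")
# Under these assumptions the Atmos spec leaves roughly twice as much
# room above the average level, which is the extra headroom mentioned above.
```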
Don’t even get me started on Binaural… there’s a real complaint you can make about Apple here, since you can only upload 2 masters to iTunes Connect instead of the 3 that should be the standard (Stereo, Binaural, Atmos). Instead we have to settle for folddowns, which is a ****** way to force Apple users into Spatial Audio when Binaural may be more engaging depending on the content. Nothing’s perfect.