The Sony XM4 something or other over-ear AND in-ear support Sony’s proprietary LDAC hi-res lossless format, but sadly only Android devices currently support LDAC.

AFAIK LDAC is not lossless even at CD-DA quality, let alone "hi-res" (regardless of whether that's actually useful).

LDAC's highest bandwidth setting is 990kbps, which is significantly less than what CD-DA quality would require (16bit * 44.1kHz * 2 channels = 1411.2kbps).
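
For anyone who wants to sanity-check that arithmetic, here's the same calculation as a quick Python sketch (the only inputs are the Red Book CD parameters and LDAC's published 990kbps maximum):

```python
# CD-DA (Red Book) stereo PCM bitrate vs. LDAC's highest bandwidth setting
bit_depth = 16          # bits per sample
sample_rate = 44_100    # samples per second
channels = 2            # stereo

cd_bitrate_kbps = bit_depth * sample_rate * channels / 1000   # 1411.2 kbps
ldac_max_kbps = 990                                           # LDAC top quality mode

print(f"CD-DA bitrate: {cd_bitrate_kbps} kbps")
print(f"LDAC maximum:  {ldac_max_kbps} kbps")
print(f"LDAC would need roughly {ldac_max_kbps / cd_bitrate_kbps:.0%} "
      "compression to carry CD audio losslessly")
```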
 
Did I read correctly on the product page that the new AirPods won't be fully supported (i.e. just regular Bluetooth ear buds) on iPads and Macs until the latest versions of iPadOS and MacOS Ventura come out later? Or did I just misinterpret the system requirements?

I hope not, as I have a late-2015 27” Retina 5K iMac (fully maxed out) which stopped at Monterey. But I don’t really care: even if I do pull the trigger on the 2nd Gen AirPods Pro (which is doubtful, since I have the 1st Gen ones, which I got in an Amazon flash sale for $169 including AppleCare+), I only use them with my iPhone 13 Pro Max 1TB (soon to be the 14 Pro Max 1TB on 9/16) and my Apple TV 4K 2nd Gen.

Though I wish they had 2TB iPhones just like they do on newer iPad Pros.

What’s funny, though, is that my iPhone Upgrade Program monthly payment (I use the program to upgrade yearly), even with “AppleCare+ with Theft & Loss” (though I’ve never had a phone stolen or lost, or even damaged one since cracking the front glass on a 7 Plus when it fell a mere 1/2” onto tile, landing on its back), went down by $2.92 per month: the 13 Pro Max 1TB was $77.83/month and the 14 Pro Max 1TB is $74.91/month. Explain that, Apple, though I’m perfectly content with it.

And the AppleCare+ coverage, with or without Theft & Loss, is different from last year’s:
This year: unlimited glass-damage incidents for a $29 deductible, any other damage for a $99 deductible, and two incidents per year of theft or loss for a $149 deductible.
Last year: two incidents of damage per year, to glass or internals, for a $25 deductible, and theft or loss had a different deductible depending on which phone you had; in my case the 13 Pro Max 1TB had a $340 deductible.

Which year’s AppleCare+ do you prefer, 2021 or 2022? (I haven’t decided yet.)
 
AFAIK LDAC is not lossless even at CD-DA quality, let alone "hi-res" (regardless of whether that's actually useful).

LDAC's highest bandwidth setting is 990kbps, which is significantly less than what CD-DA quality would require (16bit * 44.1kHz * 2 channels = 1411.2kbps).
I’ve compressed some CD tracks into 450kbps ALAC, while others require over 990kbps. Sony has some unique compression algorithm that sends portions of the signal at a time and reassembles them on the other end, using a buffered, slightly delayed signal that you aren’t hearing in real time.
 
1) Let's not group all lossy into the same bin. I, for one, can usually tell the difference between 128kbps and lossless. When it gets to 256 or 320, I start to lose the ability to distinguish at all.
2) People clamoring for lossless are within their rights to do so, but with many technologies Apple has a proven philosophy of not chasing diminishing returns. Their screen pixel density has remained relatively constant since the iPhone 4. People screamed for increased ppi to compete with Samsung for years, but that noise has died down as people came around to the fact that going higher than about 330ppi for LCD and 450ppi for OLED (due to pixel structure) is almost never noticeable.
When LCD iPhones at 326ppi went to OLED at 460ppi, I was able to see a huge difference in detail (and can still see individual pixels at times, particularly in solid grayscale colors).
 
You mentioned you performed some tests and that you were scored on accuracy in those tests. If they are scientifically validated tests, they should have a published methodology that details exactly what they are testing, how, why the results are relevant, and how to assess the statistical significance of the results.

I'm asking if you have a reference to said methodology's documentation since I'm interested in it.
You use a 20-band EQ on the MP3 file to correct for the difference you hear: you adjust which frequencies, and by how many decibels per band, you think are missing in that MP3 relative to the lossless audio file. It then scores your accuracy by comparing the resulting waveforms.
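
I don't know how that test actually scores people, but a rough illustration of that kind of band-by-band comparison might look like the Python sketch below (band layout, scoring formula, and function names are my own assumptions, not the real test's):

```python
import numpy as np

def band_levels_db(signal, sample_rate, n_bands=20):
    """Average level (in dB) of a signal within 20 log-spaced frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    edges = np.logspace(np.log10(20), np.log10(20_000), n_bands + 1)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        power = band.mean() if band.size else 0.0
        levels.append(10 * np.log10(power + 1e-12))   # avoid log(0)
    return np.array(levels)

def score_eq_attempt(lossless, eqd_mp3, sample_rate):
    """Score how closely the EQ'd MP3 matches the lossless track, band by band."""
    diff = band_levels_db(lossless, sample_rate) - band_levels_db(eqd_mp3, sample_rate)
    return -np.mean(np.abs(diff))   # 0 = perfect match; more negative = worse
```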
 
I hope not, as I have a late-2015 27” Retina 5K iMac (fully maxed out) which stopped at Monterey. But I don’t really care: even if I do pull the trigger on the 2nd Gen AirPods Pro (which is doubtful, since I have the 1st Gen ones, which I got in an Amazon flash sale for $169 including AppleCare+), I only use them with my iPhone 13 Pro Max 1TB (soon to be the 14 Pro Max 1TB on 9/16) and my Apple TV 4K 2nd Gen.

Though I wish they had 2TB iPhones just like they do on newer iPad Pros.

What’s funny, though, is that my iPhone Upgrade Program monthly payment (I use the program to upgrade yearly), even with “AppleCare+ with Theft & Loss” (though I’ve never had a phone stolen or lost, or even damaged one since cracking the front glass on a 7 Plus when it fell a mere 1/2” onto tile, landing on its back), went down by $2.92 per month: the 13 Pro Max 1TB was $77.83/month and the 14 Pro Max 1TB is $74.91/month. Explain that, Apple, though I’m perfectly content with it.

And the AppleCare+ coverage, with or without Theft & Loss, is different from last year’s:
This year: unlimited glass-damage incidents for a $29 deductible, any other damage for a $99 deductible, and two incidents per year of theft or loss for a $149 deductible.
Last year: two incidents of damage per year, to glass or internals, for a $25 deductible, and theft or loss had a different deductible depending on which phone you had; in my case the 13 Pro Max 1TB had a $340 deductible.

Which year’s AppleCare+ do you prefer, 2021 or 2022? (I haven’t decided yet.)
I bet I just misinterpreted it and it just means the newer functionality like the personalized spatial audio won't be available until the new software updates are released.
 
I’ve compressed some CD tracks into 450kbps ALAC, while others require over 990kbps. Sony has some unique compression algorithm that sends portions of the signal at a time and reassembles them on the other end, using a buffered, slightly delayed signal that you aren’t hearing in real time.
I believe in newer Bluetooth revisions, each earphone can actually receive a separate audio stream. So if the 1411kbps uncompressed 16/44.1 stereo PCM is sent as two separate channels, that would halve the bitrate needed to broadcast to each earphone. And lossless compression would lower the bitrate even further. Lossless Bluetooth at 16/44.1 isn’t out of the realm of possibility; it’s more a matter of who will implement it.
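
Rough numbers for that idea, as a quick Python sketch (the 50-70% lossless compression ratios below are just typical FLAC/ALAC ballpark figures, not anything specific to Bluetooth):

```python
# Per-earbud bitrate if each bud receives only its own channel
stereo_pcm_kbps = 16 * 44_100 * 2 / 1000      # 1411.2 kbps for 16/44.1 stereo
per_channel_kbps = stereo_pcm_kbps / 2        # 705.6 kbps per earbud, uncompressed

# Lossless codecs often land somewhere around half to two thirds of PCM size,
# although dense tracks can compress far less well.
for ratio in (0.5, 0.6, 0.7):
    print(f"{ratio:.0%} of PCM -> {per_channel_kbps * ratio:.0f} kbps per earbud")
```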
 
When LCD iPhones at 326ppi went to OLED at 460ppi, I was able to see a huge difference in detail (and can still see individual pixels at times, particularly in solid grayscale colors).
I can also see a difference, but only when I hold my phone closer than I normally use it. That’s how they came up with 326ppi in the first place: the “Retina” threshold is a simple distance/ppi calculation for people with normal vision. Also, OLED ppi numbers are super misleading due to the diamond subpixel structure. Only green is actually 460; red and blue are still about 326 (460 divided by the square root of 2). In other words, a 326ppi OLED would look far more pixelated than a 326ppi LCD.
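
If anyone wants to check those figures, here's the arithmetic as a small Python sketch (it assumes the diamond-PenTile layout described above, and uses the commonly cited one-arcminute visual acuity rule for the "Retina" distance calculation):

```python
import math

# Diamond-PenTile OLED: red and blue subpixels sit on a grid rotated 45 degrees,
# so their linear density is the quoted panel ppi divided by sqrt(2).
panel_ppi = 460
red_blue_ppi = panel_ppi / math.sqrt(2)       # ~325 ppi, close to the old 326ppi LCD
print(f"Red/blue subpixel density: {red_blue_ppi:.0f} ppi")

# "Retina" check: a pixel stops being resolvable once it subtends less than
# roughly 1 arcminute, so the required ppi depends on viewing distance.
def ppi_needed(viewing_distance_inches):
    return 1 / (viewing_distance_inches * math.tan(math.radians(1 / 60)))

print(f"ppi needed at 12 inches: {ppi_needed(12):.0f}")   # ~286
print(f"ppi needed at 10 inches: {ppi_needed(10):.0f}")   # ~344
```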
 
If it's not pre-upsampled to a rate compatible with the DAC before feeding into it, sure, but that's not a shortcoming in the input, it's a failure in properly converting the input for playback.

The input signal at 44.1kHz is perfectly fine, as long as it's converted properly.

You don't have to pre-upsample at the source before you feed it to the DAC. You can literally feed 16-bit/44.1kHz to a NOS DAC and it will convert it as-is, with no oversampling happening before conversion. If you upsample 44.1kHz to 88.2kHz using HQPlayer and then feed that to the NOS DAC, you will hear a difference between the two formats. On a delta-sigma DAC it's harder to notice the difference, because all samples are bit-crushed to 5-6 bits at 16fs during conversion.
 
I’ve compressed some CD tracks into 450kbps ALAC, while others require over 990kbps. Sony has some unique compression algorithm that sends portions of the signal at a time and reassembles them on the other end, using a buffered, slightly delayed signal that you aren’t hearing in real time.

From what I could understand from the various articles I found, it uses lossless compression if possible but falls back to lossy if not. That makes sense, because a lossless compression algorithm cannot guarantee the original will always be compressed to 70% of its original size, which is what would be needed to fit within 990kbps.

I found no reference of the special buffering you mention, but I also was unable to find the technical specification.
 
You use a 20-band EQ on the MP3 file to correct for the difference you hear: you adjust which frequencies, and by how many decibels per band, you think are missing in that MP3 relative to the lossless audio file. It then scores your accuracy by comparing the resulting waveforms.

That's not nearly enough to properly specify a scientific test. What was the null hypothesis? Why is the test supposed to test it? How are the results evaluated? How does the test achieve statistical significance?

As stated, a proper scientific test should have documentation detailing all that vital information so that it's possible to properly understand how it works. If it's a scientific test, it should have been devised or used in some scientific paper, which must include such information.
 
You don't have to pre-upsample at the source before you feed it to the DAC. You can literally feed 16-bit/44.1kHz to a NOS DAC and it will convert it as-is, with no oversampling happening before conversion. If you upsample 44.1kHz to 88.2kHz using HQPlayer and then feed that to the NOS DAC, you will hear a difference between the two formats. On a delta-sigma DAC it's harder to notice the difference, because all samples are bit-crushed to 5-6 bits at 16fs during conversion.

My understanding is different, but to be honest I have a very bad opinion of NOS DACs and I'd rather avoid opening that can of worms.
 
I can also see a difference, but only when I hold my phone closer than I normally use it. That’s how they came up with 326ppi in the first place: the “Retina” threshold is a simple distance/ppi calculation for people with normal vision. Also, OLED ppi numbers are super misleading due to the diamond subpixel structure. Only green is actually 460; red and blue are still about 326 (460 divided by the square root of 2). In other words, a 326ppi OLED would look far more pixelated than a 326ppi LCD.
That would make sense, as grayscale colors look like actual printed halftones. My eyes have gotten blurrier over the years, but I can still see the difference. Now I know why it has a halftone-shaped pattern. Thx for the info, if accurate. I haven’t read about the density of each subpixel being different, but since they differ in size on TVs I would assume the same for phones.
 
I bet I just misinterpreted it and it just means the newer functionality like the personalized spatial audio won't be available until the new software updates are released.
I’m not sure. The 1st Gen AirPods Pro have personalized Spatial audio. It wanted a picture of my ear. (The purpose of that is to customize its sound to your HRTF.)
 
I’m not sure. The 1st Gen AirPods Pro have personalized Spatial audio. It wanted a picture of my ear. (The purpose of that is to customize its sound to your HRTF.)
I forgot the personalized spatial audio was part of iOS, not the firmware. Getting the pictures of my ears was a real pain.
 
You don't need lossless audio in £170 in-ear buds - you can't hear the difference. You can't hear the difference on a £10,000 mastering studio setup with golden ears; it's been proven time and time again by the world's best ears. No one here has the world's best ears, despite what placebo tells them - the upgrades to the drivers and processing are much, much more important and impactful.

If they tell you otherwise, they're wrong.

Edit - click disagree all you want, you're wrong - scientifically proven to be wrong, no ifs, no buts, no opinions, you're wrong, end of.
As with all blanket statements this is totally inaccurate. What are you comparing to? AAC? You can absolutely hear the difference on 5” Behringer crap let alone a $10K mastering suite designed for such a purpose. Provide evidence for your ridiculous claims, because otherwise you’re just pissing in the wind and looking like a pompous jackass. I can find just as many articles and YouTube videos that prove it as I can disprove it. That is not support of your claim, at best it proves that everyone’s ears are different. Go figure…
 
As with all blanket statements this is totally inaccurate. What are you comparing to? AAC? You can absolutely hear the difference on 5” Behringer crap let alone a $10K mastering suite designed for such a purpose. Provide evidence for your ridiculous claims, because otherwise you’re just pissing in the wind and looking like a pompous jackass. I can find just as many articles and YouTube videos that prove it as I can disprove it. That is not support of your claim, at best it proves that everyone’s ears are different. Go figure…
Nope, you're wrong - it can't be heard, and you'll fail every test to prove it. Like everyone in this thread has, including the guy who tried to doctor his results.
 
Thank you for replying for me to Mr Right. As I said in another thread: "Simple A/B testing done on my Sennheiser HD450BT in wired and wireless modes using .WAV and AAC256 files demonstrates the resolution differences" with the source material that I have been using to calibrate systems for 20 years internationally. Whether Mr Right or anybody else believes me or not is irrelevant and unimportant.

*edit typo
So you’ve been doing something half assed for 20 years. You wouldn’t be the first dinosaur.
 
Nope, you're wrong - it can't be heard, and you'll fail every test to prove it. Like everyone in this thread has, including the guy who tried to doctor his results.
I’m not wrong. You still haven’t defined what it is you’re comparing to… Furthermore, I didn’t make a claim about me personally. I said I can find as many articles and videos to support my claim as I can to counter it. The only conclusion a reasonable observer could draw is that people’s ears hear differently. And would that truth surprise you…? I’ve seen this test demonstrated. As has been said before, whether you believe it or not is irrelevant. The facts are immune to your hair splitting. Anyone with a phone can Google it and see for themselves, while you continue to argue your point in this bubble.
 
I’m not wrong. You still haven’t defined what it is you’re comparing to… Furthermore, I didn’t make a claim about me personally. I said I can find as many articles and videos to support my claim as I can to counter it. The only conclusion a reasonable observer could draw is that people’s ears hear differently. And would that truth surprise you…? I’ve seen this test demonstrated. As has been said before, whether you believe it or not is irrelevant. The facts are immune to your hair splitting. Anyone with a phone can Google it and see for themselves, while you continue to argue your point in this bubble.

You're welcome to prove you can hear the difference with ANY equipment you want to listen to.

Here you go - http://abx.digitalfeed.net/
 
You're welcome to prove you can hear the difference with ANY equipment you want to listen to.

Here you go - http://abx.digitalfeed.net/
I think that test greatly depends on how well you know the tracks.

I got 100% on Daft Punk twice in a row (beefier bass end, more high-end distortion), and 80% the third time with tired ears. I know the track well, and what to listen for. My equipment is not fancy: MDR-V6s and an el-cheapo Behringer 302 mixer.

That does not mean there are audible differences in the other tracks; I just haven't heard them thousands of times, so I can't make those tiny distinctions (the music selection is rather limited).

It also does not mention the source of the lossless comparison: are the files simply ripped from CD at 16/44.1, or are they hi-res lossless masters?
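
On the earlier question about statistical significance: the standard sanity check for an ABX run is a one-sided binomial test against pure guessing, something like this Python sketch (the trial counts are just examples; I don't know how many trials that site uses per run):

```python
from math import comb

def abx_p_value(correct, trials):
    """Chance of getting at least `correct` answers right by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"10/10 correct: p = {abx_p_value(10, 10):.4f}")   # ~0.001, hard to do by luck
print(f" 8/10 correct: p = {abx_p_value(8, 10):.4f}")    # ~0.055, still plausible luck
```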
 
It also does not mention the source of the lossless comparison: are the files simply ripped from CD at 16/44.1, or are they hi-res lossless masters?

Well, you can't hear the difference between 16/44.1 and anything "higher definition" either - another scam of the audio world. But they're all from the original 16-bit/44.1kHz source: AAC 256kbps vs FLAC.

On another note, I do slightly envy anyone who can listen to the same track thousands of times.
 
As with all blanket statements this is totally inaccurate. What are you comparing to? AAC? You can absolutely hear the difference on 5” Behringer crap let alone a $10K mastering suite designed for such a purpose. Provide evidence for your ridiculous claims, because otherwise you’re just pissing in the wind and looking like a pompous jackass. I can find just as many articles and YouTube videos that prove it as I can disprove it. That is not support of your claim, at best it proves that everyone’s ears are different. Go figure…
Actually, you have the burden of proof in this case.
 