Hey, I get it, everyone needs their superstitions.
A widely respected expert in music production claims that the ideal bit depth and sample rate would be around 18-bit/60 kHz. Standard CD is 16-bit/44.1 kHz.


It's all way above my head, but I do know that when they were creating the CD standard, several major companies wanted the sample rate higher than 44.1 kHz, which was one of the lower sample rates considered (and the one ultimately accepted). I believe movies and TV use 48 kHz.
 
TBH I have only subscribed to Tidal to get CD quality so I can feed the Roon player. It doesn't really matter which pipe I use — Tidal, Qobuz, Amazon Unlimited — because the player to beat is Roon. It has an amazing amount of information on music, and a kick-ass algorithm for finding new music and rediscovering my own. Apple should just learn a trick or two from them. And I'm not sure what sorcery it uses, but it always sounds better than the source (Tidal) somehow. And yes, you need properly good speakers and an amp to hear that.
 
Can AirPods even stream at CD quality? Typical Bluetooth sure can’t, and Apple seems very lax about adding aptX to the iPhone’s Bluetooth capabilities. If “HiFi” in this case isn’t at least CD quality then it’s pretty useless.
Indeed, I think you’re correct. I’m not sure Bluetooth supports lossless or CD quality. I believe it needs to be a wired connection at the moment.
 
Indeed, I think you’re correct. I’m not sure Bluetooth supports lossless or CD quality. I believe it needs to be a wired connection at the moment.
Bluetooth 5.0 doubles bandwidth, so lossless compression might now be feasible.
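For a sense of scale, here's a back-of-the-envelope sketch of the bitrates involved. The Bluetooth codec figures in the comments are common ballpark values, not spec guarantees:

```python
# Raw CD-quality PCM bitrate vs. typical Bluetooth audio codec bitrates.
fs = 44_100        # samples per second
bit_depth = 16     # bits per sample
channels = 2       # stereo

cd_bps = fs * bit_depth * channels
print(cd_bps)      # 1411200 bps, i.e. about 1.41 Mbps of raw PCM

# FLAC typically compresses CD audio to roughly 50-70% of the raw rate,
# so even "lossless" needs on the order of 700-1000 kbps sustained.
# SBC/AAC over classic Bluetooth usually run around 250-350 kbps,
# which is why lossless over Bluetooth is fundamentally a bandwidth question.
```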
 
New AirPods featuring AirTag-like functionality built in are bound to be launching in the very near future, so probably together at a combined music-focused event.
 
I hope this also applies to iTunes Match (by allowing you to upload your personal tracks in FLAC or similar) and to albums bought in the iTunes Store (by allowing streaming or downloading of the lossless version).
 
A "high-res" option should be about lossless. High sample rates? That'll just be for marketing hype. Here's why high sample rates don't matter, and aren't needed. (Long video, but a classic - probably the best, easiest-to-follow demo on sample rates & bit depth I've come across over the years.)

 
A "high-res" option should be about lossless. High sample rates? That'll just be for marketing hype. Here's why high sample rates don't matter, and aren't needed. (Long video, but a classic - probably the best, easiest-to-follow demo I've come across over the years.)

Haven’t watched the video, but does it state anywhere that any digital reproduction of an analog signal can be considered lossless? Because that kinda indicates the level of seriousness of the presenter.
 
Haven’t watched the video, but does it state anywhere that any digital reproduction of an analog signal can be considered lossless? Because that kinda indicates the level of seriousness of the presenter.
That's a good video, definitely worth the watch. It's been a few years since I watched it, but I do believe he states that a signal band-limited to half the sampling frequency can be losslessly captured.
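That claim is the Nyquist–Shannon sampling theorem, and it's easy to sketch numerically: sample a tone below half the sample rate, then rebuild the waveform *between* the samples using nothing but the samples themselves. A minimal sketch — the sample rate, tone frequency, and window length here are arbitrary:

```python
import math

FS = 8000.0    # sample rate in Hz (arbitrary for the demo)
F = 1000.0     # tone frequency, well below the Nyquist limit FS/2 = 4000 Hz
N = 256        # number of samples captured

samples = [math.sin(2 * math.pi * F * n / FS) for n in range(N)]

def sinc(x):
    # Normalized sinc, the ideal reconstruction kernel.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    # Whittaker-Shannon interpolation: rebuild the continuous waveform
    # at time t using only the discrete samples.
    return sum(s * sinc(FS * t - n) for n, s in enumerate(samples))

# Evaluate halfway between two samples, far from the window edges:
# the "gap" is recovered almost exactly from the samples alone.
t = 100.5 / FS
error = abs(reconstruct(t) - math.sin(2 * math.pi * F * t))
assert error < 1e-2   # tiny, limited only by the finite 256-sample window
```

The residual error here comes only from truncating the interpolation to a finite window, not from the sampling itself.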
 
That's a good video, definitely worth the watch. It's been a few years since I watched it, but I do believe he states that a signal band-limited to half the sampling frequency can be losslessly captured.
Ah, thanks. The OP indicated that “high sample rates don’t matter” but, apparently, high sample rates are at the core of even considering something lossless.
 
A "high-res" option should be about lossless. High sample rates? That'll just be for marketing hype. Here's why high sample rates don't matter, and aren't needed. (Long video, but a classic - probably the best, easiest-to-follow demo on sample rates & bit depth I've come across over the years.)


This guy, who is far more reputable than YouTube man, disagrees:

http://www.lavryengineering.com/pdfs/lavry-white-paper-the_optimal_sample_rate_for_quality_audio.pdf
 
I wonder why the inventor of a lossy audio codec would discount the need for higher sample rates. 🤔
Well, he gives his code away as open source, so it's not like he has a horse in this race. (Unlike the author of the oft-quoted Lavry white paper, who, well... is in the business of selling high-end converters.)

Look him up on wiki. No question Montgomery knows his stuff. And if you haven't already, watch the video. It's a great science-based demo that, without question, sets the story straight on the realities of digital audio.
 
I wonder why the inventor of a lossy audio codec would discount the need for higher sample rates. 🤔
Does he really discount the need for higher sampling rates? The section where he talks about "do the gaps between points lose information?" seems to indicate that this is not a problem so long as your bandwidth is sufficient (to keep the noise floor low). It seems like the point is not to worry about the small details in between two points (or that you can overlap two sources).

But I feel it doesn't make a point against higher sampling rates.

That being said, Monty of Xiph did make the argument that 16-bit/44.1 kHz is good enough for the majority of listening because the extra information is hardly discernible in blind tests.

I know from experience that when A/B listening to 320 kbps AAC/MP3 against CD it's hard to hear a difference (I do hear one, but I have to really pay attention unless things like cymbals get muddied), so in that regard I don't feel bad about owning or streaming lossy music. But the fact that there is a difference means high-quality files do have some value.
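The "16-bit is good enough" argument usually rests on an ideal quantizer's signal-to-noise ratio, which the standard textbook formula puts at roughly 6 dB per bit:

```python
def quantizer_snr_db(bits):
    # Theoretical SNR of an ideal quantizer driven by a full-scale sine:
    # 6.02 * bits + 1.76 dB (the standard textbook approximation).
    return 6.02 * bits + 1.76

print(round(quantizer_snr_db(16), 2))  # 98.08 dB
print(round(quantizer_snr_db(24), 2))  # 146.24 dB
```

Roughly 98 dB already spans from a quiet room to the threshold of pain, which is why the extra bits matter far more for production headroom than for playback.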
 
Does he really discount the need for higher sampling rates?
Well, the title of his original blog post (and quoted in the YouTube description) was: "Why you don't need 24 Bit 192 kHz listening formats". 😁

That being said, Monty of Xiph did make the argument that 16-bit/44.1 kHz is good enough for the majority of listening because the extra information is hardly discernible in blind tests.
This is true and was previously found by a famous study performed by another very credible scientist, James Moore. (bio in link below) This study was discussed in the well-respected pro-audio trade magazine Mix in an article titled, "The Emperor's New Sampling Rate." https://www.mixonline.com/recording/emperors-new-sampling-rate-365968
 
Well, the title of his original blog post (and quoted in the YouTube description) was: "Why you don't need 24 Bit 192 kHz listening formats". 😁


This is true and was previously found by a famous study performed by another very credible scientist, James Moore. (bio in link below) This study was discussed in the well-respected pro-audio trade magazine Mix in an article titled, "The Emperor's New Sampling Rate." https://www.mixonline.com/recording/emperors-new-sampling-rate-365968
I like this article. The author himself says he can hear a difference in 96 kHz recordings and thinks it's because the smaller time intervals affect how the left and right ears hear the sound. I wonder if people can devise experiments with tight enough tolerances to prove or disprove this theory, the same way they came up with a way to detect gravitational waves.

It should be reassuring to people who traded their CD collection for streamed audio (lossy, though lossiness is orthogonal to the sampling discussion) that they're not missing out on much. I've been happy with the iTunes songs I've collected over the years despite growing my CD collection (which is collecting dust).

I did build a small collection of Super Audio CDs, and while they do sound better, I think it's because of new mastering rather than the effects of DSD. Using a program like Audirvana, which lets me upsample music to higher rates and to DSD (to whatever level your DAC supports), DSD sounds a bit worse, or maybe just different, but it also feels more pleasant for casual listening. Upsampling to 768 kHz just sounds different, maybe even worse, as though something is lost, but I have no words to describe it.
 
I like this article. The author himself says he can hear a difference in 96 kHz recordings and thinks it's because the smaller time intervals affect how the left and right ears hear the sound.

Well, in the end the TL;DR of it is that people had the same chance of picking the high-res audio in the test as they did flipping a coin. (And interestingly, those with superb hearing — capable above 15 kHz — actually had poorer results.)

Upsampling to 768 kHz just sounds different, maybe even worse, as though something is lost, but I have no words to describe it.

It may introduce pleasing distortion? Who knows... If it sounds good, it is good. All other things being equal, though, most people won't or can't hear the difference between 44.1 and 96 kHz. (The fact that the vast majority of audio engineers who actually produce the music stick with 44.1/24 or 48/24 is revealing as well.) However, as all things are rarely equal, there are valid reasons for recording at a higher rate: lower latency when tracking live instruments, a particular piece of gear performing better at a higher sample rate, the client demanding it, and so on.
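On the latency point, the arithmetic is simple: an audio interface's buffer holds a fixed number of frames, so the same buffer drains faster at a higher sample rate. The 128-frame buffer below is just a common example value:

```python
def buffer_latency_ms(frames, sample_rate):
    # One-way latency contributed by a single audio buffer, in milliseconds.
    return 1000.0 * frames / sample_rate

print(round(buffer_latency_ms(128, 44_100), 2))  # 2.9 ms
print(round(buffer_latency_ms(128, 96_000), 2))  # 1.33 ms
```

Halving the per-buffer latency can matter a lot when a musician is monitoring themselves through the computer in real time.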

At any rate, I'll sign off with this: it's no big secret that lossy codecs seriously detract from the master recordings, with the worst offenders introducing digital clipping and distortion along the way. (A prime reason Apple encouraged engineers to enroll in the "Mastered for iTunes" program.) As such, I would gladly welcome a high-res service from Apple and the ability to purchase or stream tracks via higher-quality, lossless codecs.

Can't wait to see what Apple has in store. Cheers all! :)
 
Well, in the end the TL;DR of it is that people had the same chance of picking the high-res audio in the test as they did flipping a coin. (And interestingly, those with superb hearing — capable above 15 kHz — actually had poorer results.)
This is what I find most significant about it. That, even though the math shows there SHOULD be a discernible difference, in reality, there’s not, even among the trained. By extension, this should also mean that, given the proper instrument, it should be very hard to audibly tell the difference between a real instrument and a software modeled one if it’s computationally accurate enough.
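The "coin flip" result can be made concrete with a quick binomial check: the chance of scoring at least k correct out of n ABX trials by pure guessing. The trial counts below are made up purely for illustration:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    # Probability of getting k or more trials "right" out of n by pure guessing.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With only 10 trials, 6 correct answers happen by luck about 38% of the time...
print(round(p_at_least(6, 10), 3))    # 0.377
# ...so a test needs many trials before "better than chance" means anything.
print(round(p_at_least(14, 16), 4))   # 0.0021: 14/16 would be real evidence
```

This is why well-run listening studies need large numbers of trials per subject, not just one or two A/B comparisons.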

This being the case, I wonder if the Apple solution is really just “more bits” OR “better bits”.
 