Bits or sample rate above 44.1/16 won't result in better sound quality for the consumer. There are other reasons studio engineers will use higher bit depths while recording (it allows them to record at very low levels and not have to worry about overloading or clipping converters), but for the end user... there are virtually no commercially available recordings that are so low in level that 16 bits aren't sufficient to accurately reproduce the music.
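To put rough numbers on that, here's a quick Python sketch (my own back-of-the-envelope illustration; the 6.02N + 1.76 dB figure is the textbook quantization-SNR formula for a full-scale sine, and the 18 dB of tracking headroom is a common rule of thumb, not a fixed standard):

```python
# Theoretical dynamic range of linear PCM: ~6.02*bits + 1.76 dB.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB theoretical dynamic range")

# Tracking engineers leave generous headroom to dodge clipped converters;
# 24-bit makes that cheap, while 16-bit would eat into the noise floor.
headroom = 18  # dB deliberately left unused while recording (assumed figure)
for bits in (16, 24):
    print(f"{bits}-bit with {headroom} dB headroom: "
          f"~{dynamic_range_db(bits) - headroom:.0f} dB left above the floor")
```

Even after giving up 18 dB, a 24-bit recording still has far more usable range than a finished 16-bit master needs, which is exactly why the extra bits matter in the studio but not on the couch.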


Where 24-bit truly shines is when listening to acoustic sounds. Jazz, classical, and piano work sounds better at 24-bit. The ring-outs are much cleaner. Cymbals aren't harsh. Generally it's the extreme ends of the audio spectrum that benefit the most.
 
Sorry, but bit depth has nothing to do with producing the higher frequencies.
Yes it does. The encoding has fewer steps of volume as frequency increases because more bits are taken up to encode frequency and fewer are available to encode volume. Volume becomes less accurate, more like a square wave and less like a smooth function. This causes the harshness some listeners note. Another thing that causes problems is the incompetent design of the volume encoding. Our ears are logarithmic systems, so higher resolution is required at low volumes than high. Red book encoding has that reversed, less resolution at low volume than at high volume. This causes high-order distortion. Wider encodings have a lot more bits for volume, smoothing out both of these sources of distortion.
 
You're absolutely correct. High frequencies suffer from lower bit depth and sample rates.
 
Apple effectively killed high-res audio a decade ago in the form of SACD and DVD-A, by feeding MP3s to hordes of "you can't tell the difference anyway" iSheep via the iPod, relegating hi-res digital formats to "elitist snobs" and "pretentious" vinyl enthusiasts with good hearing and hi-fi stereo systems. Some people don't use earbuds, some people aren't deaf, and some people DO care.

The iRony is that most music is produced on Macs, and yet Apple has insisted on degrading the artist and the listener by shoving lossy down our throats for the past decade.

"All the kids will eat it up, if it's packaged properly".

Screw you Apple, your insistence on using proprietary ALAC instead of open-source FLAC has guaranteed you will get none of my money in this realm.
 
Isn't a good portion of the issue due to DRM needs that their labels demand?
 
I am a lurker. Once or twice a year I'll actually comment.

As a former professional PT Certified audio engineer - this comment is 100% accurate. And it needed to be said.

You can't improve the dynamic range of masters. They are already compressed and limited.

The "Loudness Wars" has brought us "here". And for most popular music the consumers/listeners have chosen more punch, less fidelity.

If you want to change the dynamic range you'll have to convince the music industry and record labels to drastically change an essentially ingrained trend.

It won't happen.

If you truly want dynamic range you will have to go back to analog records.

Convenience has trumped the details.

24-bit audio .wav/.aiff files are a noticeable and welcome option for me.

*This is why Jimmy Iovine was an important acquisition.
I have a BS in audio engineering production and technology from MTSU's best-in-the-world RIM program. I prefer live engineering and stay current by doing so part time, but I had to learn everything about analog and digital audio (including how a CD player works, down to what bits mark what, quantization, etc.).

It's nice to get validation from a fellow colleague every once in a while. Thank you, truly.
 
This doesn't explain anything. Just airy marketing speak with a complete lack of technical substantiation.
More likely it's snake oil designed to sell you more expensive "certified" devices that you don't need. Here's a long thread with a more educated discussion:

https://www.hydrogenaud.io/forums/index.php?showtopic=107666


Ya, that sounds a lot like good ol' "MP3PRO". Once you start talking about backwards compatibility, you're talking about a ton of baggage.

Oh, and someone is getting a licensing cut... because Money.
 
You're absolutely correct. High frequencies suffer from lower bit depth and sample rates.

50% correct. High frequencies can suffer from low sample rates, but bit depth has absolutely nothing to do with frequency; all the bit depth does is increase the resolution with regard to what magnitude the signal is at when sampled. Increased resolution (higher bit depth) does reduce the noise floor, but the 16-bit noise floor is already incredibly low; it doesn't need to go any lower. The dynamic range is also as high as it needs to be with 16-bit. It isn't the format's fault if the audio engineers don't use the capability of the dynamic range, but that has nothing to do with what the format is capable of. Additionally, 44.1 kHz sampling satisfies the Nyquist theorem for all audio within our hearing range. Anything higher than that is a waste on the listening end. Please stop spreading misinformation.
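A minimal numpy sketch of that noise-floor point (my own illustration, assuming numpy; the quantizer here is plain rounding with no dither):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                  # one second of samples
x = np.sin(2 * np.pi * 1000 * t)        # full-scale 1 kHz sine

def quantize(sig, bits):
    scale = 2 ** (bits - 1) - 1         # e.g. 32767 for 16-bit
    return np.round(sig * scale) / scale

for bits in (8, 16, 24):
    err = quantize(x, bits) - x
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits}-bit SNR: ~{snr:.0f} dB")
# 16-bit lands near the textbook ~98 dB: a noise floor already far
# below anything audible at sane playback levels.
```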
 
Lossless doesn't cut it for me. I need the full raw uncompressed quality to hear it as it was mastered. Lossless dulls out some of the minute details, sharpness and punchy bass elements that you don't really get until you jump up past about 900k -- ALAC or FLAC is not good enough, you need AIFF or WAV to really hear it as it was intended...better yet, vinyl :)

Basically, anytime you add any kind of compression algorithm to the original it dumbs it down, fuzzes up the highs and makes the sharper elements of the bass less pronounced. It takes either a really good stereo system or high-end speakers or headphones to pick up on this -- usually the larger ones with more bass response can differentiate the higher quality audio better from the compressed versions. With the bundled earbuds that come with the iPhone, they can't reproduce the higher end bass elements like a larger speaker system can, or even high end studio over-ear monitors, so you would not be able to tell a difference.

I bet one million dollars you cannot hear the difference. This is why nobody can take audiophiles seriously.
I partly agree with WardC. At home I have set up a Sonos playbar, sub and play5. I then have both a Spotify and a Deezer Elite account. You can queue up the same tracks on both services, and most times you'll hear a noticeable difference. The best way to describe the difference is that the Deezer higher-bitrate tracks have better, sharper low end (bass) and better high-end notes. The Spotify tracks have missing/blunted high-end notes, and the low end plays more between the speakers and bass. It is more muddied.

If Apple would offer higher bitrate, I'd be on board and stop buying CDs. There is a market for this. Most users who only use the standard headphones are usually the ones who also only buy SD and not HD. Classical music is a good example. The music can be very subtle and demands detail. Many of its listeners are older and can spend hundreds on good speakers/headphones, because it's about getting the best experience possible.
 
Yes it does. The encoding has fewer steps of volume as frequency increases because more bits are taken up to encode frequency and fewer are available to encode volume. Volume becomes less accurate, more like a square wave and less like a smooth function. This causes the harshness some listeners note. Another thing that causes problems is the incompetent design of the volume encoding. Our ears are logarithmic systems, so higher resolution is required at low volumes than high. Red book encoding has that reversed, less resolution at low volume than at high volume. This causes high-order distortion. Wider encodings have a lot more bits for volume, smoothing out both of these sources of distortion.

Every bit in a sampled audio signal is for amplitude only. Every time a sample is taken, you get a 16-bit value that denotes where the amplitude is at that moment. If you sample more frequently, you end up with more 16-bit amplitudes. That is it; any additional bits beyond that are just telling the computer how to parse and use those samples correctly to reproduce the signal. Higher frequencies do not require more bits to reproduce accurately. Frequencies only require that the Nyquist theorem is satisfied with regard to sample rate. 44.1 kHz sampling satisfies the Nyquist theorem for our entire hearing range, allowing an accurate reproduction of any and all frequencies in that range. 16 bits is enough to ensure the noise floor is low enough that we can't hear it without cranking the volume up to dangerous levels, and to provide enough dynamic range for good reproduction of levels from as low as a whisper up to extremely loud, without adjusting the volume knob while listening. There is literally no reason to increase the bit depth or sample rate on the listening end.
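To make the "every bit is amplitude" point concrete, here's a tiny numpy illustration (my own sketch, not from the post above): each sample stores only a level, yet a 20 kHz tone sampled at 44.1 kHz is still unambiguously a 20 kHz tone, because frequency lives in the sequence of samples, not in the bits of any one sample.

```python
import numpy as np

fs = 44100
n = fs                                   # one second of samples
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 20000 * t)        # 20 kHz tone, top of the hearing range

# Every sample is just an amplitude; the FFT recovers the frequency
# from the sample sequence, exactly as Nyquist promises below fs/2.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, 1 / fs)
print(f"spectral peak at {freqs[np.argmax(spectrum)]:.0f} Hz")  # -> 20000 Hz
```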
 
If Apple would offer higher bitrate, I'd be on board and stop buying CDs. There is a market for this. Most users who only use the standard headphones are usually the ones who also only buy SD and not HD. Classical music is a good example. The music can be very subtle and demands detail. Many of its listeners are older and can spend hundreds on good speakers/headphones, because it's about getting the best experience possible.

Hang on a second... this story was about a streaming audio format. Once we start talking about STREAMING for classical music, we've lost the plot.

Spotify, et al, might have a questionable business practice, but there's no doubt they built a streaming business on the formats currently in play. Why? 90+% of what they stream out is pop music. Pop music, the predominant product, does NOT benefit from the improved quality *within the marketplace*. If we want to start talking about LSO and 5.1 streaming for movies, BBC streams from Proms, etc., that's a different ballpark altogether.

This entire exercise might be a new streaming format that has 99% of the quality that 128 m4a has, but at half the data... just spitballing, but let's not get ahead of ourselves. After having that whole 64k WMA = "CD quality" claim shoved around for so long, I'm not going to get super excited about a *streaming* format coming in and offering local-file quality of any kind. In fact, we're still a long way from Netflix making BluRay "obsolete", yet Netflix (et al) have quite a business.

Unknot those knickers until we get some facts.
 
High frequencies can suffer from low sample rates, but bit depth has absolutely nothing to do with frequency; all the bit depth does is increase the resolution with regard to what magnitude the signal is at when sampled. Increased resolution (higher bit depth) does reduce the noise floor, but the 16-bit noise floor is already incredibly low; it doesn't need to go any lower.
More bits isn't really about noise floor; it's about quantization error.
Additionally, 44.1 kHz sampling satisfies the Nyquist theorem for all audio within our hearing range. Anything higher than that is a waste on the listening end. Please stop spreading misinformation.
Nyquist isn't the point. Dithering and low-pass filtering are.
 
More bits isn't really about noise floor; it's about quantization error.

Nyquist isn't the point. Dithering and low-pass filtering are.


Adding 50% to file size, so that a listener can quite literally hunt for a rounding error that adds up to nothing worse than true CD audio, isn't relevant to a streaming conversation.

Dithering and low-pass are done at the mastering level.

8-bit audio (Nintendo stuff) - 256 points of difference.
16-bit is 65,536. That's CD audio.
24-bit is the fallacy that people using actual ears can sort out the potential "rounding errors" of the 256 extra points between any one level of 65,536 and the next.

24- and 32-bit audio is very relevant. Just not at all to streaming. It's wasting an extra byte for every two.
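The step counts behind that list, as plain arithmetic (my own sketch, nothing vendor-specific):

```python
for bits in (8, 16, 24):
    print(f"{bits}-bit: {2 ** bits:,} amplitude steps")
# 8-bit:          256
# 16-bit:      65,536
# 24-bit:  16,777,216  -> 256 extra steps between adjacent 16-bit levels

# And the streaming cost: 24-bit samples take 3 bytes instead of 2.
print(f"raw PCM size increase going 16 -> 24 bit: {3 / 2 - 1:.0%}")  # 50%
```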
 
More bits isn't really about noise floor; it's about quantization error.

Nyquist isn't the point. Dithering and low-pass filtering are.

Quantization error is what defines the noise floor, so they are referring to the same ultimate effect. Nyquist is the point; dithering only helps to flatten/reduce harmonics at the noise floor. It may raise the noise floor a little bit, but the trade-off is removing the harmonics. Then they can shape it further, and it effectively gives you a few more decibels of dynamic range. Low-pass filtering that is done correctly will not impact the frequencies in our hearing range.
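For anyone curious, here's a small numpy sketch of TPDF dither (my own illustration; the -12 dBFS tone level and the RNG seed are arbitrary choices). It shows the trade described above: total error power rises by a few dB, but the error becomes signal-independent.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, bits = 44100, 16
lsb = 2.0 ** -(bits - 1)                   # one quantization step
t = np.arange(fs) / fs
x = 0.25 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz tone at -12 dBFS

def quantize(sig):
    return np.round(sig / lsb) * lsb

plain = quantize(x)
# TPDF dither: sum of two independent uniform noises, 2 LSB peak-to-peak.
dither = (rng.uniform(-0.5, 0.5, fs) + rng.uniform(-0.5, 0.5, fs)) * lsb
dithered = quantize(x + dither)

for name, y in (("undithered", plain), ("dithered", dithered)):
    err = y - x
    print(f"{name}: error power {10 * np.log10(np.mean(err ** 2)):.1f} dBFS")
# The dithered error is ~4-5 dB higher in total power, but it is now
# uncorrelated with the signal: the distortion harmonics are replaced
# by a flat, benign noise floor (visible if you FFT the two errors).
```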
 
I partly agree with WardC. At home I have set up a Sonos playbar, sub and play5. I then have both a Spotify and a Deezer Elite account. You can queue up the same tracks on both services, and most times you'll hear a noticeable difference.
I wish people would stop posting this kind of comparison. You *cannot* compare codecs based on material from different sources. Spotify and Deezer could be using different source material, applying different mastering, or perhaps running their music through certain filters (e.g. loudness filters) to make them sound "better" (yes, this kind of cheating is very common in the industry). It's just not a valid methodology.

If you want to do a real comparison, you need to use two encodings of the *same* source material. You also need to do the test blind, otherwise your own bias *will* fool you (everybody thinks they are immune to this, but nobody is).
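A sketch of what that kind of fair test looks like in code (my own illustration; `play` is a hypothetical playback hook, and `a`/`b` must be two encodings of the same level-matched master):

```python
import random

def abx_trial(play, a, b):
    """One ABX trial: play A, then B, then X (secretly A or B at random);
    the listener must identify X."""
    x_is_a = random.random() < 0.5
    for clip in (a, b, a if x_is_a else b):
        play(clip)
    answer = input("Was X 'a' or 'b'? ").strip().lower()
    return (answer == "a") == x_is_a

def abx_session(play, a, b, trials=16):
    correct = sum(abx_trial(play, a, b) for _ in range(trials))
    # 12+/16 correct is significant at p < 0.05 under a binomial test;
    # anything near 8/16 is indistinguishable from coin-flipping.
    print(f"{correct}/{trials} correct")
```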
 
I'm not getting my hopes up until they actually announce something as we've seen similar reports several times in the past few years and nothing has materialised.
 
Where 24-bit truly shines is when listening to acoustic sounds. Jazz, classical, and piano work sounds better at 24-bit. The ring-outs are much cleaner. Cymbals aren't harsh. Generally it's the extreme ends of the audio spectrum that benefit the most.

The main reason is probably that the compression algorithms don't really seem to care if they mess up the complex harmonics of certain instruments; they just quash them all to a note, and you lose the subtlety and delicacy.

Not even sure it's the bits that make a big diff; I think it's simply the compression itself, from my own experience testing things on my own setup.

But, on 99% of music out there, it makes no diff at all unless the bitrate is low enough and then compression artefacts show up even in that kind of music.
 
The article is talking entirely about audio for the streaming end, which has nothing to do with production.
Of course it does. The article MR quotes addresses improving audio hardware standards to accommodate HQ audio. MR goes into the whole HQ sound file download issue and links to past articles about it, but that doesn't make this article "entirely about" audio for streaming. Why does the article state manufacturers are preparing their own Lightning cables if it's all about streaming?

If Apple improves the audio capability of the iPhone, then the iPhone will be able to create HQ audio natively to match its pro 4K video abilities, and now we're discussing production. If the iPhone can record HQ audio, then it's got to be able to play it back somehow, so the user can work with it, right? Currently, when I open a 24-bit/48K ProTools session on my Mac, I cannot stream it over AirPlay. I can record it, but I have no way to work with it over AirPlay. It's similar to the 4K video issue, in which the only way to see native 4K playback is to export the video file to a device which will play it.

Apple is rumored to be improving that ability for audio, if not video. Maybe the Apple TV 5 will support 4K video streaming as well. Regardless, if Apple enables HQ streaming via AirPlay, then "produced" content created on the iPhone will be the only audio able to take advantage of it, until Apple actually offers commercial content. That's probably going to be a big problem for the labels, and it's not likely to materialize any faster than 4K content is, or Blu-ray did.

I don't read the article as 'Apple is rumored to enable HQ audio for streaming purposes only'. Not sure why you do.
 
You are very sadly misinformed. There's a reason DAC equipment that can handle higher sample rates and bit depths is selling like hotcakes in the several-thousand-dollar range. They do sound closer to the real thing. I've done several A/B comparisons with the Berkeley Alpha DAC, and if you can't hear the difference then perhaps you need new batteries in that hearing aid.

By the way, Meridian's new MQA format is supposed to be the best yet.

oh how very sad that he is misinformed. i am shedding a tear. :(

the joke is on you!

both things manufactured solely to take advantage of ppl like you.
 
Of course it does. The article MR quotes addresses improving audio hardware standards to accommodate HQ audio. MR goes into the whole HQ sound file download issue and links to past articles about it, but that doesn't make this article "entirely about" audio for streaming. Why does the article state manufacturers are preparing their own Lightning cables if it's all about streaming?

If Apple improves the audio capability of the iPhone, then the iPhone will be able to create HQ audio natively to match its pro 4K video abilities, and now we're discussing production. If the iPhone can record HQ audio, then it's got to be able to play it back somehow, so the user can work with it, right? Currently, when I open a 24-bit/48K ProTools session on my Mac, I cannot stream it over AirPlay. I can record it, but I have no way to work with it over AirPlay. It's similar to the 4K video issue, in which the only way to see native 4K playback is to export the video file to a device which will play it.

Apple is rumored to be improving that ability for audio, if not video. Maybe the Apple TV 5 will support 4K video streaming as well. Regardless, if Apple enables HQ streaming via AirPlay, then "produced" content created on the iPhone will be the only audio able to take advantage of it, until Apple actually offers commercial content. That's probably going to be a big problem for the labels, and it's not likely to materialize any faster than 4K content is, or Blu-ray did.

I don't read the article as 'Apple is rumored to enable HQ audio for streaming purposes only'. Not sure why you do.

I read it as manufacturers readying their products to support streaming audio out at the greater bit/sample rate, not anything about the production side of things. I don't know why you are reading into it so far. The whole rest of the post is discussing music for the purpose of streaming, so it makes far more sense that the hardware mentioned is for the purpose of streaming/playing audio, not recording and producing audio.
 
Every bit in a sampled audio signal is for amplitude only. Every time a sample is taken, you get a 16-bit value that denotes where the amplitude is at that moment. If you sample more frequently, you end up with more 16-bit amplitudes. That is it; any additional bits beyond that are just telling the computer how to parse and use those samples correctly to reproduce the signal. Higher frequencies do not require more bits to reproduce accurately. Frequencies only require that the Nyquist theorem is satisfied with regard to sample rate. 44.1 kHz sampling satisfies the Nyquist theorem for our entire hearing range, allowing an accurate reproduction of any and all frequencies in that range. 16 bits is enough to ensure the noise floor is low enough that we can't hear it without cranking the volume up to dangerous levels, and to provide enough dynamic range for good reproduction of levels from as low as a whisper up to extremely loud, without adjusting the volume knob while listening. There is literally no reason to increase the bit depth or sample rate on the listening end.

Back in 1992, I had a Casio DAT recorder that only did 16/48. And while it sounded way better than anything else I could afford at the time (like a Synclavier or a Panasonic SV3700), the anti-aliasing filters could make for some fuzzy fade-outs. The point is that 24-bit audio for production and masters is critical if you are using dynamic instruments. It's not the loud end of the production that's a problem; it's the quiet end, where you can hear the quantization noise at the noise floor. Luckily, these days it's pretty easy to get decent converters on just about everything except toys.

That being said, I will concede that 16/44.1 is overkill for 99% of what's out there: The beats you are hearing over and over and over and over and over and over and over and over and over and over and over and over and over and over and over again were originally created on machines that ticked out 12-bit samples at maybe 25-32 kHz at best. So when you take everything, mix it "in the box" and "make it loud" (apparent loudness) so that it only has about 4 dB of dynamic range and autotuned vocalizations, even AAC VBR 128kbps is overkill for a production master. -Because people don't know that they don't know.

I had the great honor of recording a classical pianist last year on a Fazioli F278. I rented two Neumann TLM 49s for the occasion and recorded into my little Apogee Duet at 24/88.2. In editing, there is no EQ, no compression or limiting, and it came out gorgeous. I wish everybody could hear it, because it's exactly what that piano sounds like: It was all one big sweet spot, and it would have been fine even in mono because of this: Talent+Material+Instrument+Mic+Converter. I didn't have fancy mic preamps, but those fat condensers on that 10-foot grand made it great. -Even the mp3's on Soundcloud came out better than I thought because of my workflow. If you're interested, here's what a real piano played by a 70-something person sounds like: https://soundcloud.com/jameslongpdx/lchubert-death-and-the-maiden
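For the quiet-end point above, a short numpy sketch (my own illustration, plain rounding with no dither; -60 dBFS is just an example level for an exposed fade or decay):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
quiet = 10 ** (-60 / 20) * np.sin(2 * np.pi * 440 * t)   # a -60 dBFS passage

for bits in (16, 24):
    scale = 2 ** (bits - 1) - 1
    err = np.round(quiet * scale) / scale - quiet
    snr = 10 * np.log10(np.mean(quiet ** 2) / np.mean(err ** 2))
    print(f"{bits}-bit: quiet passage sits ~{snr:.0f} dB above the quantization floor")
# Roughly 38 dB at 16-bit (where exposed fades can get grainy) versus
# ~86 dB at 24-bit: the headroom argument for 24-bit tracking and mastering.
```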
 
That is a fantastic recording. Thank you for sharing it!
 
I read it as manufacturers readying their products to support streaming audio out at the greater bit/sample rate, not anything about the production side of things. I don't know why you are reading into it so far. The whole rest of the post is discussing music for the purpose of streaming, so it makes far more sense that the hardware mentioned is for the purpose of streaming/playing audio, not recording and producing audio.
Because I see this rumor as something directly tied to Apple offering "pro" audio on their devices. Based on what you're saying, Apple and all these manufacturers are gearing up to offer HQ streaming of audio created elsewhere, but won't be offering any production access to that HQ audio. Right?

It's clear to me that this rumor is based around being able to play the content created on the device itself. Because where exactly is Apple going to instantly get access to millions of 24-bit/96k master recordings of commercial music to stream from iTunes and Apple Music, to begin to warrant this HQ support?

If this rumor is true, I can create and edit "pro" audio on my ProTools app, something I can't do right now on an iOS device.
 