This is a terribly-written article. Why are you "confirming" the iPhone 6 isn't capable of hi-res audio? The rumor was about iOS 8, not the iPhone 6. If memory serves, every iPod and iPhone Apple has released has been capable of 24/96 playback because their audio chips have always supported it. It's just iOS that has never supported it.

So I don't understand why you're mentioning the iPhone 6 at all. Oh, I see, the Mashable article also does, so you just parroted what they said. I suppose that's much more sensational and clickbaity to try to blame the new phone. I wish there was an Apple news site that was more responsible with its reporting.
 
There is no noticeable difference between the 44.1kHz we mostly listen to now and 96kHz.
The human ear can't even hear above 20kHz; the rest is just there for filtering purposes.

I disagree. It really depends on the equipment you're using to reproduce the sound. There are overtones above 20kHz that many people can perceive (or feel) and which give certain sounds (like a cymbal crash) more presence.

But even more important than the frequency is the bit rate. Higher bit rates are capable of much more clarity in the sound. It's hard to describe, but once you've heard it the difference is pretty obvious.
 
That's not what Shannon's theorem says. It doesn't state that sampling at twice the frequency is enough to be able to reproduce a perfect signal. What it states is that if you sample under twice the frequency, there is no way you will be able to reproduce a signal at that frequency... It's a minimum, not a maximum.
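The "minimum, not maximum" point is easy to check numerically: sample below twice the frequency and the tone doesn't just degrade, it folds to a different frequency entirely. A minimal sketch (mine, not from any post here; `aliased_freq` is a made-up helper name, assuming an ideal sampler):

```python
def aliased_freq(f, fs):
    """Apparent frequency (Hz) of a pure f Hz tone after sampling at fs Hz."""
    return abs(f - fs * round(f / fs))

# Below half the sample rate, the tone survives intact:
print(aliased_freq(20_000, 44_100))  # 20000 -- unchanged
# Above half the sample rate, it folds back into the audible band:
print(aliased_freq(30_000, 44_100))  # 14100 -- aliased, unrecoverable
```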

...

Holy****! Someone who knows what they're talking about! So rare on this board. (At least in the top-rated comments section.)
 
People falling for the 24/192 "HD" hoax are usually the same people lining up to buy $150 HDMI cables...

but but but the salesman at best buy said i needed a $600 power cord and gold plated monster cables

In all seriousness I think about the only format change that would be a noticeable improvement would be multi-channel digital audio for those of us that listen on 5.1 systems, and even I'm not sure that would be a significant improvement over DTS Neo:X.

There is such a huge difference in production quality among what's available on iTunes and elsewhere that the best place to focus efforts is on getting quality remasters out there (and I think sometimes when people listen to an "HD" track and think it sounds better than normal is because it's been remastered, not because it's "HD").
 
I'm sorry, but 44.1kHz/16-bit AAC at 256kbps is "good enough." I challenge audiofreaks to legitimately tell the difference between AAC 256k and uncompressed in a blind test. You can't do it.

In most blind tests the discernible differences never materialize. In fact, the technically lower-quality format is often judged to sound better simply because it's a touch louder.
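For anyone curious what "legitimately tell the difference in a blind test" means statistically: an ABX run is a series of coin-flip trials, so you can score it with a binomial tail. A quick sketch (mine, not from the thread; `abx_p_value` is a made-up name):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value: chance of scoring >= `correct` by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 right out of 16 is the classic "passing" score (p < 0.05);
# 10/16 sounds impressive but is entirely consistent with guessing.
print(round(abx_p_value(12, 16), 3))
print(round(abx_p_value(10, 16), 3))
```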
 
Makes Sense

Why make competitive products sound better? I am sure that Apple will enable HD audio through iTunes and their own radio service before they let other companies support HD audio. This is not news.
 
That's not what Shannon's theorem says. It doesn't state that sampling at twice the frequency is enough to be able to reproduce a perfect signal. What it states is that if you sample under twice the frequency, there is no way you will be able to reproduce a signal at that frequency... It's a minimum, not a maximum.

For instance, a young person will hear a signal at 20kHz. To capture that signal, you need to sample at at least 40kHz. Then, that young person will be able to hear something, but you will have lost a lot of characteristics of the signal - for instance, you will not be able to know if the original signal was a sawtooth, a square or a sinusoid. So, significant information will have been lost.

That's why CD recordings sounded metallic at first. The solution, which is applied on all CDs, was to cut the frequency around 16kHz to avoid the destruction of the characteristics of the signal around Shannon frequencies.

That's why 96kHz is interesting, because it keeps quality in the upper part of the spectrum.

Moreover, 24-96 is not only about 96kHz, it's also 24 bit. And there, you gain a lot. The problem with CD and digital capture in general is that the scale is linear while most of our senses use a logarithmic scale.
The result is that at the bottom of the intensity scale you have very, very low resolution in your samples, while the human ear (or eye) still has good resolution. This is especially visible in photography: if you brighten the shadows, you will see a lot of banding, because the sample resolution is very low in the shadows. It's the same problem with audio: CD killed the dynamic range (hence the loudness war), because it's not that good when you have a lot of dynamics during the low-volume parts.
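The linear-vs-logarithmic point can be put in numbers. A rough sketch (mine, with made-up helper names) using the standard 20·log10 formulas: the total range grows 6 dB per bit, but a quiet passage peaking at -60 dBFS only ever touches a tiny slice of the available steps.

```python
import math

def dynamic_range_db(bits):
    """Theoretical range between full scale and one quantization step."""
    return 20 * math.log10(2 ** bits)

def steps_available(bits, level_dbfs):
    """Quantization steps left for a signal peaking at level_dbfs."""
    return (2 ** bits) * 10 ** (level_dbfs / 20)

print(round(dynamic_range_db(16), 1))   # 96.3 dB for CD
print(round(dynamic_range_db(24), 1))   # 144.5 dB for 24-bit
print(round(steps_available(16, -60)))  # ~66 steps -- coarse, like banded shadows
print(round(steps_available(24, -60)))  # ~16777 steps at the same quiet level
```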

The thing is, by far the largest problem with music reproduction is the original mastering. A well mastered CD quality file with proper dynamic range is more than good enough - the problem is the ever-increasing rarity of good mastering with engineers just aiming for "loud" in most instances. The dynamic range of CD covers all that can be heard so any tiny perceptible difference "HD" audio may bring is largely irrelevant when the source is likely to be compromised in the first place.
 
Bono is "secretly" working with Apple to create a new digital format, he will be quite happy when his check arrives from Apple!!


I've wondered about this... No offence to Bono, but what would he really have to offer with respect to a new audio format? I doubt he's writing the algorithms or anything. Or is it simply that U2 will be recording music in the new format so there's some material ready at launch?
 
This is 100% BS. The only thing you are correct about is that ABX testing isn't relevant to the 24/96 "HD" audio question, but only because we can definitively prove mathematically (ironically, using the same sampling theorem you cited) that bit depths / sampling rates above 16/44.1 are useless for playback and simply waste space.

See here for a thorough explanation of the math: https://xiph.org/~xiphmont/demo/neil-young.html

People falling for the 24/192 "HD" hoax are usually the same people lining up to buy $150 HDMI cables...


Also, don't ever group techno and electronica with jazz and, above all, classical music on a scale of sonic complexity again.

Every time this subject comes up, there's misinformation from both sides.

A 44.1 kHz sample rate doesn't give you a usable maximum frequency of 22050 Hz, because there's a low-pass filter up there. LP filters don't act like a brick wall where, for example, 20000 Hz gets passed and 20001 Hz is removed completely.

Instead, they work on a curve, progressively rolling off more amplitude through the higher frequencies.

In an ideal world, you'd want a well-designed LP filter to operate WAY out of the range of human hearing and a sample rate of 44.1 kHz isn't going to give you that. 96kHz is overkill and 60 kHz would be ample bandwidth but that ship has sailed.

It's a subtle argument and not one someone listening through Apple's crappy earbuds and their cheap DAC is ever likely to benefit from.

24 bit is a different, and even more compelling argument.

The biggest problem is that people don't know what constitutes 'good' sound quality because they've never heard it. They'll spend $2,000 on a TV and pick up some god-awful stereo bar thing or some Bose monstrosity and believe they're listening to high quality audio. Similarly, people hide speakers in weird places, put them in corners, and have living rooms that bounce sound around like crazy, leading to phase issues. In those kinds of situations, 44.1 vs. 96 is the least of their problems and HD audio is really just going to drive up sales of 4TB hard drives with next to no benefit.
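To put some numbers on the filter argument above: a rough sketch (mine, not from the thread) using the textbook Butterworth magnitude response. Even a steep 8th-order filter barely attenuates anything between a 20 kHz cutoff and the 22.05 kHz Nyquist point at CD rates, which is why real converters need much more aggressive, compromise-laden designs; at a 96 kHz rate there's far more room above the audible band.

```python
import math

def butterworth_atten_db(f, fc, order):
    """Attenuation (dB) of an ideal Butterworth low-pass at frequency f Hz."""
    return -10 * math.log10(1 + (f / fc) ** (2 * order))

# 8th-order filter, 20 kHz cutoff, measured at CD's Nyquist frequency:
print(round(butterworth_atten_db(22_050, 20_000, 8), 1))  # ~ -7.6 dB: far too little
# Same filter with the breathing room a 96 kHz rate allows (Nyquist = 48 kHz):
print(round(butterworth_atten_db(48_000, 20_000, 8), 1))  # ~ -60.8 dB
```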
 
I would get a new iPhone immediately if there was a noticeable improvement in the DAC or the headphone jack output. It's such a pain to bypass the iPhone DAC now.

It's strange to see people arguing over how many more PPI are necessary for a 5" phone screen when we already have Retina, but then see arguments that the average user can't hear the difference between 256kbps and lossless.
 
I've wondered about this... No offence to Bono, but what would he really have to offer with respect to a new audio format? I doubt he's writing the algorithms or anything. Or is it simply that U2 will be recording music in the new format so there's some material ready at launch?

It sounds like BS to me. There's zero need for a new format audio-wise. Maybe it's some kind of multimedia monstrosity.

I hope they're not asking him for feedback on audio quality. After 30 years of standing behind stage monitors, warbling away while a drummer smashes cymbals behind him, I doubt he can hear much over 14 kHz anyway.
 
The thing is, by far the largest problem with music reproduction is the original mastering. A well mastered CD quality file with proper dynamic range is more than good enough - the problem is the ever-increasing rarity of good mastering with engineers just aiming for "loud" in most instances. The dynamic range of CD covers all that can be heard so any tiny perceptible difference "HD" audio may bring is largely irrelevant when the source is likely to be compromised in the first place.

Exactly.

The main difference in quality these days vs the 70s, 80s and early 90s is due to The Loudness War.

There has been no peer-reviewed scientific proof that 96/24 "HD Audio" supplies any more audio information to the listener than 44/16 when actually mixed/mastered in a way that doesn't eliminate a wide dynamic range far before it gets to the listener (and, in fact, it has been shown mathematically that 44/16 or 48/16 covers the entire human range of hearing).

The *only* place that 96/24 (or higher) helps is in the studio, where it allows an engineer to use digital effects on a raw recording without hitting the "ceiling" that would cause clipping (undesired distortion). This can happen because effects and other manipulation can introduce "noise" or other artifacting (either intentionally or due to flaky design). But if the effects fit into the "data space" of a 44.1/16 file, it doesn't matter. A properly engineered mixdown/mastering from 44.1/16 or 96/24 (or 192/24) will sound the same when it's sitting in a FLAC file or any other "lossless" medium.

There is no discernible difference on the listener end from simply encoding/outputting higher than 44/16.
 
Basically iPhone 6 is a flop.

Nah, just the rumor mill creating unrealistic rumors and wishes that don't always come true; hence the term rumor.

noun
a currently circulating story or report of uncertain or doubtful truth: they were investigating rumors of a massacre | rumor has it that he will take a year off.

Haha.

Besides the rumors, there are facts though. The phone bends in the front pocket. Some of its big new features are things Android users have already been enjoying. It doesn't help that an iOS 8 update cut off so many people's cellular service in the wake of the iPhone 6's release.

Apple just saw a 3.8% drop in shares.

iPhone 6 kind of is a flop.
 
Better ears

A whole article based on a rumor about something Apple never announced, and most people's ears, in most earbuds or Beats, are not going to notice any significant difference in quality even if it were enabled.

Not true, most people would hear the difference, especially going from 128kbps MP3s to 24-bit/96kHz AIFFs. More and more people are streaming music from higher-quality music sites, and more HD sites are popping up all over the web.

Apple is not outwardly supporting the higher-end audio arts right now. Mac Pros don't have PCIe slots for digital audio cards, as an example. With their focus on an update to Photos and photo editing, and an improved camera on the iPhone 6/6+, it's only a matter of time before higher-quality audio comes into focus (pun attempted). The entire iTunes Store offers low-bitrate AAC files for download. When they start to offer 24-bit/96kHz AIFFs in the iTunes Store, that will be reflected in the iPhones, iPads, and iPods.

Hopefully, buying Beats was a start in that direction.
 
In an ideal world, you'd want a well-designed LP filter to operate WAY out of the range of human hearing and a sample rate of 44.1 kHz isn't going to give you that. 96kHz is overkill and 60 kHz would be ample bandwidth but that ship has sailed.

It's a subtle argument and not one someone listening through Apple's crappy earbuds and their cheap DAC is ever likely to benefit from.

24 bit is a different, and even more compelling argument.

Agreed on all points. Well, I don't agree 96 is "overkill" -- with high-quality source material I can pick out 96 from 48, but I have trained "golden" ears. 192 is definitely overkill though. But as you say, 24-bit resolution should be the real focus here. Going from 16 to 24 makes a huge difference because of the extra resolution afforded per sample. Well, assuming the source material was mastered at 24-bit or from wide-bandwidth analog tape.

I agree with the earbuds, and I now wonder if the acquisition of Beats, along with the rumor of a new format, is part of an overall rollout of HD Audio. So that Apple can push Apple-branded headphones that are "HD Audio" ready.
 
I disagree. It really depends on the equipment you're using to reproduce the sound. There are overtones above 20kHz that many people can perceive (or feel) and which give certain sounds (like a cymbal crash) more presence.

There is nothing to agree/disagree about; it's just math. I challenge you to find a reliable study demonstrating human perception of frequencies meaningfully above 20kHz. If anything, higher sampling rates may actually reduce the fidelity of the recording due to intermodulation distortion caused by ultrasonics.

But even more important than the frequency is the bit rate. Higher bit rates are capable of much more clarity in the sound. It's hard to describe, but once you've heard it the difference is pretty obvious.

:rolleyes: Really? Bit depth has nothing to do with "clarity" or "resolution" of the audio. Bit depth is merely a measure of the dynamic range the format is capable of encoding. 16-bit is enough to encode 120dB of dynamic range, or roughly the difference between an empty soundproof room and a sound loud enough to permanently destroy your hearing in seconds. I'm not sure you need any more than that.

----------

Every time this subject comes up, there's misinformation from both sides.

A 44.1 kHz sample rate doesn't give you a usable maximum frequency of 22050 Hz, because there's a low-pass filter up there. LP filters don't act like a brick wall where, for example, 20000 Hz gets passed and 20001 Hz is removed completely.

Instead, they work on a curve, progressively rolling off more amplitude through the higher frequencies.

In an ideal world, you'd want a well-designed LP filter to operate WAY out of the range of human hearing and a sample rate of 44.1 kHz isn't going to give you that. 96kHz is overkill and 60 kHz would be ample bandwidth but that ship has sailed.

It's a subtle argument and not one someone listening through Apple's crappy earbuds and their cheap DAC is ever likely to benefit from.

24 bit is a different, and even more compelling argument.

The biggest problem is that people don't know what constitutes 'good' sound quality because they've never heard it. They'll spend $2,000 on a TV and pick up some god-awful stereo bar thing or some Bose monstrosity and believe they're listening to high quality audio. Similarly, people hide speakers in weird places, put them in corners, and have living rooms that bounce sound around like crazy, leading to phase issues. In those kinds of situations, 44.1 vs. 96 is the least of their problems and HD audio is really just going to drive up sales of 4TB hard drives with next to no benefit.

This is wrong. Please read the linked article before responding.
 
Agreed on all points. Well, I don't agree 96 is "overkill" -- with high-quality source material I can pick out 96 from 48, but I have trained "golden" ears. 192 is definitely overkill though. But as you say, 24-bit resolution should be the real focus here. Going from 16 to 24 makes a huge difference because of the extra resolution afforded per sample. Well, assuming the source material was mastered at 24-bit or from wide-bandwidth analog tape.

I agree with the earbuds, and I now wonder if the acquisition of Beats, along with the rumor of a new format, is part of an overall rollout of HD Audio. So that Apple can push Apple-branded headphones that are "HD Audio" ready.

Do you have statistically significant results from an ABX test to back up your claim? Because what you are claiming to be able to do is a mathematical impossibility...
 
On a positive note, audiophiles over at HeadFi.org are reporting that the new DAC in the iPhone 6 gives a warmer and more detailed sound through quality headphones than previous models.

In other words, iPhone 6 is the best sounding iPhone to date! I just bought the B&W P7 headphones and this is actually a big selling point for me. Will definitely upgrade from my iPhone 5 now.
 