There is no noticeable difference between the 44.1kHz we mostly listen to now and 96kHz.
The human ear can't even hear above 20kHz, and the rest is just for filter purposes.
Yes, and I guess this is another sign that Apple is doomed.
I'm sorry, but 44.1kHz/16-bit AAC at 256kbps is "good enough." I challenge audiofreaks to legitimately tell the difference between 256k AAC and uncompressed in a blind test. You can't do it.
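For what it's worth, scoring such a blind test is straightforward: in a forced-choice ABX run, the odds of getting k or more trials right by pure guessing follow a binomial distribution. A minimal sketch in Python (the 12-out-of-16 figures below are made-up example numbers, not results from any actual test):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` answers right out of
    `trials` forced-choice ABX trials by guessing alone (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Example: 12 correct out of 16 trials.
# A value below ~0.05 suggests the listener really can hear a difference.
print(f"p = {abx_p_value(12, 16):.4f}")  # ~0.038
```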
That's not what Shannon's theorem says. It doesn't state that sampling at twice the frequency is enough to be able to reproduce a perfect signal. What it states is that if you sample under twice the frequency, there is no way you will be able to reproduce a signal at that frequency... It's a minimum, not a maximum.
For instance, a young person can hear a signal at 20kHz. To capture that signal, you need to sample at 40kHz at a minimum. That young person will then be able to hear something, but you will have lost a lot of the characteristics of the signal - for instance, you will not be able to tell whether the original signal was a sawtooth, a square or a sinusoid. So, significant information will have been lost.
That's why CD recordings sounded metallic at first. The solution, which is applied to all CDs, was to cut the signal off around 16kHz to avoid destroying the characteristics of the signal near the Shannon limit.
That's why 96kHz is interesting, because it keeps quality in the upper part of the spectrum.
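To make the square-wave point above concrete, here is a rough sketch (the 15kHz fundamental is just an illustrative choice): a square wave is its fundamental plus odd harmonics, and at a 44.1kHz sample rate everything from the 3rd harmonic up lies above Nyquist and is removed by the anti-alias filter, leaving only a sine-shaped fundamental. Whether that loss is audible is exactly what later replies dispute.

```python
fs = 44_100            # CD sample rate (Hz)
f0 = 15_000            # fundamental of a hypothetical square wave (Hz)
nyquist = fs / 2       # 22,050 Hz

# A square wave contains only odd harmonics: f0, 3*f0, 5*f0, ...
harmonics = [f0 * k for k in range(1, 20, 2)]
kept = [h for h in harmonics if h < nyquist]
lost = [h for h in harmonics if h >= nyquist]

print("harmonics kept below Nyquist:", kept)              # [15000] -- just the fundamental
print("harmonics removed before sampling:", lost[:3], "...")
```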
Moreover, 24/96 is not only about 96kHz; it's also about 24-bit. And there, you gain a lot. The problem with CD and digital capture in general is that the scale is linear, while most of our senses work on a logarithmic scale.
The result is that at the bottom of the intensity range you have very low resolution in your samples, while the human ear (or eye) still has good resolution. This is especially visible in photography: if you brighten the shadows, you will see a lot of banding, because the sample resolution is very low in the shadows. It's the same problem with audio: CD killed the dynamic range (hence the loudness war), because it's not that good when there are a lot of dynamics in the low-volume parts.
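A rough numerical sketch of that low-level resolution argument (the -60 dBFS 440Hz test tone is arbitrary, and real systems add dither, which changes the picture, so treat this only as an illustration of the linear-scale point):

```python
import numpy as np

def quantize(x, bits):
    """Round a signal in [-1, 1] to the nearest step of a `bits`-deep linear scale
    (no dither -- a deliberate simplification)."""
    step = 2.0 / (2 ** bits)
    return np.round(x / step) * step

fs = 44_100
t = np.arange(fs) / fs                                   # one second of samples
quiet = 10 ** (-60 / 20) * np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone at -60 dBFS

for bits in (16, 24):
    err = quantize(quiet, bits) - quiet
    snr = 10 * np.log10(np.mean(quiet ** 2) / np.mean(err ** 2))
    print(f"{bits}-bit: ~{snr:.0f} dB of signal-to-quantization-noise left at -60 dBFS")
```

The quiet tone keeps roughly 38 dB of headroom over the quantization error at 16-bit and roughly 86 dB at 24-bit, which is the extra low-level resolution being described.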
The S version is usually when the real iPhone comes out
Bono is "secretly" working with Apple to create a new digital format; he will be quite happy when his check arrives from Apple!!
This is 100% BS. The only thing you are correct about is that ABX testing isn't relevant to the 24/96 "HD" audio question, but only because we can definitively prove mathematically (ironically, using the same sampling theorem you cited) that bit depths / sampling rates above 16/44.1 are useless for playback and simply waste space.
See here for a thorough explanation of the math: https://xiph.org/~xiphmont/demo/neil-young.html
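The core of that argument is that samples taken above twice the bandwidth pin down a band-limited waveform exactly, shape and all, and it can be sanity-checked numerically. A minimal sketch (the 10kHz tone, the window length and the chosen instant are arbitrary; the finite window adds a small truncation error):

```python
import numpy as np

fs = 44_100                         # sample rate (Hz)
f = 10_000                          # a tone comfortably below Nyquist (22,050 Hz)
n = np.arange(-20_000, 20_000)      # sample indices around t = 0
x = np.sin(2 * np.pi * f * n / fs)  # the recorded samples

def reconstruct(t, samples, idx, fs):
    """Whittaker-Shannon interpolation: rebuild the value at time t (seconds)
    from discrete samples; exact for signals band-limited below fs/2."""
    return np.sum(samples * np.sinc(fs * t - idx))

t = 3.37 / fs                       # an arbitrary instant between two sample points
print("true value:          ", np.sin(2 * np.pi * f * t))
print("rebuilt from samples:", reconstruct(t, x, n, fs))   # agrees closely
```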
People falling for the 24/192 "HD" hoax are usually the same people lining up to buy $150 HDMI cables...
Also, don't ever group techno and electronica with jazz and, above that, classical music on a scale of sonic complexity again.
I've wondered about this... No offence to Bono, but what would he really have to offer with respect to a new audio format? I doubt he's writing the algorithms or anything. Or is it simply that U2 will be recording music for release in the new format, to have some music ready at launch?
The thing is, by far the largest problem with music reproduction is the original mastering. A well mastered CD quality file with proper dynamic range is more than good enough - the problem is the ever-increasing rarity of good mastering with engineers just aiming for "loud" in most instances. The dynamic range of CD covers all that can be heard so any tiny perceptible difference "HD" audio may bring is largely irrelevant when the source is likely to be compromised in the first place.
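One crude way to put a number on that is the crest factor (peak level minus average level): heavily "loud" masters have a small one. A sketch, assuming a 16-bit PCM file at a placeholder path "track.wav" (crest factor is only a rough proxy, not a proper EBU R128 loudness-range measurement):

```python
import numpy as np
from scipy.io import wavfile

# "track.wav" is a placeholder -- point this at any 16-bit PCM WAV file.
rate, data = wavfile.read("track.wav")
x = data.astype(np.float64) / 32768.0   # scale 16-bit integer samples to [-1, 1)
if x.ndim == 2:
    x = x.mean(axis=1)                  # fold stereo down to mono

peak_db = 20 * np.log10(np.max(np.abs(x)))
rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS, "
      f"crest factor {peak_db - rms_db:.1f} dB")  # small crest factor = squashed master
```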
Basically iPhone 6 is a flop.
Nah, just the rumor mill creating unrealistic rumors, and other wishes that don't always come true; hence the term rumor.
rumor, noun: a currently circulating story or report of uncertain or doubtful truth: they were investigating rumors of a massacre | rumor has it that he will take a year off.
A whole article based on a rumor that Apple never announced. And most people's ears, in most any earbuds or Beats, are not going to notice any significant difference in quality even if it were enabled.
I disagree. It really depends on the equipment you're using to reproduce the sound. There are overtones above 20kHz that many people can perceive (or feel) and which give certain sounds (like a cymbal crash) more presence.
But even more important than the frequency is the bit depth. Higher bit depths are capable of much more clarity in the sound. It's hard to describe, but once you've heard it the difference is pretty obvious.
Every time this subject comes up, there's misinformation from both sides.
44.1 kHz sampling doesn't give you a usable maximum frequency of 22,050 Hz, as there is a low-pass filter up there. LP filters don't act like a brick wall where, for example, 20,000 Hz gets passed and 20,001 Hz is removed completely.
Instead, they work on a curve, progressively rolling off more amplitude through the higher frequencies.
In an ideal world, you'd want a well-designed LP filter to operate WAY out of the range of human hearing and a sample rate of 44.1 kHz isn't going to give you that. 96kHz is overkill and 60 kHz would be ample bandwidth but that ship has sailed.
It's a subtle argument and not one someone listening through Apple's crappy earbuds and their cheap DAC is ever likely to benefit from.
24 bit is a different, and even more compelling argument.
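A rough way to see the transition-band squeeze described above: take a deliberately gentle analog-style low-pass with its corner at 20 kHz and check how far down it is by each system's Nyquist frequency. A sketch under those assumptions (the 4th-order Butterworth is just an illustrative stand-in, not how real anti-alias filters are built):

```python
import numpy as np
from scipy import signal

# A gentle 4th-order analog low-pass with its corner at 20 kHz (illustrative only).
b, a = signal.butter(4, 2 * np.pi * 20_000, btype="low", analog=True)

for f_hz, label in [(22_050, "Nyquist at 44.1 kHz"), (48_000, "Nyquist at 96 kHz")]:
    _, h = signal.freqs(b, a, worN=[2 * np.pi * f_hz])
    print(f"{label}: {20 * np.log10(abs(h[0])):6.1f} dB")
```

This gentle slope is only about 5 dB down at 22,050 Hz, so a CD-rate converter needs a far steeper (and harder to design) filter to avoid aliasing, while at a 96 kHz rate the same slope is already about 30 dB down by Nyquist.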
The biggest problem is that people don't know what constitutes 'good' sound quality because they've never heard it. They'll spend $2,000 on a TV and pick up some god-awful stereo bar thing or some Bose monstrosity and believe they're listening to high quality audio. Similarly, people hide speakers in weird places, put them in corners, and have living rooms that bounce sound around like crazy, leading to phase issues. In those kinds of situations, 44.1 vs. 96 is the least of their problems and HD audio is really just going to drive up sales of 4TB hard drives with next to no benefit.
Agreed on all points. Well, I don't agree that 96 is "overkill" -- with high-quality source material I can pick out 96 from 48, but I have trained "golden" ears. 192 is definitely overkill, though. But as you say, 24-bit resolution should be the real focus here. Going from 16 to 24 bits makes a huge difference because of the extra resolution afforded per sample. Well, assuming the source material was mastered at 24-bit or from wide-bandwidth analog tape.
I agree about the earbuds, and I now wonder if the acquisition of Beats, along with the rumor of a new format, is part of an overall rollout of HD Audio, so that Apple can push Apple-branded headphones that are "HD Audio" ready.
Legitimate question here... I haven't experienced HD audio myself.
If you need a special setup to tell that the audio is coming through in HD, instead of just hearing it when you listen, how much of a difference is really there?