Here's my guess: the iPhone 6, iPhone 6+ and the 2014 model iPad Air and iPad mini with the A8 system on a chip (SoC) do support playback of 24-bit 96 kHz sampling rate Apple Lossless files. However, that playback support will not be enabled until a later iOS 8.x version arrives, probably around February 2015.

Does this explain why the iPhone 6 models come in 64 GB and 128 GB versions? I also think Apple may publish a spec to allow external decoding of this new Apple Lossless format with an external DAC device connected to the iPhone/iPad via the Lightning port--which would mean a new generation of portable headphone amps that plug directly into the Lightning port and not only decode the new format, but also control playback with their own controls on the amplifier.

(EDIT: I believe that even with the data limitations of the Lightning connector--around 280 megabits per second sustained transfer rate per the USB 2.0 spec--there is more than enough bandwidth to handle the rumored 24-bit 96 kHz Apple Lossless format.)
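As a quick sanity check on that, here's the raw arithmetic (a sketch in Python; the ~280 Mbit/s figure is the one quoted above, and ALAC compression would only shrink the stream further):

Code:
# Raw PCM bitrate for the rumored format: bits x sample rate x channels
bits, rate_hz, channels = 24, 96_000, 2
pcm_mbps = bits * rate_hz * channels / 1e6
print(pcm_mbps)        # 4.608 Mbit/s before any lossless compression
print(280 / pcm_mbps)  # ~60x headroom on a ~280 Mbit/s Lightning link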
 
How is the 6 a "huge" step up from 5s in anything beyond a bump in screen size?

Speed about same
RAM same
DPI same
Sound quality same
NFC no one uses it

There's the 128 gig option now though. Is that what you mean?

I've compared my 5s with my 6 and the screen (although same dpi) looks obviously better. Don't know if it's the glass or some kind of filter.
Definitely faster, and my LTE and WiFi connections test about 30% and 10% faster respectively on speed tests... not sure why.

My signal is better on the 6 than the 5s in my office in the basement of my house (1 dot difference). Same # of signal dots upstairs, though, which makes me think the antenna may be better on the 6.

My BTLE connection is more reliable and faster when using the Wahoo Fitness app with my HRM and Adidas Speed Cell (http://micoach.adidas.com/speed_cell/) when I go running.

That's about it.
 
Well if you're going to put the goalpost wherever you'd like, you can consider pretty much anything a failure.

Michael Jordan only scored 58 points last night?
Avengers made only 1.5 billion worldwide?





There IS nothing more. A 22k signal, sampled at 44.1k, will play back exactly as the original (assuming it's also below the input and output filtering, which is why we would generally talk about 20k signals, not 22k).

A 22k signal means a 22k sine wave. If it's not a sine wave, that simply means other signals are present along with it. If those signals are above 22k they will be filtered out (as they should be), and if they are below they will also be recorded perfectly.
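To see this concretely, here's a minimal numpy sketch (example frequencies are arbitrary): a "non-sine" band-limited waveform is just a stack of sines, and every component below Nyquist is captured with its exact amplitude.

Code:
import numpy as np

fs = 44100
n = np.arange(fs)  # one second of samples, so FFT bin k = k Hz

# A complex waveform = several sines, all below the 22.05 kHz Nyquist limit
x = (1.00 * np.sin(2*np.pi* 5000*n/fs)
   + 0.50 * np.sin(2*np.pi*13000*n/fs)
   + 0.25 * np.sin(2*np.pi*21000*n/fs))

spectrum = np.abs(np.fft.rfft(x)) / (fs/2)
for f in (5000, 13000, 21000):
    print(f, round(spectrum[f], 3))  # 1.0, 0.5, 0.25 -- recorded exactly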



First, aliasing is what happens when digital output is NOT filtered. Modern digital playback is neither "bad" nor is aliasing an issue with high frequencies. All digital audio playback uses a low-pass filter on output; it's simply part of the design of how digital audio works. It takes out frequencies over 20k (not sure where you got this 16k idea from), and it's not there to hide some sort of problem--it's just part of how the process works properly.

If you ever hear aliasing on output, it's because you have a defective player that is not doing LPF properly (or at all). Any working CD player will play up to 20k without aliasing, period. That's just how digital audio works.
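A small numpy sketch of that behavior, assuming a 44.1 kHz converter: a tone below Nyquist comes through at its true frequency, while a tone above Nyquist (with nothing filtering it out) folds down to fs - f.

Code:
import numpy as np

fs = 44100
n = np.arange(fs)  # one-second capture, so FFT bin k = k Hz

def peak_hz(x):
    return int(np.argmax(np.abs(np.fft.rfft(x))))

# 19 kHz is below Nyquist (22.05 kHz): captured at its true frequency
print(peak_hz(np.sin(2*np.pi*19000*n/fs)))  # 19000

# 25 kHz is above Nyquist: with no filter ahead of the converter,
# it aliases down to 44100 - 25000 = 19100 Hz
print(peak_hz(np.sin(2*np.pi*25000*n/fs)))  # 19100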



96dB for 16 bit is false?? Are you kidding me? That's just a basic fact that everyone accepts, and you're trying to dispute it? Seriously? It's 6dB of dynamic range for each bit. Six times sixteen is ninety-six.

It's hard to believe, but it seems like you don't even understand how binary numbers work.

Upper half? It's a sixteen-digit binary number, not some weird way of splitting up bits. It's a range of amplitudes with 65536 possible values, so any amplitude in the range has that many options, and anything below the low end of that isn't loud enough to get recorded. Bits aren't somehow split up; each added bit doubles the number of possible amplitude values, which is exactly why it works just fine with something logarithmic like audio.
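For anyone following along, the arithmetic both of those points rest on, in a few lines of Python:

Code:
import math

print(2 ** 16)  # 65536 possible amplitude values in a 16-bit sample

# Each added bit doubles the value count, i.e. adds 20*log10(2) ~ 6.02 dB
for bits in (8, 16, 24):
    print(bits, round(20 * math.log10(2 ** bits), 1))
# 8 -> 48.2 dB, 16 -> 96.3 dB, 24 -> 144.5 dB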



Wow.



You can't have pianissimo? You're trolling, right? You're saying when I listen to an orchestral CD and I hear passages that are so quiet that I have trouble hearing them over the furnace in my house, those aren't pianissimo? The ones that I can actually measure as dozens of dB quieter than the loudest parts?

Half of it is bad, for the love of all that is holy.



Quoted for humor.



Insert trippy Scooby Doo flashback sound effects here.


And I hate to break it to you, but CD audio blows away the dynamic range of vinyl and audiocassette. There also is no easter bunny.

Unfortunately for you, Milo, Liktor is right and you are wrong.

Read up some more on digital theory, particularly how bit-depth relates to volume / dynamic range. Liktor is correct.
 
It's too much for me to read all 9 pages, so people, if someone mentioned something similar before, I apologize.

As far as I understand, you all would like to listen to HD audio on an iPhone with whatever headphones you have... get serious, people. Without the right HiFi setup--DAC, amplifier and speakers, or active speakers, or real headphones--your ears will only bleed and suffer... nonsense.
 
This is 100% BS. The only thing you are correct about is that ABX testing isn't relevant to the 24/96 "HD" audio question, but only because we can definitively prove mathematically (ironically, using the same sampling theorem you cited) that bit depths / sampling rates above 16/44.1 are useless for playback and simply waste space.

See here for a thorough explanation of the math: https://xiph.org/~xiphmont/demo/neil-young.html

People falling for the 24/192 "HD" hoax are usually the same people lining up to buy $150 HDMI cables...


Also, don't ever group techno and electronica with jazz, or rank them above classical music, on a scale of sonic complexity again.

I was on another forum where they took a 16-bit sound file, upsampled it to 24/96, and compared it to a 24/96 version, and there was at least one person that did pass the ABX test.

It is possible to pass an ABX test. Now, I have a 16-bit Redbook version of an album that was subsequently released in 24/96 or 176.4, and there was a HUGE difference. There might have been a difference in the amount of audio compression during the mastering process, but there are definitely sonic differences, at least in certain recordings that were originally done on analog tape. So these 24-bit conversions from analog can many times be a lot better than the 16-bit Redbook versions originally released. It's hard to tell how much of the difference is based solely on the higher resolution, because they were converted from analog using different equipment at different time periods, etc. But the bottom line is the 24-bit versions sound a LOT better, and the difference is so noticeable that a lot of people could easily hear it blindfolded.

To address your $150 HDMI cable statement, here's the scoop in a nutshell:

HDMI has video and audio.

With video, you need higher bandwidth over long distances for certain applications. If you have a 4K projector, want the best performance, and need a 50 ft run, you are going to have to get the more expensive cables. Especially if you need 18 Gbps of bandwidth: the more expensive cables will hold at least 10.2 Gbps over long cable runs, while the cheaper cables generally only go about 15 ft before they lose bandwidth. So in certain applications, you have to get the more expensive cables for video.

In the audio portion of HDMI, the more expensive cables simply have fewer noise problems, which otherwise result in cable timing issues that create the digital distortion known as jitter. The more expensive cables have less noise, creating less jitter, resulting in better audio. Now, if you don't have high-end equipment or long cable runs, then it doesn't matter, but for those that are using higher-end equipment with longer cable runs, the cable is a more important factor. Has this been proven? Yes, it has.

Now, in the audio world, people that download or rip digital audio files to their computer aren't using HDMI to go from the computer to their stereo system to listen to audio. Most computer audio systems use USB from the computer to the DAC. Is there a difference in USB cables? For some people/equipment there can be audible differences, because USB has both data and power running alongside one another, and the power creates noise which can affect the data. Some higher-end equipment running high bit and sample rates needs consistent, high bandwidth, otherwise it doesn't work. Cheap USB cables many times won't even work with some of the ultra-high-end equipment, so there are high-end DAC manufacturers that require high-end USB cables to work. Digital signals are not just 1's and 0's; they are electronic pulses, and in playing audio those pulses have to have proper timing, no errors, etc. That's why these high-end cable manufacturers crawl out of the woodwork: there are high-end equipment manufacturers, and people listening to this equipment can hear subtle differences if they have trained listening abilities.

Just to see which cable has the better transfer rate, I took two USB 2.0 cables and ran speed tests; I got better results with a cheap cable than with a more expensive one, so there was at least one test that showed a difference in USB cables for plain data transfer.

Now, if you can't hear the difference, then don't spend the money, but if you can, that's another story.
 
There isn't even enough 24/96 content available.

If you look at HD Tracks, which is the leader in 24-bit content download sales, there are only a little over 1,000 albums available, with potentially another 1,000 to 1,500 coming as they just signed on another major record label that plans on releasing that many more. With only about 2,500 albums, that's just not enough content.

They currently have over 3,500 albums available in 24/88.2 or 24/96 format. Qobuz, the French hi-res store, has over 10,000 albums in the same categories.

I think your figures are a bit out of date.

I agree it's still a small fraction of what's currently available on iTunes . . . but you know that Apple has been collecting 24/96 masters for most new releases for several years now, so they have a pretty large collection queued up if they choose to release it.
 
Blind tests are the only way you'll ever actually *know* you can tell the difference. If you've never done one, you're almost certainly relying on your knowledge of which track is compressed vs. which track is CD audio to be able to distinguish between them.

The *vast* majority of people who have done blind ABX tests can't tell the difference between an iTunes-quality AAC or MP3 track, and the same track off the CD, much less a 24/96 'HD audio' track. Even the self-proclaimed 'golden ear' crowd generally can't tell the difference. Much of the time, a really good microphone and wave-form analysis can't tell the difference.
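As an aside on reading ABX results, here's a quick sketch (hypothetical trial counts) of how likely a given score is under pure guessing:

Code:
from math import comb

# P(at least k correct out of n trials) when guessing (p = 0.5 per trial)
def p_value(k, n):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(p_value(12, 16))  # ~0.038: 12/16 correct is unlikely to be luck
print(p_value(6, 10))   # ~0.377: 6/10 correct proves nothing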

AAC does a lot of weird things to the audio. Bass gets boomier, dynamic range is compressed, and the overall sound is harsher. That's great if you can't hear all that, but I can. Please don't tell me what I can or can not hear.
 
I was on another forum where they took a 16-bit sound file, upsampled it to 24/96, and compared it to a 24/96 version, and there was at least one person that did pass the ABX test.

The test should be done taking the 24/96 version and comparing it with a direct downsample.

It is possible to pass an ABX test. Now, I have a 16-bit Redbook version of an album that was subsequently released in 24/96 or 176.4, and there was a HUGE difference. There might have been a difference in the amount of audio compression during the mastering process, but there are definitely sonic differences, at least in certain recordings that were originally done on analog tape. So these 24-bit conversions from analog can many times be a lot better than the 16-bit Redbook versions originally released. It's hard to tell how much of the difference is based solely on the higher resolution, because they were converted from analog using different equipment at different time periods, etc. But the bottom line is the 24-bit versions sound a LOT better, and the difference is so noticeable that a lot of people could easily hear it blindfolded.

And this is the reason the test should be done by downsampling the higher-quality version: to make sure you are comparing the same thing. With potentially a different master and surely different post-processing, it's obvious the two versions sound different.

Take the "higher fidelity" version, downsample it to 16/44 and compare it with the original and you'll be able to test whether the sampling (and only the sampling) actually makes a difference.

----------

AAC does a lot of weird things to the audio. Bass gets boomier, dynamic range is compressed, and the overall sound is harsher. That's great if you can't hear all that, but I can. Please don't tell me what I can or can not hear.

I think it was proved that AAC (256?) can be discerned from the uncompressed (or losslessly compressed) original. This doesn't mean 16/44 uncompressed can be discerned from 24/96 (or higher) uncompressed.
 
Unfortunately for you, Milo, Liktor is right and you are wrong.

Read up some more on digital theory, particularly how bit-depth relates to volume / dynamic range. Liktor is correct.

Could you be less vague? This is not helpful.
 
I agree it's still a small fraction of what's currently available on iTunes . . . but you know that Apple has been collecting 24/96 masters for most new releases for several years now, so they have a pretty large collection queued up if they choose to release it.

I think you're correct in that assessment. I believe most digital master tapes for albums and/or individual songs in recent years use 24-bit 48 or 96 kHz sampling rate encoding.

(A little tidbit--some people wonder why Apple would use a 24-bit 96 kHz sampling rate for the new Apple Lossless format. I believe Apple chose this to maintain compatibility with the 24-bit 96 kHz audio encoding used on Dolby TrueHD and DTS-HD Master Audio tracks on Blu-ray discs, in order to save on audio mastering costs.)
 
Too funny :D

Anywho, from a sound engineer's perspective: I'd love this. But the general public doesn't care. They care more about a higher-quality picture and will take garbage sound as long as they can carry more music. Most of the people I know can't even hear the difference between FM quality and CD quality :(

This is so true. If the iPhone were meant for high-quality audio, it probably wouldn't have a speaker. It's a general-public device, and a social sharing device at that. On Mabus's note, I'm an engineer too (probably on a much lighter level :p) and I've noticed that people don't even seem to care about mono vs. stereo, let alone bit depth or sample rate.
 
Basically iPhone 6 is a flop.

Yeah, nobody's buying it. Apple is DOOMED, I tell ya. You'd better sell short before the market opens on Monday.

----------

Apple shares dropped 3.8%, $23 billion, yesterday. We can't call the iPhone 6's effect on their shares a success even if they bounce back. We can arguably call it a flop.

----------



Yes, we are in agreement. I just wanted to add that detail.

Or we can call it market manipulation triggered by a fake scandal. I wonder if the SEC will check it out?

----------

Haha.

Besides the rumors, there are facts though. The phone bends in the front pocket. Some of its big new features are things Android users have already been enjoying. It doesn't help that iOS 8 cut off so many people's cellular activity in the wake of the iPhone 6's release.

Apple just saw a 3.8% drop in shares.

iPhone 6 kind of is a flop.

And yet, AAPL still has 3 times the market cap of Samsung Electronics. Geez, all those Apple buyers must be so...uninformed.
 
bummer. seeing as they touted "mastered for itunes" albums, i am surprised this isn't enabled.

knowing apple, they are probably waiting to market a new product or revision with this capability. they are the masters of gouge.
 
I think he should have said "relative flop." Or at least left out the "basically."

The phone is a flop relative to other Apple releases. Despite its great sales, there was a huge drop in Apple shares this week because of the phone's tendency to bend in people's front pockets, plus iOS 8's problems.

Not at all a complete failure, but Apple must be slapping their foreheads over this.

Why would they slap their foreheads over another bogus "scandal"?
 
Or we can call it market manipulation triggered by a fake scandal. I wonder if the SEC will check it out?


That's interesting. But who would be doing the manipulation? A competitor? I think #Bendgate was just caused by public overreaction to that video, which is now being called dubious by some people, as well as reports from a small number of users whose phones bent.

----------

Why would they slap their foreheads over another bogus "scandal"?

I don't know if they would. Do you? I can't tell how many phones actually bent. They say only 9 people complained. I imagine they might be wishing they'd tested them more, since it only took that small number of people to give fuel to the scandalous fire.

They certainly must feel regretful about iOS 8 cutting off cell activity for many users.
 
I think you're correct in that assessment. I believe most digital master tapes for albums and/or individual songs in recent years use 24-bit 48 or 96 kHz sampling rate encoding.

(A little tidbit--some people wonder why Apple would use a 24-bit 96 kHz sampling rate for the new Apple Lossless format. I believe Apple chose this to maintain compatibility with the 24-bit 96 kHz audio encoding used on Dolby TrueHD and DTS-HD Master Audio tracks on Blu-ray discs, in order to save on audio mastering costs.)

I was on another forum where they took a 16-bit sound file, upsampled it to 24/96, and compared it to a 24/96 version, and there was at least one person that did pass the ABX test.

It is possible to pass an ABX test. Now, I have a 16-bit Redbook version of an album that was subsequently released in 24/96 or 176.4, and there was a HUGE difference. There might have been a difference in the amount of audio compression during the mastering process, but there are definitely sonic differences, at least in certain recordings that were originally done on analog tape. So these 24-bit conversions from analog can many times be a lot better than the 16-bit Redbook versions originally released. It's hard to tell how much of the difference is based solely on the higher resolution, because they were converted from analog using different equipment at different time periods, etc. But the bottom line is the 24-bit versions sound a LOT better, and the difference is so noticeable that a lot of people could easily hear it blindfolded.

I use 1/4" tape all the time. The differences between different machines (and even the same machine set up in two different ways) will vastly overshadow any potential differences between the performance of an ADC or DAC at two different sample rates (which if the converter is designed properly should be nil to humans).

I do my tape laybacks/transfers at 88.2 kHz because some limiters, which mastering engineers like to use to increase the level of the recording (and trash the quality of the sound in the process), seem to work better at higher sample rates.

Once this is done, down-converting to 44.1 kHz leads to zero perceivable difference in sound.

Remember, I have the ability to do this without even going through a digital stage: I can go straight from the tape machine to the amplifiers driving the speakers through a simple analog volume control. I'd notice if much changed when adding in the digital stage, and the change in sound is virtually non-existent compared to the differences caused by changing the setup of the tape machine.

I guess I'm saying I simply cannot believe that any human would be able to pick the differences between two audio files of exactly the same content but resampled at two different sample rates (as long as they are both above 44.1 kHz).



To address your $150 HDMI cable statement, here's the scoop in a nutshell:

HDMI has video and audio.

With video, you need higher bandwidth over long distances for certain applications. If you have a 4K projector, want the best performance, and need a 50 ft run, you are going to have to get the more expensive cables. Especially if you need 18 Gbps of bandwidth: the more expensive cables will hold at least 10.2 Gbps over long cable runs, while the cheaper cables generally only go about 15 ft before they lose bandwidth. So in certain applications, you have to get the more expensive cables for video.

In the audio portion of HDMI, the more expensive cables simply have fewer noise problems, which otherwise result in cable timing issues that create the digital distortion known as jitter. The more expensive cables have less noise, creating less jitter, resulting in better audio. Now, if you don't have high-end equipment or long cable runs, then it doesn't matter, but for those that are using higher-end equipment with longer cable runs, the cable is a more important factor. Has this been proven? Yes, it has.

Now, in the audio world, people that download or rip digital audio files to their computer aren't using HDMI to go from the computer to their stereo system to listen to audio. Most computer audio systems use USB from the computer to the DAC. Is there a difference in USB cables? For some people/equipment there can be audible differences, because USB has both data and power running alongside one another, and the power creates noise which can affect the data. Some higher-end equipment running high bit and sample rates needs consistent, high bandwidth, otherwise it doesn't work. Cheap USB cables many times won't even work with some of the ultra-high-end equipment, so there are high-end DAC manufacturers that require high-end USB cables to work. Digital signals are not just 1's and 0's; they are electronic pulses, and in playing audio those pulses have to have proper timing, no errors, etc. That's why these high-end cable manufacturers crawl out of the woodwork: there are high-end equipment manufacturers, and people listening to this equipment can hear subtle differences if they have trained listening abilities.

Just to see which cable has the better transfer rate, I took two USB 2.0 cables and ran speed tests; I got better results with a cheap cable than with a more expensive one, so there was at least one test that showed a difference in USB cables for plain data transfer.

Now, if you can't hear the difference, then don't spend the money, but if you can, that's another story.

Some of the differences you are hearing do not exist--you're ignoring the effects of perception bias. It's a very, very powerful effect, by the way, so there is nothing wrong with being fooled; many great engineers are regularly fooled into thinking they're changing the sound of a track by adjusting an EQ, only to find the EQ isn't even patched in.

It's interesting you say digital isn't only 1s and 0s. Digital is only 1s and 0s. When those 1s and 0s are converted to a medium which runs in real time (such as audio), the timing does become very important. However, remember the clock signal is reconstructed with a PLL in the DAC. Even further to this, an asynchronous DAC will completely disregard the clock signal provided down the USB cable (which is basically the computer's bus clock, and virtually useless anyway)!

I'm not diminishing the importance of a good clock with a consistent phase angle, but as long as the cable is well shielded and capable of carrying the signal while maintaining the correct impedance presented at both ends, there is absolutely no need to even consider super-expensive audio cables. Most studios (even the really high-end ones) use very standard cabling for this stuff.

Again, more myths floating around about digital audio. The theory is actually very simple and easy (although fairly expensive) to implement in the real world.
 
Even better than that: due to dollar-cost averaging, market fluctuations actually increase your returns.
Absolutely, I was simply speaking in general. Never in my wildest dreams, that first year I invested in Apple, did I think I would make so much money, nor did my fellow investors.
 
I was on another forum where they took a 16-bit sound file, upsampled it to 24/96, and compared it to a 24/96 version, and there was at least one person that did pass the ABX test.

It is possible to pass an ABX test. Now, I have a 16-bit Redbook version of an album that was subsequently released in 24/96 or 176.4, and there was a HUGE difference. There might have been a difference in the amount of audio compression during the mastering process, but there are definitely sonic differences, at least in certain recordings that were originally done on analog tape. So these 24-bit conversions from analog can many times be a lot better than the 16-bit Redbook versions originally released. It's hard to tell how much of the difference is based solely on the higher resolution, because they were converted from analog using different equipment at different time periods, etc. But the bottom line is the 24-bit versions sound a LOT better, and the difference is so noticeable that a lot of people could easily hear it blindfolded.

To address your $150 HDMI cable statement, here's the scoop in a nutshell:

HDMI has video and audio.

With video, you need higher bandwidth over long distances for certain applications. If you have a 4K projector, want the best performance, and need a 50 ft run, you are going to have to get the more expensive cables. Especially if you need 18 Gbps of bandwidth: the more expensive cables will hold at least 10.2 Gbps over long cable runs, while the cheaper cables generally only go about 15 ft before they lose bandwidth. So in certain applications, you have to get the more expensive cables for video.

In the audio portion of HDMI, the more expensive cables simply have fewer noise problems, which otherwise result in cable timing issues that create the digital distortion known as jitter. The more expensive cables have less noise, creating less jitter, resulting in better audio. Now, if you don't have high-end equipment or long cable runs, then it doesn't matter, but for those that are using higher-end equipment with longer cable runs, the cable is a more important factor. Has this been proven? Yes, it has.

Now, in the audio world, people that download or rip digital audio files to their computer aren't using HDMI to go from the computer to their stereo system to listen to audio. Most computer audio systems use USB from the computer to the DAC. Is there a difference in USB cables? For some people/equipment there can be audible differences, because USB has both data and power running alongside one another, and the power creates noise which can affect the data. Some higher-end equipment running high bit and sample rates needs consistent, high bandwidth, otherwise it doesn't work. Cheap USB cables many times won't even work with some of the ultra-high-end equipment, so there are high-end DAC manufacturers that require high-end USB cables to work. Digital signals are not just 1's and 0's; they are electronic pulses, and in playing audio those pulses have to have proper timing, no errors, etc. That's why these high-end cable manufacturers crawl out of the woodwork: there are high-end equipment manufacturers, and people listening to this equipment can hear subtle differences if they have trained listening abilities.

Just to see which cable has the better transfer rate, I took two USB 2.0 cables and ran speed tests; I got better results with a cheap cable than with a more expensive one, so there was at least one test that showed a difference in USB cables for plain data transfer.

Now, if you can't hear the difference, then don't spend the money, but if you can, that's another story.

Did you even read that xiph.org link at all? It's from the developers of FLAC, an open source lossless audio codec, and they know what they are talking about. https://xiph.org/~xiphmont/demo/neil-young.html

[...] ultrasonic content is never a benefit, and on plenty of systems it will audibly hurt fidelity. On the systems it doesn't hurt, the cost and complexity of handling ultrasonics could have been saved, or spent on improved audible range performance instead.

Any differences heard were just distortion from trying to play back the upsampled audio. You can't magically create more detail by upsampling sounds. It's like resizing an image from 100x100 to 1000x1000 and saying it's "better" just because the numbers are bigger.

I particularly like the comparison to "spectrophiles" that claim they can see microwaves and X-rays everywhere.

I also suggest you watch the entire video here too: http://xiph.org/video/vid2.shtml -- it shows examples using actual hardware.
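The "no new detail" point is also easy to verify numerically. A sketch in Python with scipy: upsample 44.1 kHz material to 96 kHz and look for energy above the old Nyquist limit.

Code:
import numpy as np
from scipy.signal import resample_poly

fs = 44100
n = np.arange(fs)
x = np.sin(2*np.pi*15000*n/fs)   # content only below 22.05 kHz

up = resample_poly(x, 320, 147)  # "upsample" 44.1 kHz -> 96 kHz
spec = np.abs(np.fft.rfft(up))   # one second at 96 kHz: bin k = k Hz

# Energy above the old Nyquist limit, relative to the original tone
print(spec[22050:].max() / spec.max())  # ~0: upsampling created no detail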
 
Killyp, let me ask you one question: when you mastered your recordings, were they done with 16-bit encoding?

With 24-bit 96 kHz sampling rate digital recording (which a lot of studio-recorded albums are done nowadays), one thing it does is better capture acoustical musical instruments with a lot of high frequency energy such as cymbals, flutes, violins and the higher notes on a piano. Having listened to a Super Audio CD some years ago, the first thing you notice when you listen to a full orchestra recording on SACD is how natural sounding the crash of cymbals, the violin section and the flute section are--in contrast, you can sometimes hear a slightly unnatural harshness of the treble frequencies of a full orchestra on a Compact Disc recording.

Of course, the treble frequencies probably sound better on a two-channel 1/4" 15 ips analog reel-to-reel recorder, but being a physical playback format, the sound quality will start to deteriorate over time as the tape and the physical tape head wear out.

In the end, I myself would welcome a new Apple Lossless format with the same digital encoding rate used on Blu-ray audio tracks. It would certainly benefit recordings of acoustical instruments and a full symphony orchestra (but not modern electronic music, since the signal processing used in electronic music pretty much negates the benefits of the 24/96 format).
 
Killyp, let me ask you one question: when you mastered your recordings, were they done with 16-bit encoding?

With 24-bit 96 kHz sampling rate digital recording (which a lot of studio-recorded albums are done nowadays), one thing it does is better capture acoustical musical instruments with a lot of high frequency energy such as cymbals, flutes, violins and the higher notes on a piano. Having listened to a Super Audio CD some years ago, the first thing you notice when you listen to a full orchestra recording on SACD is how natural sounding the crash of cymbals, the violin section and the flute section are--in contrast, you can sometimes hear a slightly unnatural harshness of the treble frequencies of a full orchestra on a Compact Disc recording.

Again? Any comparison is worthless unless you compare exactly the same content with only the bit depth/sampling rate being different (at least if you want to determine the impact of bit-depth/sampling-rate).

It could very well be that the SACD you listened to had a superior source recording or superior post-processing as the cause of the "natural" sounds you did hear.

Take the SACD audio, downsample to 16/44 and test whether the "natural" sounds went away or are still there: according to the theory and various blind tests, if you downsample it correctly you won't be able to hear any difference.
 
Exactly.

The main difference in quality these days vs the 70s, 80s and early 90s is due to The Loudness War.

There has been no peer-reviewed scientific proof that 96/24 "HD Audio" supplies any more audio information to the listener than 44/16 when actually mixed/mastered in a way that doesn't eliminate a wide dynamic range far before it gets to the listener (and, in fact, it has been shown mathematically to be true that 44/16 or 48/16 covers the entire human range of hearing).

The *only* place that 96/24 (or higher) helps is in the studio, where it allows an engineer to use digital effects on a raw recording without hitting the "ceiling" that would cause clipping (undesired distortion). This can happen because effects and other manipulation can introduce "noise" or other artifacting (either intentionally or due to flaky design). But if the effects fit into the "data space" of 44.1/16, it doesn't matter. A properly engineered mixdown/mastering from 44.1/16 or 96/24 (or 192/24) will sound the same when it's sitting in a FLAC file or any other "lossless" medium.

There is no discernible difference on the listener end from simply encoding/outputting higher than 44/16.

It depends on the equipment and how it was originally tracked, if done digitally. You definitely have a higher dynamic range with 24/96 or 24/192 than 16/44.1. Depending on the converters used, there are other aspects that might improve the sound as well, so comparing one sample/bit rate against another actually depends on the equipment as to whether or not there is an audible difference. SACD (DSD) recordings were originally done as a means to archive older analog recordings, because they can be converted to PCM if needed. But there are recording studios that have compared analog to DSD 2x, 24/192, and 16/44.1, and DSD 2x did sound better than 24/192 or 16/44.1, which is what places like Blue Coast Records offer. But jitter is an issue that will affect whether or not 24-bit is better than 16-bit, etc. So to generalize about what sounds better really depends on a lot of factors, not just bit and sample rates.
 
It depends on the equipment and how it was originally tracked, if done digitally. You definitely have a higher dynamic range with 24/96 or 24/192 than 16/44.1.

Assuming you are talking about the final master meant for reproduction, it of course has a higher dynamic range, but that doesn't mean the human ear is able to notice it: 16 bits are enough to exceed the limits of human hearing. A quote from the already mentioned article everyone should really read, with emphasis mine:

The dynamic range of 16 bits

16 bit linear PCM has a dynamic range of 96dB according to the most common definition, which calculates dynamic range as (6*bits)dB. Many believe that 16 bit audio cannot represent arbitrary sounds quieter than -96dB. This is incorrect.

[...]

How is it possible to encode this signal, encode it with no distortion, and encode it well above the noise floor, when its peak amplitude is one third of a bit?

Part of this puzzle is solved by proper dither, which renders quantization noise independent of the input signal. By implication, this means that dithered quantization introduces no distortion, just uncorrelated noise. That in turn implies that we can encode signals of arbitrary depth, even those with peak amplitudes much smaller than one bit [12]. However, dither doesn't change the fact that once a signal sinks below the noise floor, it should effectively disappear. How is the -105dB tone still clearly audible above a -96dB noise floor?

The answer: Our -96dB noise floor figure is effectively wrong; we're using an inappropriate definition of dynamic range. (6*bits)dB gives us the RMS noise of the entire broadband signal, but each hair cell in the ear is sensitive to only a narrow fraction of the total bandwidth. As each hair cell hears only a fraction of the total noise floor energy, the noise floor at that hair cell will be much lower than the broadband figure of -96dB.

Thus, 16 bit audio can go considerably deeper than 96dB. With use of shaped dither, which moves quantization noise energy into frequencies where it's harder to hear, the effective dynamic range of 16 bit audio reaches 120dB in practice [13], more than fifteen times deeper than the 96dB claim.

120dB is greater than the difference between a mosquito somewhere in the same room and a jackhammer a foot away.... or the difference between a deserted 'soundproof' room and a sound loud enough to cause hearing damage in seconds.

16 bits is enough to store all we can hear, and will be enough forever.


Signal-to-noise ratio

It's worth mentioning briefly that the ear's S/N ratio is smaller than its absolute dynamic range. Within a given critical band, typical S/N is estimated to only be about 30dB. Relative S/N does not reach the full dynamic range even when considering widely spaced bands. This assures that linear 16 bit PCM offers higher resolution than is actually required.

It is also worth mentioning that increasing the bit depth of the audio representation from 16 to 24 bits does not increase the perceptible resolution or 'fineness' of the audio. It only increases the dynamic range, the range between the softest possible and the loudest possible sound, by lowering the noise floor. However, a 16-bit noise floor is already below what we can hear.

If instead you are talking about the bits of the source material or during post-processing, from the same article:

When does 24 bit matter?

Professionals use 24 bit samples in recording and production [14] for headroom, noise floor, and convenience reasons.

16 bits is enough to span the real hearing range with room to spare. It does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording-- risking clipping if you guess too high and adding noise if you guess too low-- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.

An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits.
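That "below one bit yet still recoverable" claim is easy to reproduce. A numpy sketch: a -105 dBFS tone rounds to pure silence without dither, but survives 16-bit quantization once TPDF dither is added, standing well above the per-bin noise in a spectrum.

Code:
import numpy as np

fs = 44100
n = np.arange(fs)
# -105 dBFS sine: peak amplitude is about a third of one 16-bit step
x = 10**(-105/20) * np.sin(2*np.pi*1000*n/fs)

lsb = 1 / 32768  # one 16-bit quantization step

# Undithered: every sample rounds to zero -- the tone simply vanishes
print(np.any(np.round(x / lsb) != 0))  # False

# TPDF dither, then quantize: the tone is preserved under the noise
dither = (np.random.rand(fs) - np.random.rand(fs)) * lsb
q = np.round((x + dither) / lsb) * lsb

spectrum = np.abs(np.fft.rfft(q)) / (fs/2)
print(round(20*np.log10(spectrum[1000]), 1))  # ~ -105 dB, clearly visible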
 
My goodness, where did you get this? The dynamic range of 16-bit CD is 96dB; 24-bit is 144dB. Even 16 bit is a big dynamic range for playback, especially when dithered properly. Obviously a bigger dynamic range is better, but 96dB didn't "kill" anything, particularly compared to earlier recording formats. The problem with early CDs was not using dither (or using bad dither), which caused low-level signals to have audible quantization errors; with proper dither the signal fades into the noise floor just like on earlier analog formats. There's a limit to how quiet a signal can be recorded and played back, just like any other recording format. But those quietest signals are exactly that, quiet. Unless you're listening to recordings at a damagingly loud level or have a dead-quiet listening environment, even things like air conditioning or the ambient noise of a room that isn't soundproofed are going to render the very lowest signals inaudible. Even with 16 bit.

And the loudness war had nothing to do with the quiet end of CD dynamic range; it only happened because people wanted their mixes to sound louder than everyone else's on the radio. If CD and digital recording contributed to that, it's because they made it possible to create masters that were way over-compressed and limited--not because they made that necessary. There are tons of great digital recordings that use the full dynamic range available.


24 bit does have advantages, no question about it. But even with that I would argue the advantages are more on the recording side, generally overkill on the distribution side.

Aren't your dynamic ranges of 96dB and 144dB THEORETICAL and NOT REALITY? The dynamic range depends on the equipment. The most I've ever seen from any ADC or DAC is about 120dB or so of dynamic range. They aren't even close to meeting 144dB, at least from what I've seen. Don't get caught up with theories, as they are simply that: THEORY. One must look at the reality of the equipment. The other thing is that these specs don't always explain which sounds better; I've run into that on many occasions, where what looked better on paper didn't sound as good as something else.

Either way, when I listen to music, it peaks around 96dB from my listening position, and it's averaging around 85dB, so I would rather have dynamic range above 96dB if you don't mind.

I also don't like heavily processed recordings. I listen to mostly jazz, classical, and various recordings with as little processing as possible. Obviously I can't get away from it completely, but I try to as much as possible.

I also listen to a LOT of content with acoustic instruments and I prefer as natural of a sound quality as possible. I also am very sensitive to any form of distortion, so it's critical that I get as good of sound as possible with what I listen to.

I've heard some 16/44.1 recordings that were actually quite good, as they did a decent job converting from analog tape, and I've heard really bad recordings and analog-to-digital conversions. Also, most digital recordings up until around 10 years ago were all 16/44.1. Most of the content more recently was probably done at 24/96 and some at 24/192, but 16-bit is still by far the most common in terms of how it was originally recorded when it comes to digital. As for conversions from analog, some of them are very good and some just completely suck, and it's all dependent on the equipment and the engineer.
 
Killyp, let me ask you one question: when you mastered your recordings, were they done with 16-bit encoding?

No, 24 bits are encoded while recording because uncorrelated noise adds up by about 3 dB with every doubling of the channel count. Better to have that sampling-error noise down at -141 dB than already at -96 dB. With the channel counts most modern projects are reaching, the noise from the A/D converters (which, although encoded at 24 bits, only really yield around 20 bits of information due to the noise from the analog stages at the front of the A/D) often stacks up to be higher than the -93 dB noise floor in a dithered 16-bit file.

24 bits are used because of a simple engineering problem faced by engineers in the studio. We then encode the audio to 16 bits for playback because 16 bits work absolutely fine.
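A sketch of the stacking arithmetic behind that (the -93/-141 dBFS floors are the dithered figures from the post above):

Code:
import math

# Uncorrelated noise power sums linearly, so doubling the track count
# raises the mixed noise floor by 10*log10(2), about 3 dB
for floor, label in ((-93.0, "16-bit"), (-141.0, "24-bit")):
    for tracks in (2, 32, 128):
        print(label, tracks, round(floor + 10*math.log10(tracks), 1))
# 128 tracks of 16-bit noise stack to ~ -72 dBFS;
# the same count at 24 bits stays down around -120 dBFS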


With 24-bit 96 kHz sampling rate digital recording (which a lot of studio-recorded albums are done nowadays), one thing it does is better capture acoustical musical instruments with a lot of high frequency energy such as cymbals, flutes, violins and the higher notes on a piano.

No it really doesn't in my experience. The benefits of 24/96 are:

I can peak in level at -30 dB while recording without worrying about the noise from the quantisation error (as this is 48 dB lower in 24 bits).
A lot of limiters, some compressors, and a lot of delays + other fancy FX behave better when they are supplied ultrasonic information. Don't ask me why; I don't design audio processing algorithms so couldn't comment.
Every time the audio is processed, the noise floor stays very low. 16 bits just means a higher noise floor, that's all.
Some converters sound worse at 96 kHz, some sound better. I will often track at whichever sample rate sounds better to my ears. Once a good-quality sample rate converting algorithm has been used (Weiss Saracon is the best to my ears, and to many others' too I believe), I cannot tell the difference between a 96 kHz and a 44.1 kHz file (when played back correctly of course, not re-sampled by the terrible re-sampling in OS X).

Having listened to a Super Audio CD some years ago, the first thing you notice when you listen to a full orchestra recording on SACD is how natural sounding the crash of cymbals, the violin section and the flute section are--in contrast, you can sometimes hear a slightly unnatural harshness of the treble frequencies of a full orchestra on a Compact Disc recording.

Listen to the Telarc recording of Stravinsky's Firebird suite--recorded on 50 kHz 18-bit converters in the late 1970s (when digital audio was still mostly a concept)--an absolutely staggering recording.

SACD is completely unnecessary and also presents a HUGE number of problems. Most SACDs are first produced in PCM format, because editing and working in DSD has (up until recently) been basically impossible. I.e., your SACD audio has actually come off a 24/44.1 or whatever-other-sample-rate file.

Of course, the treble frequencies probably sound better on a two-channel 1/4" 15 ips analog reel-to-reel recorder, but being a physical playback format, the sound quality will start to deteriorate over time as the tape and the physical tape head wear out.

Tape is not transparent. I use it because it changes the sound in a way I (usually) like. Once it's come off the tape back into digital, it sounds identical (if a good set of A/D and D/A converters is used), 'even' at 44.1 kHz.

In the end, I myself would welcome a new Apple Lossless format with the same digital encoding rate used on Blu-ray audio tracks. It would certainly benefit recordings of acoustical instruments and a full symphony orchestra (but not modern electronic music, since the signal processing used in electronic music pretty much negates the benefits of the 24/96 format).

It will present speaker and amplifier designers a whole host of problems they don't currently have to worry about, with no actual benefit in sound.

The problem is how the current mediums are used, not the nature of the mediums themselves. I could play you CDs on my system which would absolutely blow you away with their fidelity.
 