Uh, such DACs have existed for years. And already exist with Lightning, too.

Yeah, examples. (1) http://v-moda.com/vamp/ (2) http://www.moon-audio.com/adl-x1-24-192-usb-dac-headphone-amplifier.html

Amplification and impedance both play a large role. Headphones sounding different with each device because of different amps and impedances was extremely annoying.

Yeah, I am curious if they'll make something beyond 50 ohm in their Beats line if they start getting heavily custom in the audio chain.
 
This kind of thing right here is the real Apple.

Sadly, these are the kinds of stories that get swept under the rug when people complain about how Apple are more concerned with how things look rather than how they function.
 
"...and create a new 24-bit capable version of its In-Ear headphones, but those reports have not yet panned out."

All headphones and speakers are analog. The digital-to-analog conversion has to occur before amplification, so there's no such thing as 24-bit capable headphones unless you're talking about digitally transmitting the data to the headphone via Lightning or Bluetooth. This is not a limitation of technology... it is a limitation of human physiology, unless you have a digital input implanted into your brain. So "digital" headphones simply convert the audio to analog and amplify the signal instead of your device doing it. There's no gain for the end user in having the headphone do the conversion rather than your iPhone, unless your headphone cable is 100ft long.

It doesn't matter; most people would not be able to tell the difference between 12-, 16-, or 24-bit audio sources, or between 44.1k, 48k, 88.2k, or 96k. This is not like the picture quality of SD vs. HD vs. 4k vs. 8k.
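
For rough numbers, here's a quick back-of-the-envelope sketch of the theory (the ~6 dB per bit rule and the Nyquist limit), not a listening test:

Code:
# Theoretical dynamic range per bit depth (~6.02 dB per bit) and the
# highest frequency each common sample rate can capture (Nyquist = fs/2).
for bits in (12, 16, 24):
    print(f"{bits}-bit: ~{6.02 * bits:.0f} dB of dynamic range")
for fs_khz in (44.1, 48.0, 88.2, 96.0):
    print(f"{fs_khz} kHz: captures up to {fs_khz / 2:.2f} kHz")

16-bit already gives roughly 96 dB, which is about the gap between a quiet room and hearing damage.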

This is more like properly encoded H.264 vs. H.265 vs. Apple ProRes with the same pixel count and framerate. You're not going to be able to tell the difference unless you've dropped a few hundred grand into a home theater setup and know what to look for. And even then there would be no diminished entertainment quality between the various formats.

My point being that AAC 256kbps is not the weak link in your audio chain. It's your headphones or speakers. Even 192kbps lossy compression far exceeds the reproduction quality of most headphones or speaker systems, and 99.99% of people would never hear the difference unless switching between the same source at two different qualities in a properly designed recording studio.
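
If you doubt it, it's easy to run a quick blind test on yourself. A minimal sketch (the file names are placeholders, and it assumes macOS's afplay command; swap in any command-line player):

Code:
# Minimal blind A/B test: plays the lossy or lossless file at random,
# asks for a guess, and tallies the score. ~5/10 correct is chance level.
import random, subprocess

FILES = {"lossy": "track_aac.m4a", "lossless": "track.wav"}  # placeholders
score, trials = 0, 10
for _ in range(trials):
    truth = random.choice(list(FILES))
    subprocess.run(["afplay", FILES[truth]])   # blocks until playback ends
    guess = input("lossy or lossless? ").strip()
    score += (guess == truth)
print(f"{score}/{trials} correct")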

I can only hear the difference on my high-end system between 256 and 320 joint-stereo MP3 in the most dissonant, multi-harmonic material, in totally quiet conditions, at not too high a volume (that's not even the volume that material is supposed to be listened to :) ), and I can't even tell you which one I like best (I just know there is a minute difference).

AAC at 256kbps is better than 320kbps joint-stereo MP3, so I'm pretty sure I can't tell the difference for anything at that level in any listening conditions. Maybe some under-25-year-old guy with bat ears, perfect speakers and perfect amplification in a quiet, perfect room could, but that guy is 1 in 25 million, probably.
 
I omitted studies and figures for the sake of brevity.

96k is not what most professionals use for multi-track recording either. For the master mix from an analog console, 96k 24-bit is king, but the overhead and production limitations outweigh the slight quality improvement. Even film, which has higher audio quality standards than music, does not record above 48k 24-bit.


Amplification and impedance both play a large role. Headphones sounding different with each device because of different amps and impedances was extremely annoying.

You are seriously misinformed. Top notch studios and movies are using higher standards. A quick Google search will bear this out.
 
This is great news. For those who don't know, here's the Sony Oxford console. :cool:
 

[Attachment: studioC.jpg, 126.8 KB]
I'm wondering if Apple is going to be the ones (with their massive ties to labels) to push 3D audio into the mainstream. It's an interesting thought...
 
You are seriously misinformed. Top notch studios and movies are using higher standards. A quick Google search will bear this out.

No. The majority of album and film multitracks are recorded at 48k 24-bit, though many music multitracks are recorded at 44.1k.

That's not to say that no one records at 96k or 88.2k, but most do not. Yes, all studios have the capability to record at 96k or even 192k, but most projects don't use it because the system requirements inhibit the scale of the production.
 
I can't help but think that this hire is not just to improve the Macs' and iDevices' DACs but something bigger. Perhaps he was hired as a consultant on the engineering of Beats hardware?

"...and create a new 24-bit capable version of its In-Ear headphones, but those reports have not yet panned out."

All headphones and speakers are analog. The digital-to-analog conversion has to occur before amplification, so there's no such thing as 24-bit capable headphones unless you're talking about digitally transmitting the data to the headphone via Lightning or Bluetooth. This is not a limitation of technology... it is a limitation of human physiology, unless you have a digital input implanted into your brain. So "digital" headphones simply convert the audio to analog and amplify the signal instead of your device doing it. There's no gain for the end user in having the headphone do the conversion rather than your iPhone, unless your headphone cable is 100ft long.

It doesn't matter; most people would not be able to tell the difference between 12-, 16-, or 24-bit audio sources, or between 44.1k, 48k, 88.2k, or 96k. This is not like the picture quality of SD vs. HD vs. 4k vs. 8k.

This is more like properly encoded H.264 vs. H.265 vs. Apple ProRes with the same pixel count and framerate. You're not going to be able to tell the difference unless you've dropped a few hundred grand into a home theater setup and know what to look for. And even then there would be no diminished entertainment quality between the various formats.

My point being that AAC 256kbps is not the weak link in your audio chain. It's your headphones or speakers. Even 192kbps lossy compression far exceeds the reproduction quality of most headphones or speaker systems, and 99.99% of people would never hear the difference unless switching between the same source at two different qualities in a properly designed recording studio.

This guy is part of the elite high end pro field. Not someone I would expect to be focusing on how to make iPad speakers sound better. I would hope the pro market is in for some future surprises.
 
You are seriously misinformed. Top notch studios and movies are using higher standards. A quick Google search will bear this out.

Some film release formats support higher than 48k but that doesn't mean they're actually creating content in that format. A few studios recording music may use 96 or 192 but it's probably not that common - as with the gear companies, it's more for the sake of bragging rights than actual sound quality.
 
I can't help but think that this hire is not just to improve the Macs' and iDevices' DACs but something bigger. Perhaps he was hired as a consultant on the engineering of Beats hardware?

I agree. DAC technology has become a commodity since the last major obstacles were overcome in the 1990s. You can now get an insanely good stereo DAC for less than $10; at the volumes Apple purchases, they'd probably cost half that. If Apple wanted to integrate a top-quality DAC into their own silicon, they could license a design for pocket change.

It's more likely something very central to Apple's product pipeline if they want to hire at this level. Perhaps he's hired to focus on Apple's own speech recognition silicon to bring Siri everywhere, from Apple TV to iMac, from iPad to Apple Watch.
 
What's with Apple and hiring people that have come out of Oxford?

That's just one place... there are others, I'm sure... but Apple doesn't seem to want to explore them.
 
A top notch studio will ask what sample rate you want to record at. They won't force anyone to use 96k or above. A lot of people choose 48k/24-bit, and blumpy is bang on. There's no such thing as 24-bit headphones.

Now they need to hire Paul Frindle and we'll see all Logic (and garageband) plug-ins fly.
 
There are benefits to DSD

I hope not, there are no real benefits over PCM and a whole lot of headaches.

Read this from a DAC maker who understands what they are doing and puts their R&D and investment on the line:

"One of the approaches we decided that would help us move a step closer to the original sound quality was to provide native DSD playback without having to convert it into PCM. This was indeed a difficult task. In order to provide native DSD playback we had to make a bold move which was to start from scratch.

We have asked ourselves: Does the DAC support Native DSD?
Will it allow for the Dual DAC setup? What else is needed for the perfect Native DSD support?

In order to answer these three questions, we have decided to use the Cirrus Logic CS4398 chip in the Dual DAC setup with an added exclusive XMOS chip that will allow for Native DSD support.

Through countless hours of testing, we were able to achieve Native DSD playback by having the main CPU process the data, send it to the exclusive XMOS chip, then through the DAC to deliver the sound.


It did not take us long to figure out that Dual DAC and balanced output resulted in the best sound. During the development we were able to move a step closer towards the original sound by then creating a balanced output terminal at the end."
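
For scale, here are the raw per-channel data rates involved (just arithmetic, not a claim about audible quality):

Code:
# DSD64 is 1 bit at 64 x 44.1 kHz; PCM is bit depth x sample rate.
dsd64 = 1 * 64 * 44_100                   # 2.8224 Mbit/s per channel
pcm_16_441 = 16 * 44_100                  # 0.7056 Mbit/s per channel
pcm_24_96 = 24 * 96_000                   # 2.304  Mbit/s per channel
for name, rate in [("DSD64", dsd64), ("PCM 16/44.1", pcm_16_441),
                   ("PCM 24/96", pcm_24_96)]:
    print(f"{name}: {rate / 1e6:.4f} Mbit/s")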
 
A little late, Apple, but I know you need selling points for later products. Beats is obviously not strong enough in the digital audio quality area.
 
It's long, but there aren't a lot of posts on this topic

I think we're asking all the wrong questions. (This is a pretty long and opinionated post, but whatever, I think it'll be worth your time if you're curious at all about digital audio.)

Native DSD playback? 24 bit audio? anything above 48k for playback? Yikes! I mean, come on!

First of all, uncompressed CD-quality audio (PCM) at 16-bit, 44.1 kHz is dope! (really freaking good). On a decent system, I'd love to play back PCM audio, but even I'll admit that it's difficult to differentiate it from a 320 kbps MP3... or even a 256 AAC for that matter. Once you get down to 128 kbps MP3 though, oh, forget it, I don't wanna listen to it - I'd be able to pick a 128 out in a blind test every time, guaranteed. Sure, the average user may not know exactly how to quantify or objectively describe the difference in actual audio terms, but I guarantee they will perceive an obvious difference if they were presented a 128 MP3 and uncompressed PCM audio side by side. Remember what the old MySpace music used to sound like??? geeez

Why is 16-bit, 44.1 kHz great for playback? Because the music we consume is mastered to fit within a 16-bit dynamic range and sampled at 44.1 kHz, which means the highest frequency we humans can hear is sampled at least twice per cycle - good enough, because barely anything musical lives up there anyway.
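
(Don't take my word on the "sampled at least twice" part - here's a tiny numpy sketch showing why nothing above half the sample rate can even exist in the file: a tone above Nyquist produces the exact same samples as one below it.)

Code:
# A 25 kHz tone sampled at 44.1 kHz is indistinguishable from its
# 19.1 kHz alias: both produce identical sample values.
import numpy as np

fs = 44_100
n = np.arange(64)
above_nyquist = np.sin(2 * np.pi * 25_000 * n / fs)
alias = -np.sin(2 * np.pi * (fs - 25_000) * n / fs)   # 19.1 kHz, inverted
print(np.allclose(above_nyquist, alias))              # True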

What's sad is that we don't even use the full 16-bit dynamic range. We squash the audio (via clever compression) so that it has barely any dynamic range at all, and in return we get LOUDNESS. Yeah, that's right: so that when you're sitting on the subway or running on a treadmill, you can press 'play' and set your iPod volume once to drown out the world around you. No need to worry about the loud part coming up, or the quiet part at the start of your favorite songs, because... there is none. Your music was designed that way and you don't even know it (you shouldn't have to).

It's not commonplace for people to sit in front of speakers to consume music anymore (it saddens me dearly, because it's really a magical experience when you do). It's earbud this, Beats by whatever, or Bose that! Guess what? Put a better DAC in front of whatever you're listening through and it will sound better. Set up a pair of speakers (2) and seat yourself in the center and it gets even better - rediscover the three-dimensionality hidden within all the music you've listened to your whole life but never knew was there.

I've owned three different iPhone 4's and the DAC sounded different in every single one (from different production runs over time, obviously). Some sounded better than others. I'd be psyched if Apple put a premium DAC in a new iPhone rather than the run-of-the-mill, less-than-a-dollar DAC that's probably currently in there (not that it's bad, but you gotta understand there is better stuff out there). That would be a good first step.

Here is what I think we should be curious about: How is Apple going to redefine music consumption? Or how are they going to bring back what we've lost and have us decide we like it more? Are they going to bring dynamic range back so that it's worth having more bit depth? If so, HOW are they going to do that? What could they possibly do to compel their customers to prefer listening in a new environment vs. their Beats or built-in speakers? I sure as hell don't know, but holy ***** is it exciting to know that there is a powerful enough force out there that can legitimately bring something new to the table. The audiophile in me cannot wait to see what they cook up!!

I've got a feeling it's going to have something to do with making music interactive... both audibly AND VISUALLY. A new generation of art is coming, that's what I think. It's going to be stimulating and satisfying, and the masses don't even know they want it yet. Good luck, Apple!
 
It's not a hearing problem, it's a listening problem.

"...and create a new 24-bit capable version of its In-Ear headphones, but those reports have not yet panned out."

All headphones and speakers are analog. The digital-to-analog conversion has to occur before amplification, so there's no such thing as 24-bit capable headphones unless you're talking about digitally transmitting the data to the headphone via Lightning or Bluetooth. This is not a limitation of technology... it is a limitation of human physiology, unless you have a digital input implanted into your brain. So "digital" headphones simply convert the audio to analog and amplify the signal instead of your device doing it. There's no gain for the end user in having the headphone do the conversion rather than your iPhone, unless your headphone cable is 100ft long.

It doesn't matter; most people would not be able to tell the difference between 12-, 16-, or 24-bit audio sources, or between 44.1k, 48k, 88.2k, or 96k. This is not like the picture quality of SD vs. HD vs. 4k vs. 8k.

This is more like properly encoded H.264 vs. H.265 vs. Apple ProRes with the same pixel count and framerate. You're not going to be able to tell the difference unless you've dropped a few hundred grand into a home theater setup and know what to look for. And even then there would be no diminished entertainment quality between the various formats.

My point being that AAC 256kbps is not the weak link in your audio chain. It's your headphones or speakers. Even 192kbps lossy compression far exceeds the reproduction quality of most headphones or speaker systems, and 99.99% of people would never hear the difference unless switching between the same source at two different qualities in a properly designed recording studio.

Yes, I agree for the most part, but regarding the video quality comparison: recording a concert grand with an iPhone or a camcorder will not yield a satisfactory result compared to using even a single high-quality mono condenser microphone through an A/D interface into software that records 24-bit/44.1 kHz or 24-bit/88.2 kHz audio.

But let's have a race in the other direction. Most people's listening environments are awful, so I often wonder at the point of mixing for anything but earbuds and headphones. Think of the old 640x480 video frame size and the CRT monitors - find one on YouTube if you can. Then look at 720p, then 1080p, and you get my drift (ha!). Sure, most people can't tell the difference between the Adobe and the sRGB color space, or an iPhone JPEG from a RAW image from a full-frame DSLR, but it still matters to us creatives. Likewise, for tracking: recording rap with a Neumann U87 at 16/44.1 is overkill, but 24/44.1 is barely adequate for solo piano works. Likewise, most people can't tell Bach from Bartok, but I think it's important for people (and device quality) to move onward and upward.

It's not a hearing problem, it's a listening problem.

What I want to know is, where is Apogee in all of this? They made a proprietary system with Symphony I/O, etc. They make great converters. Maybe it's because Apple is moving in the opposite direction from creative professionals. But the "Mastered for iTunes" droplet uses a stunning sample rate conversion process. Too bad it won't do 320kbps. ;)
 
What's sad is that we don't even use the full 16-bit dynamic range. We squash the audio (via clever compression) so that it has barely any dynamic range at all, and in return we get LOUDNESS. Yeah, that's right: so that when you're sitting on the subway or running on a treadmill, you can press 'play' and set your iPod volume once to drown out the world around you. No need to worry about the loud part coming up, or the quiet part at the start of your favorite songs, because... there is none. Your music was designed that way and you don't even know it (you shouldn't have to).

That's not entirely true...

First, it depends a lot on genre... If you're listening mostly to pop music, yeah, you're getting awful compression. If you're listening mostly to jazz or blues, it's pretty rare for albums to be over-compressed.
Also, you don't need a huge dynamic range for all kinds of music. For instance, the original vinyl of The Dark Side of the Moon only had a max dynamic range around 12/20 (according to the Dynamic Range Database), yet that didn't prevent the album from being a monument of modern music... Same for The Köln Concert by Keith Jarrett or The Bridge by Sonny Rollins: the dynamic range is only average (10-12/20), yet the albums are good.

But you also have a lot of difference between editions. For instance, Relapse by Eminem has a much better DR in its vinyl edition (12/20) than in its CD edition (7/20). That's usually the case for vinyl editions in dance, hip-hop or techno, because the records are used by DJs on dancefloors: there, the sound is already as loud as legally possible and you need dynamic range to give the feeling of energy... For other genres, it's because vinyl has become an audiophile product - digital is marketed to people who listen to music on the treadmill or in the subway on cheap headphones, while vinyl is marketed to people who will invest thousands in hifi...

So if you like the full range of dynamics, try some different music genres, avoid radio edits, and prefer vinyl rips to CD rips...
 
Likewise, for tracking: recording rap with a Neumann U87 at 16/44.1 is overkill, but 24/44.1 is barely adequate for solo piano works. Likewise, most people can't tell Bach from Bartok, but I think it's important for people (and device quality) to move onward and upward.

Not really sure you quite appreciate the tracking process with these kinds of music. Most classical engineers I know are working at 44.1 kHz because it covers all the necessary information.

47s and 67s are fairly common amongst the higher-end hip hop productions, forget the 'lowly' 87.

Higher sample rates are more commonplace in the 'production' world for two reasons. 1) A surprising number of engineers don't understand how PCM audio works, so they think that anything above 44.1 kHz is somehow 'higher resolution', and 2) there are actually some benefits to capturing audio above 20 kHz if you want to get creative (particularly when it comes to pitch-shifting, slowing down and some kinds of compression).

44.1 kHz is all that is needed for 'full quality' (as in zero-loss) playback of digital audio in the home on properly designed digital gear.
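
To make reason 2 concrete, a sketch of the arithmetic (nothing more than halving frequencies):

Code:
# Half-speed playback halves every frequency, so ultrasonic content
# captured at 96 kHz drops back into the audible band. Content that was
# never captured (above 22.05 kHz at a 44.1 kHz session) is simply gone.
for f in (30_000, 40_000):          # ultrasonic partials in Hz
    print(f"{f} Hz captured at 96 kHz -> {f // 2} Hz at half speed")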
 
nice

That's not entirely true...

First, it depends a lot on genre... If you're listening mostly to pop music, yeah, you're getting awful compression. If you're listening mostly to jazz or blues, it's pretty rare for albums to be over-compressed.
Also, you don't need a huge dynamic range for all kinds of music. For instance, the original vinyl of The Dark Side of the Moon only had a max dynamic range around 12/20 (according to the Dynamic Range Database), yet that didn't prevent the album from being a monument of modern music...

^You're right - it's not entirely true, for the reason you specified: it's genre-dependent. But let's look at what Apple has been pushing lately: The new U2 album? Working with Trent Reznor of NIN? Acquisition of Beats by Dre? The music coming from these people lately has screamed loud more often than not - but yeah, I was speaking on behalf of pop music. Not discrediting listeners of other musical genres, as I'd have to be real ignorant to do so - but hey, pop is called pop for a reason, whether we like it or not. It's 'popular'; it's mainstream (kinda like Apple). I was just saying that if we can get mainstream listeners of popular music enthused about listening in better environments suited to a larger dynamic range, it would be pretty cool.

Not really sure you quite appreciate the tracking process with these kinds of music. Most classical engineers I know are working at 44.1 kHz because it covers all the necessary information.
...

Higher sample rates are more commonplace in the 'production' world for two reasons. 1) A surprising number of engineers don't understand how PCM audio works, so they think that anything above 44.1 kHz is somehow 'higher resolution', and 2) there are actually some benefits to capturing audio above 20 kHz if you want to get creative (particularly when it comes to pitch-shifting, slowing down and some kinds of compression).

44.1 kHz is all that is needed for 'full quality' (as in zero-loss) playback of digital audio in the home on properly designed digital gear.

^I couldn't agree with you more here! I mentioned 16-bit, 44.1 kHz PCM audio as ideal for playback. HOWEVER, there are surely massive benefits to tracking and mixing at higher sample rates in the studio before it gets to the consumer. You mentioned pitch-shifting and slowing down - oh, for sure! If you've got more data to work with, the end result is going to be more pleasing! It's all about the slope of that lowpass filter in the DAC: the broader the Q (fewer dB per octave), the more sampling data you'll get. It affects everything, in my opinion. All plugins, especially time-based effects such as reverbs and even linear pitch correction, will sound better if you have that data to work with. It's all data at the end of the day, and the more accurately your computer/DAW can calculate those last few decimal places, the more fidelity you'll have access to.

I track at 88.2 for general music production, then downsample to 44.1 PCM before even thinking about making MP3s (it especially helps if you do analog summing). If I'm doing a film score, I'll track and mix at 96k so that I can halve my end result to 48k (the standard for film and audio for visual media). My converters go up to 192k, but beyond 96k I don't hear a difference in my processing anymore, and the increased file size plus intensive CPU load just isn't worth dealing with.
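
(If you're curious what that downsample step looks like outside a DAW, here's a rough sketch - assuming the soundfile and scipy libraries, and "mix_882.wav" is a made-up file name; a real bounce would also dither before truncating to 16-bit:)

Code:
# Sketch of an 88.2 kHz -> 44.1 kHz bounce using scipy's polyphase
# resampler. The exact 2:1 ratio is why 88.2 -> 44.1 and 96 -> 48 are
# such clean pairings.
import soundfile as sf
from scipy.signal import resample_poly

audio, rate = sf.read("mix_882.wav")               # hypothetical bounce
assert rate == 88_200
down = resample_poly(audio, up=1, down=2, axis=0)  # clean 2:1 decimation
sf.write("mix_441.wav", down, 44_100, subtype="PCM_16")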

For a video analogy: What do you think will have higher perceived resolution on YouTube?
The video shot at 4k, then processed/converted down to 1080p?
or the video shot at 1080p, then processed/converted again to 1080p?

The funny thing is, higher perceived resolution isn't always 'better' or 'good' for everything. But you must admit, it's what's flashy and current right now- like Apple.
 
^You're right - it's not entirely true, for the reason you specified: it's genre-dependent. But let's look at what Apple has been pushing lately: The new U2 album? Working with Trent Reznor of NIN?

Actually, Reznor took a stand about the loudness war:
http://www.synthtopia.com/content/2...-loud-audiophile-masters-of-hesitation-marks/

I was just saying that if we can get mainstream listeners of popular music enthused about listening in better environments suited to a larger dynamic range, it would be pretty cool.

The problem is that they do not and cannot listen in a better environment... If you listen to music at the gym or in the subway, you do want some compression, otherwise you won't hear half of it. It's better with in-ear headphones, because of the isolation, but you do not want too much isolation either...

The ideal solution would be to compress on the user end. The label should issue music with compression set to benefit the music (at a reasonable level) and not to be as loud as possible. And then the player (iTunes, iPhone...) should let the user choose the compression level they like...
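
A toy sketch of what user-side compression could look like (a static gain curve with a user-chosen ratio; a real player would add attack/release smoothing and lookahead):

Code:
# Squash everything above a threshold by a user-selected ratio.
import numpy as np

def compress(x, threshold_db=-20.0, ratio=4.0):
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1 - 1 / ratio)    # only the overshoot is reduced
    return x * 10 ** (gain_db / 20)

# same track, two listening situations:
# subway = compress(audio, ratio=8.0)   # heavy squash, constant loudness
# home   = compress(audio, ratio=1.5)   # dynamics mostly preserved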


For a video analogy: What do you think will have higher perceived resolution on YouTube?
The video shot at 4k, then processed/converted down to 1080p?
or the video shot at 1080p, then processed/converted again to 1080p?

I would rather have the 4k delivered at 4k... I have the resolution on my monitor, I have a cable with the bandwidth, and you can really see the difference in video... I will probably buy a 4k TV when they're more affordable (my regret being that it will have to be an LCD, when I currently have a plasma, which is way better).

The funny thing is, higher perceived resolution isn't always 'better' or 'good' for everything.

For video, it is... I cannot watch my DVDs anymore; the quality is just awful once you have a big TV and have gotten used to 1080p...
The problem with high resolution is that it shows the flaws. You could get away with bad special effects and makeup on DVD. It's harder with 1080p. It will be very hard with 4k... Also, some producers pulled a real scam by just running an upscaler on their DVD masters; some of the first Blu-rays were awful.
Of course, there are certain flaws that are positive - for instance, 1080p and 4k are great for watching some black & white movies (for instance, early Jim Jarmusch), because you can see the grain of the film, something I'm sensitive to as a photographer who used Tri-X in his teens. DVD completely destroyed the texture of these movies, which was there by choice.
 
sweet


Geez, I'll admit I was out of line by lumping Reznor in there with everyone else in the loudness war; he took a stand by releasing a separate version of that album for the niche audiophile audience - very admirable! However, a decision was still made to release the standard version of the album within the realm of today's mastering standards (loud) <-- and they made that decision because they have to compete. Understandable, and I don't blame 'em one bit. How do we change that, though?

The ideal solution would be to compress on the user end. The label should issue music with compression set to benefit the music (at a reasonable level) and not to be as loud as possible. And then the player (iTunes, iPhone...) should let the user choose the compression level they like...

I very much like the sound of that idea! Perhaps it's something along those lines that these big names have been brought in to work on. Again, very cool idea - broadband internet is plentiful and easily accessible nowadays, so I don't see why content delivery of that nature shouldn't be feasible. (Unless the ISPs step in, which they might, and that would suck, but that's a whole separate can of worms irrelevant to this thread.)

I would rather have the 4k delivered at 4k... I have the resolution on my monitor, I have a cable with the bandwidth, and you can really see the difference in video... I will probably buy a 4k TV when they're more affordable (my regret being that it will have to be an LCD, when I currently have a plasma, which is way better).

I would rather have the PCM audio master delivered as the PCM master - but that wasn't the point.

Yeah, the resolution is quite pleasant if you can simply leave it in 4k - I think you can do this with Vimeo, and perhaps even YouTube now (I'm sure you know way more about it than I do). But it still comes down to whether you have the hardware to play it back natively. I have a 2k display and love it, for sure. But, great, you wanna invest in a 4k TV? Well, I wanna buy an amplifier with endless headroom... connected to a solid DAC... and a solid pair of speakers tuned to my room. Most people don't. As humans we're visually dominant; it's easier to perceive the resolution of light than the resolution of acoustical pressure.


For video, it is... I cannot watch my DVDs anymore; the quality is just awful once you have a big TV and have gotten used to 1080p...
The problem with high resolution is that it shows the flaws. You could get away with bad special effects and makeup on DVD. It's harder with 1080p. It will be very hard with 4k...

You've gotten used to 1080p and can't go back to 480p DVD; I've gotten used to Red Book CD-quality PCM and can't go back to 128 kbps MP3. I think my audio/video analogy still holds here. I understand what you're saying, though.

Of course, there are certain flaws that are positive - for instance, 1080p and 4k are great for watching some black & white movies (for instance, early Jim Jarmusch), because you can see the grain of the film, something I'm sensitive to as a photographer who used Tri-X in his teens. DVD completely destroyed the texture of these movies, which was there by choice.

I actually think this example somewhat solidifies my point further: "Higher perceived resolution isn't always 'better' or 'good' for everything."
Here's another perspective:
^You like the look of the grain from that film? You find it 'pleasing' that things aren't crystal clear? Some people find audio more 'pleasing' when it's recorded to 2" magnetic tape.

You're a photographer, right? I'm sure you have the traditional film/light thing down (props, seriously, that's an art), but you probably also shoot with a DSLR with a high megapixel-count sensor, right? Probably paid a pretty penny for it too (as you should if you care about your art). But if the photos you take are ultimately going to be compressed to JPEGs to be viewable on the internet, why have such a high quality sensor in that DSLR body? Why not just have the camera body do the JPEG conversion for you, sight unseen? You know why: because RAW gives you more fidelity and flexibility in post. Your JPEGs will look better in the end because you originally shot with wayyy more res in RAW image data from the sensor. << PCM audio is the RAW in my world. Give us the RAW! lol

Haha, not picking on you, Lictor, I honestly think this is a great conversation and I appreciate everyone indulging me here!
 
Geez, I'll admit I was out of line by lumping Reznor in there with everyone else in the loudness war; he took a stand by releasing a separate version of that album for the niche audiophile audience

Well, I did some more research, and the subject is controversial, since the dynamic range on his "audiophile" album is not that great - it's better than the CD version but not great. So, well, it remains to be seen what he will do with Apple. He's certainly aware of the issue, but it's not certain he will act on it.

I very much like the sound of that idea! Perhaps it's something along those lines that these big names have been brought in to work on. Again, very cool idea - broadband internet is plentiful and easily accessible nowadays, so I don't see why content delivery of that nature shouldn't be feasible.

Actually, we're talking about dynamic range compression here, not data compression. Dynamic range compression has little impact on file size. So it costs nothing, except a cultural change. Moreover, players have enough computing power that doing on-the-fly compression would not be a problem...

But it still comes down to whether you have the hardware to play it back natively.

With retina displays, a lot of people actually have the resolution to play 4k on their computers and tablets... But yes, it's still awfully expensive for TVs, and you have to "downgrade" to LCD (which I still feel is inferior to plasma).
I do have some old but semi-decent hifi equipment too (a Denon home cinema amplifier with 5 Acoustic Energy AE1s). There is no point in having a big TV without the sound to go with it ;)

Most people don't. As humans we're visually dominant; it's easier to perceive the resolution of light than the resolution of acoustical pressure.

I think it's also because the resolution for video is barely getting to the point where we don't see the pixels anymore. Fill your visual field with your TV (and yes, that means sitting very close to it) and you will see pixels, lack of resolution and compression artifacts.

But, like with audio, some people actually don't see the flaws. Like, some people like turning on the digital gadgets that totally destroy image quality (nuclear colors, the awful 600Hz modes...). Just like some people don't hear the difference between 128kbps and CD...

I actually think this example somewhat solidifies my point further: "Higher perceived resolution isn't always 'better' or 'good' for everything."
Here's another perspective:
^You like the look of the grain from that film? You find it 'pleasing' that things aren't crystal clear? Some people find audio more 'pleasing' when it's recorded to 2" magnetic tape.

And some people like tube amplifiers, even though they're not hifi... I listen to a lot of blues and, likewise, some artists can produce a lot of emotion from instruments built from scraps (literally, for the early artists) and amplifiers that distort and saturate...

Actually, I worked on video compression during the move from MPEG to wavelet compression. The lesson is that wavelet was so successful because its compression artifacts feel a lot more natural than MPEG's. MPEG artifacts are blocky - and our brain is optimized to detect lines and patterns and focus on them. It's very bad when the artifacts are more interesting than the content ;) On the other hand, wavelet artifacts produce fuzziness and blurred lines - and our brain is tuned to reconstruct details from blurry images.

About film grain, the strange thing is that film grain actually makes the image feel sharper. If you have an image that is soft and you introduce some simulated film grain, it will feel crisper. That's because film grain induces the feeling of higher local contrast, and our brain loves local contrast and associates it with a high level of detail. But it doesn't work so well with digital noise (too regular, too colorful), just like, in audio, analog saturation, especially with tube amps, can feel good whereas digital saturation just feels awful.
That's why it's good to see the grain on films where the artist actually intended the film to be grainy, because it gives a texture and gritty feeling to the image.

You're a photographer, right? I'm sure you have the traditional film/light thing down (props, seriously, that's an art), but you probably also shoot with a DSLR with a high megapixel-count sensor, right?

It's a hobby; it's very hard to live off photography... I mostly shoot with a 40mp DSLR. But I also shoot with my iPhone. One reason is that you don't always feel like carrying 4kg of camera and lenses; another is that different tools make you work differently - just like most guitarists have several guitars.

but if the photos you take are ultimately going to be compressed to JPEGs to be viewable on the internet, why have such a high quality sensor in that DSLR body?

I actually print my photos, so they don't all go only to the web... I have several 60-100cm wide prints on fine art paper or plexiglass at home. People should really print their photos; it's just not the same as looking at them on a screen...
But, true, you don't need 40mp to print 100cm (you can get decent 60cm from even a 10mp).

But a camera is not just a sensor... It's also the ergonomics - with a pro DSLR, you mostly have one button, one action: no need to go through menus, and you can operate the camera without leaving the viewfinder. Likewise, shooting through a viewfinder is not the same act as looking at an LCD at arm's length. You also get the benefit of an autofocus that works in extreme low light, and close to no lag when taking the photo...
Also, a sensor is not just resolution. It's also dynamic range, low-light performance - even when publishing on the web, you have a real difference between a tiny sensor like on the iPhone (even if Apple did an excellent job with it) and a 24x36 one.

You know why: because RAW gives you more fidelity and flexibility in post. Your JPEGs will look better in the end because you originally shot with wayyy more res in RAW image data from the sensor.

Actually, you don't gain much in resolution; JPEG Fine on a DSLR is not very compressed and you usually won't see any artifacts.
The gain from RAW is that you get choices, and creativity is about choices. The sensor captures a 14-bit image; JPEG is 8-bit. So when you're using JPEG, a crude algorithm decides how to fit those 14 bits into 8. With RAW, I'm the one who decides exactly how to do it.
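
To make that concrete, here's a toy sketch of the choice (a straight linear squeeze versus a gamma curve, which is just one possible "how"):

Code:
# Mapping 14-bit sensor values to 8 bits: linear crushes the shadows,
# while a gamma curve spends more of the 256 output levels on them.
import numpy as np

raw = np.arange(2**14, dtype=float)                 # all sensor values
linear = np.round(raw / (2**14 - 1) * 255)
gamma = np.round((raw / (2**14 - 1)) ** (1 / 2.2) * 255)

dark = slice(0, 2**8)                               # darkest 1/64 of range
print("linear:", len(np.unique(linear[dark])), "output levels for shadows")
print("gamma: ", len(np.unique(gamma[dark])), "output levels for shadows")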
 