Old May 17, 2010, 01:05 PM   #1
roquentin
macrumors newbie
 
Join Date: May 2010
Play lossless files on a stereo receiver

Hello,
I'm wondering if anyone can help me.
Simply put, I would like to play "lossless" files through my stereo receiver with the best possible sound quality.
The particulars of my situation are as follows:
1. I have a LaCie external hard drive with music files, all encoded in iTunes' lossless format (Apple Lossless).
2. I have a new, mid-range stereo receiver.
3. I would prefer not to involve my laptop in connecting these two devices.
4. I would prefer to use wired connections.

I understand that I need some other device to act as an interface.

I tried to do this using my TV and my DVD player, both of which are connected to my receiver with HDMI cables. Both the TV and the DVD player are capable of playing digital files via USB, but I found that these devices will only communicate with flash drives (my external is not a flash drive).

I also tried using a Sling Catcher (which I can also connect to my receiver with HDMI). While the Sling Catcher recognizes the external drive, an error message indicates that it is only compatible with FAT32 file systems.

I have found conflicting information about simply using a PS3 to play the files on my external drive. My research on the topic has left me so confused about file systems, file formats, extensions, etc. that I couldn't even begin to formulate a useful question, so I'll ask it this way:

Can I use a PS3 to play "lossless" files (encoded using iTunes) directly from my external drive, that is, without involving my computer in any way, and without storing the files on the PS3?

If not, could anyone suggest any alternatives?
Are there other products that can act as interfaces (e.g., the Apple TV)?
Would it be advisable to convert all my music using some other file system and/or file format? (I'd like to avoid this).

I think that Sonos can do this and more, but it is beyond my budget.
I've considered getting a Mac Mini and using it as an interface, but this also seems like an expensive solution for what should be a simple task.

Sorry for the long post (my first) but any suggestions would be appreciated.

Thanks.
Old May 17, 2010, 01:23 PM   #2
jcschlic
macrumors regular
 
Join Date: Jan 2009
The Apple TV would be great for this, and you could pick up a refurb at a reduced price. Not sure how much storage you would require, but you could still get away with syncing almost 40 GB of lossless music to the device itself, since you mentioned that you would not want to use your laptop to stream. If the 40 GB model is not enough, buy the 160 GB (if you are against streaming).

I have used the Apple TV for the exact same setup you are describing, and it is awesome: you get album artwork, a clean interface, etc. If you don't really care about the interface, you could get a Western Digital WD TV (numerous models available) and hook your external drive up to it directly.

Good luck!
Old May 17, 2010, 02:15 PM   #3
mchalebk
macrumors 6502a
 
Join Date: Feb 2008
The main reason I bought an AppleTV was to use it as a music server, and it works great.

An iPod would also work.
Old May 17, 2010, 03:06 PM   #4
Avatar74
macrumors 65816
 
Join Date: Feb 2007
Lossless isn't necessary to maintain fidelity. Honestly, 256 Kbps AAC would greatly reduce your data storage requirements while still maintaining a fidelity indistinguishable from 16-bit Linear PCM (CD Audio).
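To put rough numbers on the storage point (a quick sketch in Python; the lossless figure is an assumption, since Apple Lossless bitrates vary with the material):

[code]
# Rough storage comparison per hour of stereo music.
# The ALAC bitrate is an assumption -- lossless rates vary with the recording.
ALAC_KBPS = 850   # typical-ish Apple Lossless average (assumed)
AAC_KBPS = 256    # iTunes Plus AAC

def gb_per_hour(kbps):
    """Kilobits per second -> gigabytes per hour."""
    return kbps * 1000 / 8 * 3600 / 1e9

print("ALAC: %.2f GB/hour" % gb_per_hour(ALAC_KBPS))  # ~0.38 GB/hour
print("AAC:  %.2f GB/hour" % gb_per_hour(AAC_KBPS))   # ~0.12 GB/hour
[/code]

At those (assumed) rates, an hour of lossless takes roughly three times the space of 256 Kbps AAC.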

AppleTV would be a good menu-driven solution that gives you the option to sync content so you don't need to keep a computer running, or you can stream music to it from any computer on the network. This is useful in case you have multiple computers that come and go, each with its own decentralized library.
Old May 17, 2010, 03:39 PM   #5
ChrisA
macrumors G4
 
Join Date: Jan 2006
Location: Redondo Beach, California
Quote:
Originally Posted by Avatar74 View Post
Lossless isn't necessary to maintain fidelity. Honestly, 256 Kbps AAC would greatly reduce your data storage requirements while still maintaining a fidelity indistinguishable from 16-bit Linear PCM (CD Audio).
For most people this is true, especially if you have a mid-range stereo system and spent less than about $1K each on speakers. But if you have good studio headphones and know what to listen for, you can pick out the compression artifacts, and they will drive you nuts. The effect is not continuous; it happens only for a split second every few minutes.

But today disks cost less than $100 per terabyte. Why care about storage?

If you want to learn to hear music, a good way is to ask yourself objective questions about what you can hear, like: "How is that ride cymbal being hit?" Maybe you play guitar, and you'd know in a second just by the sound whether the bridge or neck pickup is selected. Can you hear it in the recording? There are a thousand other questions like this. Don't just ask "does it sound good?"; be specific. Then try the same questions after changing to AAC, or after getting a new amp, or whatever.
Old May 17, 2010, 07:21 PM   #6
Avatar74
macrumors 65816
 
Join Date: Feb 2007
Quote:
Originally Posted by ChrisA View Post
For most people this is true, especially if you have a mid-range stereo system and spent less than about $1K each on speakers. But if you have good studio headphones and know what to listen for, you can pick out the compression artifacts, and they will drive you nuts. The effect is not continuous; it happens only for a split second every few minutes.

But today disks cost less than $100 per terabyte. Why care about storage?

If you want to learn to hear music, a good way is to ask yourself objective questions about what you can hear, like: "How is that ride cymbal being hit?" Maybe you play guitar, and you'd know in a second just by the sound whether the bridge or neck pickup is selected. Can you hear it in the recording? There are a thousand other questions like this. Don't just ask "does it sound good?"; be specific. Then try the same questions after changing to AAC, or after getting a new amp, or whatever.


I have a $4000 sound system, I've done professional sound engineering, I'm a member of the Society of Motion Picture and Television Engineers (SMPTE), and I can tell you unequivocally that no... if your ears hear a difference between 256 Kbps AAC and the mediocre 16-bit dithered LPCM format of CD Audio (itself vastly inferior to 24-bit LPCM), then there's something wrong with your ears... or you're fooling yourself.

In blind tests, no conclusive evidence has been found that users could consistently identify which sample was AAC and which was 16-bit LPCM.
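("Conclusive" has a concrete meaning here: in an ABX test you count correct identifications and check whether the listener beats coin-flipping. A minimal scoring sketch, with made-up trial counts:)

[code]
# ABX scoring sketch (illustrative numbers, not real test data).
# Under the null hypothesis the listener is guessing: each trial is a coin flip.
from scipy.stats import binomtest

n_trials = 16    # ABX trials presented (hypothetical)
n_correct = 12   # trials where the listener named X correctly (hypothetical)

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print("p-value: %.3f" % result.pvalue)  # ~0.038 -- 12/16 only barely beats chance
[/code]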

Keep in mind that AAC is not strictly a compression algorithm; it is a perceptual coding algorithm, which Linear PCM is not. It isn't required to store the same amount of data in order to reconstruct the exact same information (the waveform). If that were true, then CD audio should sound horrendously unreal to you, because there aren't enough bits per sample to store all possible amplitude values in nature.

Consider, for example, how just 24 frames a second of film are quite enough to fool your brain into perceiving continuous motion... flawlessly. Now consider 44,100 samples a second of audio. Even the AES came forward some years ago and declared 128 Kbps AAC indiscernible from 16-bit LPCM... I won't go that far, because audio snobs who wasted far too much money on snake-oil gear would eat my head off. So I'll stick with the safer figure of 256 Kbps.

The main issue is not how much data is stored, but what the reconstruction algorithm does with it. As long as the source was bounced through a proper encoding algorithm and the soundwave reconstructed with a proper decoding algorithm... neither of which has really been a problem in digital systems since, oh, around 1985... you're fine.

As Pohlmann pointed out 25 years ago, artifacting isn't a consequence of missing data; it's something that occurs upon reconstruction of the signal. But this has been a very minor problem in digital decoding systems since 1985.

Systems prior to 1985 lacked internal re-clocking of the signal and were prone to falling out of sync. Also, back then there was a much more substantial difference in the quality of the sample & hold buffers used in Burr-Brown DACs versus other DACs. But that gap has narrowed to a negligible level since then.

There's a lot of hooey in pro audio, and this is one of my biggest pet peeves, because it's spouted ad nauseam by amateurs who have never laid eyes on a single page of Ken Pohlmann's "Principles of Digital Audio" (THE engineer's handbook on digital audio encoding and systems since the early 1980s).

Digital artifacting is just as evident on a mediocre set of speakers as on a great set of speakers. It's kind of like the bogus nature of so-called subliminal messages: anything that is beneath the threshold of perception is precisely that... imperceptible.

If, however, you were arguing in favor of 24-bit undithered Linear PCM, I would applaud you. DVD-Audio and HD audio support this format, and there IS a VERY obvious difference even against CD audio. A 16-bit stereo LPCM bitstream has 2^16 (65,536) possible amplitude values per quantization interval, but a 24-bit stereo LPCM bitstream has 2^24, or 16.78 MILLION, possible amplitude values per quantization interval. The difference is staggering: CD Audio's dynamic range is roughly 96 dB; 24-bit LPCM is around 140 dB! When it comes to hearing cymbals accurately, sampling frequency has zilch to do with it. AAC vs. Lossless has zilch to do with it. What MATTERS is the amplitude resolution of the format and whether or not the decoding algorithm can reconstruct it faithfully from the data that's there.
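The rule-of-thumb arithmetic behind those figures, as a sketch (real converters land somewhat below the theoretical ceiling):

[code]
# Theoretical dynamic range of ideal b-bit linear PCM: 20*log10(2^b), ~6.02 dB/bit.
import math

for bits in (16, 24):
    levels = 2 ** bits
    dr_db = 20 * math.log10(levels)
    print("%d-bit: %s levels, ~%.1f dB" % (bits, format(levels, ","), dr_db))
# 16-bit: 65,536 levels, ~96.3 dB
# 24-bit: 16,777,216 levels, ~144.5 dB (practical hardware lands nearer 120-140 dB)
[/code]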

That isn't as big a problem as you think for coding schemas. Consider ADPCM, which stores relative instead of absolute amplitude values. CD Audio stores a 16-bit value for every single sample: something like -26 dBFS amplitude in one sample, and then -25.8 dBFS in the next. ADPCM stores only the difference from one sample to the next, which is much smaller: 0.2 dBFS. Furthermore, bit-depth throttling is used... a variable number of bits per sample, unlike CD audio. Less data is required to store that relative change, but the result is exactly the same. AAC goes further, using various tricks and techniques to eliminate redundant or unnecessary data not needed to reconstruct the waveform faithfully. One big one is a 20 kHz lowpass filter that eliminates any frequency outside the A-weighted (human) range of hearing. You're not going to perceive it, so why waste the bits encoding it?
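A toy illustration of the difference-coding idea; this is just the core trick of storing deltas, not the real ADPCM spec, which adds prediction and adaptive step sizes on top:

[code]
# Toy delta coding: store differences between successive samples, not absolute values.
def delta_encode(samples):
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

pcm = [1000, 1012, 1015, 1011, 990]     # hypothetical 16-bit sample values
enc = delta_encode(pcm)                 # [1000, 12, 3, -4, -21] -- much smaller numbers
assert delta_decode(enc) == pcm         # exact round trip: the same waveform comes back
[/code]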

When the audio snobs tell you that cymbal sounds produce artifacts because 44.1 kHz sampling isn't sufficient to reproduce the higher frequencies... it's time to walk away. It's the ERRATIC nature of a waveform produced by a cymbal that's the issue; the frequency is well within the Nyquist limit. The problem is amplitude resolution: the amplitude values change too rapidly and erratically for 16-bit amplitude resolution, but not for 24 bits at the exact same sampling frequency.
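You can put numbers on the amplitude-resolution point by quantizing one test tone at both word lengths; a numpy sketch, with the sampling rate held at 44.1 kHz in both cases:

[code]
# Quantize one test tone at 16 and 24 bits and measure the error.
import numpy as np

fs = 44_100                                  # identical sampling rate in both cases
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 5000 * t)       # hypothetical 5 kHz tone, well under Nyquist

def quantize(signal, bits):
    step = 2.0 / (2 ** bits)                 # uniform levels spanning [-1, 1)
    return np.round(signal / step) * step

for bits in (16, 24):
    err = x - quantize(x, bits)
    snr_db = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
    print("%d-bit SNR: %.1f dB" % (bits, snr_db))  # ~92 dB vs ~140 dB for this tone
[/code]

The roughly 48 dB gap is just the eight extra bits at about 6 dB each; the sampling frequency never changed.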

I once had a Denon rep try to upsell me to a $2,800 amp. I asked him one simple question: "OK, tell me what the fundamental, technical differences are between the sample & hold buffer in the Denon DACs versus the ones in the dual 32-bit DACs in the Sony." His response? "I'll have to go read the documentation." I said that's OK, and left.
Old May 18, 2010, 08:49 AM   #7
jcschlic
macrumors regular
 
Join Date: Jan 2009
LOL, Avatar74, I would hate to have been that sales rep.
Old May 18, 2010, 10:34 AM   #8
keihin
macrumors newbie
 
Join Date: May 2008
I'd consider the discussion of lossless vs. high-bitrate AAC somewhat off-topic, as the OP has already made their choice of encoding and file format.

That being said, one benefit of choosing a lossless codec is that you can always move to another encoding in the future. Lossless means you can decode to the original waveform and re-encode into whatever new encoding has taken the world by storm. If only for this reason, I'd recommend that anyone investing the time to rip CDs keep a set of losslessly encoded files.

Personally, I encode my CD rips to FLAC. This encoding is not supported by the Apple ecosystem, but works great with my Squeezebox players.

To suggest an option to the OP, take a look at the latest Squeezebox device, the Squeezebox Touch. It can read directly from USB-attached storage and is advertised as supporting both FLAC and Apple Lossless formats. Music in these formats is decoded directly on the device and output through either digital or analog outs, both of which sound great. It also does internet radio and other music services, has a touchscreen (no TV required for control), comes with a remote, and can be controlled by iPhone/iPod Touch software (iPeng). Oh, and it can control and sync with other Squeezebox players located around the house. If you can't tell, I love my Squeezeboxes.

Once I replace my older Squeezebox devices (which don't support Apple Lossless decoding on the device) with the new Touch devices, I may transcode my library from FLAC to Apple Lossless for greater compatibility with my Apple devices and software.
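(For anyone planning the same migration: ffmpeg can do that lossless-to-lossless hop; a sketch, assuming ffmpeg is installed and the FLAC files sit in the current folder:)

[code]
# Lossless-to-lossless batch transcode: FLAC in, Apple Lossless (ALAC) out.
# Assumes ffmpeg is on the PATH; no audio data is lost in either direction.
import pathlib
import subprocess

for flac in pathlib.Path(".").glob("*.flac"):
    subprocess.run(
        ["ffmpeg", "-i", str(flac), "-c:a", "alac", str(flac.with_suffix(".m4a"))],
        check=True,
    )
[/code]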
Old May 18, 2010, 02:40 PM   #9
Avatar74
macrumors 65816
 
Join Date: Feb 2007
Quote:
Originally Posted by keihin View Post
That being said, one benefit of choosing a lossless codec is that you can always move to another encoding in the future. Lossless means you can decode to the original waveform and re-encode into whatever new encoding has taken the world by storm. If only for this reason, I'd recommend that anyone investing the time to rip CDs keep a set of losslessly encoded files.
That's not really the case. You have to remember that the system capable of transcoding from Format A to Format B recognizes both formats, and is (by its very purpose) capable of reconstructing the original waveform from both. This means it understands the algorithm of Format A, and thus what the end result of playback should be. Not consciously of course, but I'm saying the discrete math is there... There's absolutely zero reason for a transcode from AAC 256Kbps to another format to sound fundamentally worse than from Lossless. It's not as if you need every original bit there to tell Format B's encoder what the end result needs to be. If it didn't have the language to understand what Format A's algorithm did with the information, it couldn't transcode in the first place... Without that guidance, it'd be converting the data into gibberish.

To bring it back to the topic... I think the AppleTV would do, but it just depends on how much content you have. I had about 500 GB of content, but adding TV series and HD movies has ballooned my library, and I've expanded it to 3 TB. The only really feasible way to have constant access to a library that size all at once is to mate either a NAS or a FireWire/USB 2.0 drive to a computer running iTunes, and sync/stream from there.

However, if you only need access to portions of it at a time (and most people can do just fine with that), you can sync to the AppleTV when your laptop is up and running.

If you're using a fundamentally different word length (e.g. 8-bit) or a fundamentally different sampling frequency (e.g. 11 kHz), then yes, enough information is discarded that the result won't reproduce accurately... but then you've got a noticeable difference in the source file to begin with.
Old May 18, 2010, 03:55 PM   #10
Alrescha
macrumors 65816
 
Join Date: Jan 2008
Location: Boston, MA
Quote:
Originally Posted by Avatar74 View Post
There's absolutely zero reason for a transcode from AAC 256Kbps to another format to sound fundamentally worse than from Lossless. It's not as if you need every original bit there to tell Format B's encoder what the end result needs to be.
There may be zero difference in your theoretical world, but in practice these encoders expect to have high-quality input and their job is to throw away all the bits they can and leave enough to fool the ear into thinking the music is all there. Once those bits are thrown away, they're gone. When a second encoder comes along, it has a tiny fraction of the information the first encoder had -- and the resulting output is demonstrably worse.

Sure, someday someone might make an optimized transcoder to go from Format A to Format B, one that *doesn't* expect high-quality input in the first place. If it exists, I haven't run into it.

A.
Old May 18, 2010, 08:48 PM   #11
Avatar74
macrumors 65816
 
Join Date: Feb 2007
Quote:
Originally Posted by Alrescha View Post
There may be zero difference in your theoretical world, but in practice these encoders expect to have high-quality input and their job is to throw away all the bits they can and leave enough to fool the ear into thinking the music is all there. Once those bits are thrown away, they're gone. When a second encoder comes along, it has a tiny fraction of the information the first encoder had -- and the resulting output is demonstrably worse.

Sure, someday someone might make an optimized transcoder to go from Format A to Format B, one that *doesn't* expect high-quality input in the first place. If it exists, I haven't run into it.

A.
Re-read all my above posts on this subject. You're confusing information (the reconstructed audio) with data requirements... thinking that each successive transcode means even fewer bits remain than before. That's not quite how it works.

" it has a tiny fraction of the information the first encoder had"

- No. It has ALL the information the first encoder had, by virtue of having both the encoded bitstream and knowledge of the entire algorithm of the encoder that produced it. Without both, the resulting transcode wouldn't just be missing "thrown away" bits; it would be TOTAL gibberish... like trying to translate from Latin to Swahili without understanding a single word of Latin. Keep in mind we're not altering the final soundwave in any substantial way, e.g. transcoding to half the Nyquist limit or cutting down to 8-bit sample word lengths versus the original 16. AAC is a perceptual encoding schema... using more efficient methods of truncation, with better decoding algorithms and fewer data requirements, to faithfully reconstruct the source, while discarding inconsequential data that would only reconstruct imperceptible audio.

The problems in transcoding one coding schema to another actually arise as a result of errors due to poor sample & hold buffering times. Faster encodes generally give rise to transcoding error. Ideally, resampling of the decoded output in realtime resolves these issues, but better, faster hardware also resolves the issue by being able to make accurate calculations quickly enough before any given sample is discarded from the buffer.

It's not like the transcoder keeps whittling away at bits just to make the signal shorter, producing ever-less-audible information. There's a lot of truncation going on that isn't lossy at all, and the parts that are lossy aren't necessarily perceptible... certainly not at the bitrates we're talking about for AAC.

Also, the "once the bits are thrown away they're gone" is completely inaccurate. You're forgetting that the full soundwave has to be reconstructed... What do you think arises when reconstruction occurs? Do you think that if the PCM stream is something like ABRACADABRA and the AAC stream is BRCDBR that _BR_CDBR_ is what gets played back? Do you then think that BRCDBR is all that gets carried over in a transcode? No, the digital representation of ABRACADABRA, in this example, has to be reconstructed before any of it's audible... and the encoding algorithm on the other side of a transcoder knows that ABRACADABRA is the intended output because it knows how the coding schema of the source works... otherwise, again, the transcoded signal would be TOTAL gibberish.

As Pohlmann noted in the 1985 edition of Principles of Digital Audio, errors arise if, say, the algorithm slips and reconstructs it as EBRACADEBRA instead. But such repeated errors would be blindingly obvious, like a woodpecker drilling into your skull... not on the edge of perceptibility.
Old May 18, 2010, 09:21 PM   #12
Alrescha
macrumors 65816
 
Join Date: Jan 2008
Location: Boston, MA
Quote:
Originally Posted by Avatar74 View Post
No. It has ALL the information the first encoder had, by virtue of having both the encoded bitstream and the knowledge of the entire algorithm of the first encoder.
It is completely and totally impossible to reconstruct the input data - that is why it is called 'lossy' compression in the first place. The second decoder does not have all the information of the first, no more than a person with a picture of a car has a car.

You may want to read up on how (for instance) the MP3 encoder works. Its output is specifically intended for the way in which the human ear and brain work. It is a faint shadow of the input, except to people.

addendum:

Quote:
Originally Posted by Avatar74 View Post
Also, the "once the bits are thrown away they're gone" is completely inaccurate. You're forgetting that the full soundwave has to be reconstructed...
What in heaven's name makes you think the full <original> soundwave is being recreated?

A.

Old May 19, 2010, 09:05 AM   #13
Avatar74
macrumors 65816
 
Join Date: Feb 2007
Quote:
Originally Posted by Alrescha View Post
It is completely and totally impossible to reconstruct the input data - that is why it is called 'lossy' compression in the first place. The second decoder does not have all the information of the first, no more than a person with a picture of a car has a car.
Irrelevant analogy, considering that the original multitrack recording doesn't have anything other than a representation of all the instruments being played... The "picture" is all that we're talking about.

If you want to get into talking about the accuracy of the initial recording, then I could really bore you to tears with how many ways 16-bit Linear PCM (CD Audio) is horribly flawed and limited, how miking techniques alter and color the recording, how electronic circuit noise elevates the noise floor and hinders the dynamic range of any given format, how one set of identically shaped drivers is used to represent multiple instruments with radically different mediums of vibration, etc., etc., ad infinitum...


Quote:
You may want to read up on how (for instance) the MP3 encoder works. Its output is specifically intended for the way in which the human ear and brain work. It is a faint shadow of the input, except to people.
But I'm not talking about Fraunhofer IIS's MP3 format. I'm talking about MPEG-4 Part 3, the AAC encoder developed by Dolby Laboratories (of which I'm a licensee), Fraunhofer and Apple. 256 Kbps AAC is quite sufficient in ways that 320 Kbps MP3 is not. If you disagree with that, I strongly encourage you to write a technical rebuttal to the Audio Engineering Society and have them publish it in their peer-reviewed journal.

And I've only been talking about reconstruction of what's perceptible to the human ear... since that's essentially the only thing that matters. Anyone who tells you otherwise is blowing smoke at you. Yes, there's a LOT of data in a PCM stream that represents stuff you will never perceive... and it can be truncated, to a certain point, without being noticeably deleterious to the final result.

Quote:
What in heaven's name makes you think the full <original> soundwave is being recreated?
You're misquoting me. I didn't say the "original" soundwave (see above about multitrack recordings... which don't contain the "original" soundwave at all). I'm referring to reconstructing the particular soundwave that was encoded by the encoder....

But you're tap-dancing around my question... My original point is that the decoder has to be capable of reconstructing what the encoder encoded. That means the decoder in a transcoder, in order to read the source file, has to be able to reconstruct the soundwave. That is a decoder's purpose... to reconstruct information from limited data. If it could only look at the data and have no idea what the data represented, then the transcoded output would be total gibberish. Remember, it's making a conversion, not a copy. If all it did was copy the data bit for bit, i.e. produce a duplicate file in the same format, then it wouldn't need to understand the encoding schema at all.

I'm not talking about reconstructing an exact digital duplicate of the original LPCM file; granted, yes, some data is discarded, but not haphazardly, and the data discarded isn't relevant to your perception. I'm talking about reconstructing the analogue soundwave... at least all the parts of it that are within human perception, because none of the imperceptible parts can have any direct or harmonic effects on us.

I really encourage you to read Ken Pohlmann's Principles of Digital Audio for both the fundamentals and the history of digital audio encoding systems, dating back to the 1920s at Bell Labs, where, incidentally, the Nyquist theorem was formulated and the term "bit" was coined.
Old May 19, 2010, 09:36 AM   #14
Alrescha
macrumors 65816
 
Join Date: Jan 2008
Location: Boston, MA
Quote:
Originally Posted by Avatar74 View Post
Irrelevant analogy, considering that the original multitrack recording doesn't have anything other than a representation of all the instruments being played... The "picture" is all that we're talking about
Apparently the analogy is good enough.

You're claiming, in effect, that taking a picture of a picture is just as good as taking a picture of the original. I disagree.

A.
(last post on this topic)
Old May 19, 2010, 10:20 AM   #15
keihin
macrumors newbie
 
Join Date: May 2008
Quote:
Originally Posted by Avatar74 View Post
- No. It has ALL the information the first encoder had
These codecs are called "lossy" for a reason.

Decoding from a lossy codec returns a numeric representation of an approximation of the original waveform.

Decoding from a lossless codec returns a numeric representation of the unaltered original waveform.

The returned products may sound the same, if the approximation is good enough. But they will not be the same.

And D/A conversions simply don't enter into this process. All of these transcoding operations are purely digital-to-digital.

If you don't believe generational losses are real, try sending a sample from its original form through multiple passes of various lossy encodings and decodings. The results will be audible after some number of passes. Comparing waveforms in a viewer will reveal numeric differences after even the first pass.
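A sketch of that experiment using ffmpeg (assumed to be installed), with a hypothetical original.wav:

[code]
# Generational-loss experiment: re-encode to lossy AAC N times, decoding back to
# WAV between passes. Assumes ffmpeg is on the PATH; "original.wav" is hypothetical.
import subprocess

src = "original.wav"
for gen in range(1, 11):
    aac = "gen%d.m4a" % gen
    wav = "gen%d.wav" % gen
    subprocess.run(["ffmpeg", "-y", "-i", src, "-c:a", "aac", "-b:a", "256k", aac], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", aac, wav], check=True)
    src = wav   # each pass starts from the previous pass's decoded output
# Now compare original.wav against gen10.wav in a waveform viewer (or numerically).
[/code]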
Old May 19, 2010, 10:24 AM   #16
keihin
macrumors newbie
 
Join Date: May 2008
Oh, and responding to the OP: get an AppleTV if you don't mind using your TV as the primary user interface and don't anticipate using multiple players and syncing them. Otherwise, take a look at Squeezebox.
Old May 19, 2010, 11:03 AM   #17
wysinawyg
macrumors member
 
Join Date: Aug 2009
Quote:
Originally Posted by keihin View Post
If you don't believe generational losses are real, try sending a sample from its original form through multiple passes of various lossy encodings and decodings. The results will be audible after some number of passes. Comparing waveforms in a viewer will reveal numeric differences after even the first pass.
But nobody is suggesting anyone do this.

I don't have Avatar74's detailed knowledge, but maybe this helps explain it for everyone else.

If a 256 kbps file sounds the same as a FLAC, why can't you turn that 256 kbps file into a FLAC file that still sounds like the 256 kbps track (and hence the original FLAC)?

I don't think anyone is denying that if you keep running the same piece of music through lossy compression it will eventually degrade, but nobody is suggesting doing that. We're talking at worst about a single conversion in the future from 256 kbps to a lossless format (which can then be transcoded ad infinitum to other formats). If the 256 kbps file never sounded any worse than the original, how can a lossless encode of that 256 kbps file sound any worse?
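(Mechanically, that conversion is trivial; a one-line sketch with ffmpeg, assumed installed, and a hypothetical track.m4a. The resulting FLAC preserves the AAC's decoded waveform exactly from then on; it just can't restore what the AAC encode already discarded:)

[code]
# One-time hop from lossy AAC to lossless FLAC (ffmpeg assumed on the PATH).
# Every later transcode from the FLAC is lossless; the AAC pass stays baked in.
import subprocess

subprocess.run(["ffmpeg", "-i", "track.m4a", "-c:a", "flac", "track.flac"], check=True)
[/code]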

Quote:
Originally Posted by keihin View Post
Oh, and responding to the OP: get an AppleTV if you don't mind using your TV as the primary user interface and don't anticipate using multiple players and syncing them. Otherwise, take a look at Squeezebox.
You can run an AppleTV off an iPhone/iPod Touch (without paying for iPeng, as you would with the Squeezebox), and you can easily use multiple players and sync them all together.
