Hum...

Yesterday, I could buy an individual track at 128 kbps with DRM for $0.99. In the immediate future, I will be able to buy the exact same file, and pay the exact same price. The price hasn't changed.

OK...

Yesterday, I had no means of obtaining any DRM-free songs directly in one step from iTunes, and all songs were encoded at 128 kbps. I had no say in the matter. Sometime in the immediate future, I will be able to purchase tracks that are both DRM-free and encoded at 256 kbps. Those products didn't exist before, and now they do. The price has been set for that product, and it happens to be around 30% more expensive a la carte than the previous, still-available product. But the still-available product's price hasn't changed.

I know what you're getting at is "I have a choice, and therefore it is a good thing." I totally agree with you here.

Demonstrable fact. How does any of that qualify as an irrelevant anecdote?

None of it relates to my original argument. I never said that the whole idea was bad, just that setting the precedent that stripping DRM means that we owe the company more is. That concept has absolutely nothing to do with what you're saying.
 
If Microsoft offered two versions of its operating systems -- one costs 30% more, but removes the software activation features -- do you think people would pay the extra?
 
I'm not talking about frequencies outside the human threshold! I'm talking about the quality of the signal that we can hear.

Also, there is much debate about the range of human hearing (upper and lower), but that's a different day's work...

Look, you guys are arguing over a pointless debate.

The results of double-blind listening tests trump any and all mechanical analysis of what the listener should be able to hear.

Conversely, if you can't withstand the scientific rigor of double-blind testing, you are only clinging to well-loved delusions.

Bottom line: If a listener consistently hears the difference, it's there. Argument over.
 
Other winners etc...

Generally a good development within a bad game/business model. Progress is slow and mixed. Here are a couple of thoughts:

Other big winners in this deal are other online music stores. If the industry adopts DRM-less distribution, these stores can sell music from the Big 4 that will play on iPods - right now they can't because WMA doesn't play on iPods. Big win for them.

Consumers also win, because it increases competition between online stores, which can now sell to iPod users. So maybe there's a better chance that the song price will come down in the future.

This is good for AAC becoming more established as a standard encoding format (MP3 is so old...).

Really good precedent being set that Apple is letting you upgrade your purchased music to higher quality. This means when Lossless / higher quality comes around, maybe we can hope they will also offer an upgrade path?
 
Quote:
Originally Posted by Marx55
1 kbps
2 kbps
4 kbps
8 kbps
16 kbps
32 kbps
64 kbps
128 kbps
256 kbps <-- WE ARE HERE NOW
512 kbps
1024 kbps
2048 kbps
...

So, how many kbps would be the Apple Lossless AAC?

Thanks.

---

AFAIK there is no such thing as "lossless" AAC. The format on CDs is 16-bit PCM with no compression. The CD data rate is 44,100 samples per second, 16 bits (2 bytes) per sample for each channel. That makes about 176.4 kBytes/sec, or roughly 1,411 kbits/sec. That's lossless, i.e. not compressed at all.
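As a quick sanity check on that arithmetic, here's a tiny Python sketch of the raw CD-audio data rate, using the standard CD figures (44,100 Hz, 16-bit, stereo):

```python
# Raw (uncompressed) CD-audio data rate: samples/sec x bits/sample x channels.
SAMPLE_RATE = 44_100   # CD sampling rate in Hz
BIT_DEPTH = 16         # bits per sample
CHANNELS = 2           # stereo

bits_per_sec = SAMPLE_RATE * BIT_DEPTH * CHANNELS
print(bits_per_sec)              # 1411200 bits/sec, i.e. ~1411 kbps
print(bits_per_sec / 8 / 1000)   # 176.4 kBytes/sec
```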

You may be thinking of AIFF, which is equivalent to a WAV file on Windows: a way of storing uncompressed 16-bit PCM. (Apple Lossless is actually a separate format; it compresses the PCM data without discarding any of it.)

In computers you can compress any file; in Windows XP, for example, you can select to compress a file. Typically this uses a lossless compression algorithm. It works by removing redundancy in a file: instead of storing a run of 200 identical zero bytes one by one, it may simply note that there are 200 zeros, potentially using only a few bytes instead of 200. In practice it's much more complex, but that should convey the idea.
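To make the idea concrete, here's a toy run-length encoder in Python — a deliberately simplified sketch of the redundancy-removal idea (real lossless compressors, like the one behind XP's file compression, are far more elaborate):

```python
def rle_encode(data: bytes) -> list:
    """Toy run-length encoder: collapse each run of identical
    bytes into a (byte_value, run_length) pair."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((b, 1))              # start a new run
    return runs

def rle_decode(runs) -> bytes:
    """Invert the encoding exactly -- no information is lost."""
    return bytes(b for b, n in runs for _ in range(n))

data = b"\x00" * 200                 # a run of 200 zero bytes
runs = rle_encode(data)
print(runs)                          # [(0, 200)] -- one pair instead of 200 bytes
assert rle_decode(runs) == data      # lossless: round-trips perfectly
```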

With music, lossless compression doesn't help much: the data tends to look nearly random to lossless compression algorithms. MP3, AAC, etc. use perceptual encoding instead. They analyze the music to figure out what a person actually hears from it, then discard the information that people are unlikely to be able to hear. This is lossy compression, i.e. you are throwing information away. Newer algorithms have better perceptual models and tend to reproduce music well at lower bitrates (i.e. less data). AAC is still one of the best algorithms and is significantly better than MP3.

I can hear imperfections in 128kbps MP3 but as yet I haven't recognized the artifacts in 128kbps AAC.

So, the next move in five years or so will be full Lossless from Apple iTunes...

:)
 
Yes, but you can uncompress it to WAV or AIFF and have full-CD quality.

B

Which is why I still buy CDs for audio I really care about. Even if I lose the discs, which currently serve as backup media, anything stored in Apple Lossless will remain useful for my entire lifetime, no matter what happens with audio formats.
 
If Microsoft offered two versions of its operating systems -- one costs 30% more, but removes the software activation features -- do you think people would pay the extra?

They offer five versions of their workstation operating system. They strip stuff out and charge less for the crippled versions. Which is kinda backward thinking if you ask me.

Anybody who anticipates needing to take their Vista-powered notebook back and forth from a networked work environment and a media-rich home environment would be foolish to go for anything less than Vista Ultimate. Vista Home Premium doesn't do anything but simple Workgroup-based networking. Vista Business doesn't include any Media Centre components. There's no upgrade path from either the Basic or Premium Home editions of Vista to the Business edition. Braindead.
 
Excellent news, and Apple leads the charge again...it's gonna retain its market for years to come...thanks, SJ!

What about that stupid MS Zune dude who said that Apple's proposal was foolish and irresponsible? Where is he now? And the PC fanboys for that matter, too...:rolleyes:

GO APPLE!
 
Math is one of the sciences that attempts to explain what happens or is experienced in the real world (not the other way around).

Want to know what someone can or can't hear? Ask them.

Thomas Edison probably felt his phonograph was "good enough for anyone" and could have gone on for ages in a useless techno-rant spewing out the specs to prove it. :p

Nowadays we use this hocus pocus called "Psychoacoustic Modeling" and it is based on hard science.

http://en.wikipedia.org/wiki/Psychoacoustic_model

and, yes, it is the basis of major advances in audio compression as we better understand the parts of audio that humans have the ability to distinguish.
 
Nowadays we use this hocus pocus called "Psychoacoustic Modeling" and it is based on hard science.

http://en.wikipedia.org/wiki/Psychoacoustic_model

and, yes, it is the basis of major advances in audio compression as we better understand the parts of audio that humans have the ability to distinguish.

Science explains the real world. Not the other way around.

This "I can prove you can't hear the difference" reminds me of the old radio guys that could "prove" their old-fashioned mono systems were as good as the new-fangled stereo gear. Didn't matter what differences a person could actually hear -- they had the "specs" to prove otherwise. :p
 
I'm not talking about frequencies outside the human threshold! I'm talking about the quality of the signal that we can hear.

Also, there is much debate about the range of human hearing (upper and lower), but that's a different day's work...

Not quite what I was getting at. That would have been obvious. I wanted to see if you'd figure it out and you didn't... which underscores my point about people arguing over encoding systems they don't understand.

One of the reasons to incorporate a low-pass filter at the Nyquist limit is that recording frequencies above the Nyquist limit will invariably produce aliases of those frequencies. For example, a 32 kHz signal sampled at 44.1 kHz creates an alias at 12.1 kHz. Instituting a low-pass filter at the Nyquist limit prevents the 32 kHz signal from being recorded, and thus no alias is produced.
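You can check the alias arithmetic numerically. Here's a minimal Python sketch (assuming an ideal sampler): a 32 kHz cosine sampled at 44.1 kHz produces exactly the same sample values as a 12.1 kHz cosine, so once sampled the two are indistinguishable — which is precisely why the filter has to remove the 32 kHz content before it reaches the converter.

```python
import math

FS = 44_100          # sampling rate in Hz; Nyquist limit is FS / 2 = 22,050 Hz
F_IN = 32_000        # input tone above the Nyquist limit
F_ALIAS = FS - F_IN  # predicted alias frequency: 12,100 Hz

# The sampled values of the 32 kHz tone and its 12.1 kHz alias coincide,
# so no amount of post-processing can tell them apart after sampling.
for n in range(1000):
    s_high = math.cos(2 * math.pi * F_IN * n / FS)
    s_alias = math.cos(2 * math.pi * F_ALIAS * n / FS)
    assert abs(s_high - s_alias) < 1e-6
```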

That being said, such anti-aliasing filters have a 2 kHz transition band: a span of 2 kHz at the upper end of the low-pass where the signal is rolled off from full scale gradually, so that complex harmonics above the filter are smoothly rolled out rather than abruptly cut off. The point at which they are rolled off is actually above the highest threshold of human perception, so even allowing for the roll-off varying among humans somewhere between 17 and 20 kHz, no human has the physical "hardware" to hear anything at or above 22 kHz.

I brought this point up because you seemed to think that measures have not been taken to mitigate the possible introduction of artifacts that might arise from an encoded signal, without understanding how or why those artifacts arise in the first place. Low-pass filtering is the first principle behind reconstructing a continuous analog signal from a discrete time sampled (digital) signal without loss (all else being equal).

So what I'm basically telling you is that there are components to a digital encoding system that no audiophile ever discusses when weighing the pros and cons... and that the reason for this generally tends to be that they do not know or understand the most critical principles involved... and your response seems to reinforce my view that opinions have been formulated without understanding even the most basic fundamentals of digital recording.

I see the same thing when audiophiles talk about using expensive cabling to reduce jitter whilst being totally unaware that most A/D-D/A converters have used quartz crystal oscillators for internally reclocking the signal for a number of years now... probably since before said audiophile bought their first CD player.
 
Science explains the real world. Not the other way around.

This "I can prove you can't hear the difference" reminds me of the old radio guys that could "prove" their old-fashioned mono systems were as good as the new-fangled stereo gear. Didn't matter what differences a person could actually hear -- they had the "specs" to prove otherwise. :p

You're trying to substantiate one anecdote with another anecdote, without knowing specifically what was being argued by the "old radio guys" and whether or not it was actually correct from an engineering point of view (even then I'm sure some Bell Labs engineers would have disagreed with them... Harry Nyquist to name one).

I'm not saying "I can prove you can't hear the difference." I'm saying you have yet to establish that there IS a difference. The burden of proof of extraterrestrial UFOs lies not with the skeptics...
 
Not quite what I was getting at. That would have been obvious. I wanted to see if you'd figure it out and you didn't... which underscores my point about people arguing over encoding systems they don't understand.

One of the reasons to incorporate a low-pass filter at the Nyquist limit is that recording frequencies above the Nyquist limit will invariably produce aliases of those frequencies. For example, a 32 kHz signal sampled at 44.1 kHz creates an alias at 12.1 kHz. Instituting a low-pass filter at the Nyquist limit prevents the 32 kHz signal from being recorded, and thus no alias is produced.

That being said, such anti-aliasing filters have a 2 kHz transition band: a span of 2 kHz at the upper end of the low-pass where the signal is rolled off from full scale gradually, so that complex harmonics above the filter are smoothly rolled out rather than abruptly cut off. The point at which they are rolled off is actually above the highest threshold of human perception, so even allowing for the roll-off varying among humans somewhere between 17 and 20 kHz, no human has the physical "hardware" to hear anything at or above 22 kHz.

So what I'm basically telling you is that there are components to a digital encoding system that no audiophile ever discusses when weighing the pros and cons... and that the reason for this generally tends to be that they do not know or understand the most critical principles involved... and your response seems to reinforce my view that opinions have been formulated without understanding even the most basic fundamentals of digital recording.

I see the same thing when audiophiles talk about using expensive cabling to reduce jitter whilst being totally unaware that most A/D-D/A converters have used quartz crystal oscillators for internally reclocking the signal for a number of years now... probably since before said audiophile bought their first CD player.

You're still not looking at the point of the argument: bit rate as opposed to sampling frequencies. I'm not entering a frequency debate, as it's irrelevant. Do you still contend that the lower bit rate (16 compared to 24) will give the same analogue signal back?
 
You're trying to substantiate one anecdote with another anecdote, without knowing specifically what was being argued by the "old radio guys" and whether or not it was actually correct from an engineering point of view (even then I'm sure some Bell Labs engineers would have disagreed with them... Harry Nyquist to name one).

I'm not saying "I can prove you can't hear the difference." I'm saying you have yet to establish that there IS a difference. The burden of proof of extraterrestrial UFOs lies not with the skeptics...

Absence of proof != proof of absence. ;)
 
1 kbps
2 kbps
4 kbps
8 kbps
16 kbps
32 kbps
64 kbps
128 kbps
256 kbps <-- WE ARE HERE NOW
512 kbps
1024 kbps
2048 kbps
...

So, how many kbps would be the Apple Lossless AAC?

Thanks.

Well Lossless <> AAC, but that's being pedantic.

I rip everything in lossless, and songs vary between about 750 and 1200 kbps.

I think the decision to go with 256kbps is a good one and a much better compromise than before. 128kbps clearly does sound worse than lossless - 256 sounds very very similar to lossless, and with only 1/4 of the data it's a good choice.

If I was listening only on my iPod I'd be happy with just 256 - but at home I listen to all music on my Mac through a good D to A converter (a Benchmark DAC 1) and I like to feed it with lossless.
 
From the question and answer:



More like it would take too long to re-encode the entire existing iTunes library at the new bitrate and remove DRM. So they're not touching the old library, and as a result, they've decided that raising the price on that segment of the collection would look bad.

Not a jab. Just a more realistic assessment.

I'll bet you a quarter that the extra 30 cents goes straight to EMI, a little caveat that could explain the newfound corporate enthusiasm about DRM free music.

A moderately sized server farm could easily re-encode about 20% of the iTunes library in a very short time period. Perhaps the period from, say, mid-March, when the EMI agreement was probably penned (and a farm was hypothetically dedicated to the project), until the end of April, when the music will be available?

Anyway, I still prefer to give bands my money by seeing them in concert.
 
You're still not looking at the point of the argument: bit rate as opposed to sampling frequencies. I'm not entering a frequency debate, as it's irrelevant. Do you still contend that the lower bit rate (16 compared to 24) will give the same analogue signal back?

I'm not convinced you understand the difference between bit-depth, bit rate and sampling frequency.

What you're arguing tends to be tied to reproducing the frequency spectrum of human hearing, but I find it rather interesting that you've made absolutely no mention whatsoever of amplitude... since bit depth (which you've confused with bit RATE, which is not the same thing) has to do with recording the amplitude at any given quantization interval.

Now why would this be important? A sampling frequency of 44.1 kHz with an 8-bit sample depth can faithfully reproduce the entire spectrum of frequencies we can hear... but NOT the amplitude values. But note that there are various ways, as I've demonstrated earlier, to represent amplitude values using less data than even a conventional PCM system (ADPCM is an example).
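A small Python sketch of what bit depth buys you (my own illustration, not anything from a real codec): quantizing the same amplitude on an 8-bit grid versus a 16-bit grid. The 16-bit error is roughly 256 times smaller — about 6 dB of dynamic range per bit.

```python
def quantize(x: float, bits: int) -> float:
    """Round an amplitude in [-1.0, 1.0] to the nearest level of a
    signed grid with the given bit depth."""
    top = 2 ** (bits - 1) - 1     # 127 for 8-bit, 32767 for 16-bit
    return round(x * top) / top

x = 0.123456789                   # an arbitrary amplitude value
err8 = abs(quantize(x, 8) - x)    # coarse grid: large rounding error
err16 = abs(quantize(x, 16) - x)  # fine grid: far smaller error
print(err8, err16)                # same frequency content, very different precision
```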

But here again you're speaking in terms of traditional pulse code modulation without any comprehension of how this differs from a perceptual coding algorithm like AC-3 (Dolby Digital), DTS or AAC. These systems further reduce bitrate requirements by changing the nature of what data are stored in the file and the "dictionary", so to speak, used by the software/hardware encoding and decoding it.
 
This is true. That is exactly what I used to do, and I'll still go on doing it. But for new CDs, $10 is a lot less than $18-24. (Especially since they charge $0.99 CAD, which is less than $0.99 USD, while at a store they charge more in CAD than in USD.) So it's a bigger saving for me. It is a small drop in quality, and not having the actual CD is kind of a bummer, but at half the price, and without having to leave the house, I'm getting enticed. Then again, I can get used CDs for about $5.00, so why would I pay double for less quality? Still, there are times when I would find this useful.

Exactly, it doesn't work for new releases, those CDs that cost 16-18 bucks. But for used or older releases you're better off buying the CD.
 
I'm not convinced you understand the difference between bit-depth, bit rate and sampling frequency.

What you're arguing tends to be tied to reproducing the frequency spectrum of human hearing, but I find it rather interesting that you've made absolutely no mention whatsoever of amplitude... since bit depth (which you've confused with bit RATE, which is not the same thing) has to do with recording the amplitude at any given quantization interval.

Now why would this be important? A sampling frequency of 44.1 kHz with an 8-bit sample depth can faithfully reproduce the entire spectrum of frequencies we can hear... but NOT the amplitude values. But note that there are various ways, as I've demonstrated earlier, to represent amplitude values using less data than even a conventional PCM system (ADPCM is an example).

But here again you're speaking in terms of traditional pulse code modulation without any comprehension of how this differs from a perceptual coding algorithm like AC-3 (Dolby Digital), DTS or AAC. These systems further reduce bitrate requirements by changing the nature of what data are stored in the file and the "dictionary", so to speak, used by the software/hardware encoding and decoding it.



Jasus, ye love yer jargon, don't you. All things being equal, obviously?
 
.. I'm not saying "I can prove you can't hear the difference." I'm saying you have yet to establish that there IS a difference. The burden of proof of extraterrestrial UFO's lies not with the skeptics...

You're trying to use a very limited "model" of the real world to "prove" what is or isn't possible in the real world. Doesn't work that way.

Your "model" is simply a crude version of the real world. For example: do you actually think you could use your "model" to accurately and completely reproduce the sound of even a simple musical instrument, such as a flute playing any sort of musical piece? A "model" that could produce a convincing imitation of the sounds that even a relatively "primitive" instrument (such as the flute) can make, to the point that someone couldn't tell the difference between a real flute playing and your "model" of the flute?

I'm not talking about just one or two notes -- give me a model of a flute that can play anything I might want to play, that sounds 100% realistic, with accurate attacks, swells, vibrato, staccatos, flutter-tonguing, etc. It should be quite easy for you to do, since all the math that's involved in a flute's sound has been explained in great detail in your textbook. :p
 