Oh man! Are you on PT12? That's what Track Commit is all about: there's a new thing in the music industry where you now must provide 24/96 or higher stems to the majors for "future formats, remixing, and remastering..." PM me if you want.

PT 10

But you have to look at habits. Vinyl was a "home" format - as in, for the general public, you only listened to vinyl at home; you didn't play it in your car; take it to the park, etc.

That's not entirely true. There were portable wind-up 78 players, I owned one, a Phonola, and my grandparents indeed took it to the park, and when they traveled. There were also record players for the car, introduced in the 1950s and available on several models of automobiles. With the change to 45s came portable record players, some battery powered. And these were very popular. So I disagree it was solely a home medium. The automobile didn't really have a practical way to play your own music until the 8-track tape, but otherwise vinyl was very portable. The point is, people have been accepting lo-fi listening conditions since the advent of recorded audio.
 
PT 10



That's not entirely true. There were portable wind-up 78 players, I owned one, a Phonola, and my grandparents indeed took it to the park, and when they traveled. There were also record players for the car, introduced in the 1950s and available on several models of automobiles. With the change to 45s came portable record players, some battery powered. And these were very popular. So I disagree it was solely a home medium. The automobile didn't really have a practical way to play your own music until the 8-track tape, but otherwise vinyl was very portable. The point is, people have been accepting lo-fi listening conditions since the advent of recorded audio.

Although I would have loved to record Mr. Rachmaninov or Josef Hofmann in Pro Tools at 24/192 with all the best gear, we all have to live with the historic but brilliant recordings of the time. -Edison Cylinders. -But you can "hear-through" the tech. Maybe this is more about listening instead of raw human physiological perception. Maybe it's content that matters and that's what this is really about: Unfortunately for all of mankind, Justin Bieber and $0.50 will never, ever be Rachmaninov or Godowsky. -Both top-hit stars in their own time and deserving of all the bandwidth available today. -I know this is an entirely unfair comparison. (FWIW, I am a huge Taylor Swift fan, but her best songs never get played on the radio.)
 
Could we just have lossless audio please. These higher bit rates and depth sound no different. Also, sort out dynamic range, that's the biggest problem right now.

The latter is especially true. "Mastered For iTunes" means COMPRESSED AS HELL (i.e. LOUD LOUD LOUD with no dynamic range). It IS ridiculous that in 2015 you can't seem to buy lossless albums over the Internet for the most part. How much has the average ISP bandwidth increased over the past 15 years? More than the difference between AAC and WAV/AIFF. You should be able to buy what you want and convert for mobile if needed.
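The bandwidth point is easy to sanity-check with back-of-the-envelope arithmetic. A sketch (the figures are standard published bitrates, not measurements):

```python
# Back-of-the-envelope comparison of CD-quality PCM vs 256 kbps AAC.

def pcm_bitrate_kbps(sample_rate_hz, bits_per_sample, channels):
    """Raw (uncompressed) PCM bitrate in kilobits per second."""
    return sample_rate_hz * bits_per_sample * channels / 1000

cd = pcm_bitrate_kbps(44_100, 16, 2)   # Red Book stereo: 1411.2 kbps
aac = 256                              # typical iTunes AAC bitrate
print(f"CD PCM: {cd:.1f} kbps, ratio vs 256k AAC: {cd / aac:.1f}x")
# -> CD PCM: 1411.2 kbps, ratio vs 256k AAC: 5.5x
```

So lossless delivery only needs roughly 5.5x the bandwidth of a 256 kbps AAC download, far less than the growth in typical connection speeds since 2000.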

The 24/96 24/192 thing is utter nonsense on the playback side and it's easily proven. It's good to have headroom on the recording side, but oversampling, etc. solved the "brick wall" filtering issues of CD type sound a long LONG time ago. Besides, what's the point in having 24/96 if the average album on iTunes has less than 20dB of dynamic range to begin with??? It's complete BS sold as snake oil to IGNORANT people that don't know better (and are unbelievably hard to convince despite all the scientific proof and testing in the world available).

BUT it makes good MARKETING sense to RESELL a lot of music. All you need to do to make people "notice a difference" is REMASTER the album without all the compression and BINGO, people will claim it's because of 24/96 and not because of the remastered album that isn't so compressed (because they're ignorant of industry practices). SACD has sold quite a few albums on this basis (i.e. the albums are remastered for SACD and THAT is why they sound better; convert them to 44.1kHz lowly CD and they sound the same if they're 2-channel).

Multi-channel music makes more sense, but then how many people have proper 5.1 or 6.1 or 7.1 setups at home even for movies, let alone music? THAT is the primary reason why formats like SACD have no traction. Most people listen to earbuds (crap) or something like Beats headphones in recent years ('subwoofer' headphones).
 
I think improved audio streaming is long overdue. I hope this works out

Streaming eats up monthly bandwidth, subscriptions make content inaccessible once you stop paying - it's little more than throwing money into a toilet and flushing it away.

Still, we're always told the market is about innovation and moving forward, so it's great that improved audio compression technologies are being developed. Better than the movie industry, where they repackage the same slop with new effects (like Star Wars, for which the movie poster pretty much spoiled things first) and tell us it's more creative and awe-inducing (when it's not...)
 
Streaming eats up monthly bandwidth, subscriptions make content inaccessible once you stop paying - it's little more than throwing money into a toilet and flushing it away.

Thankfully, we have a choice - not only that, you don't have to make one choice exclusively, you can have a mix of options.

Bandwidth isn't usually that restricted at home, and even if you don't have an unlimited mobile bandwidth, most streaming services (when paying) have an option to download tracks on wifi and play them offline.

A small, ongoing subscription to access a vast library whenever you want without having to house it is a pretty good deal. It's a great way of discovering new music, and occasionally listening to things that you would never buy.

If certain tracks or albums really mean something special to you, then buy and own them. You can even do that if you happen to subscribe / use a streaming service as well.

I pay for Deezer Elite. Every so often, I buy CDs as well (even ones that are on the streaming sites).
 
Well, I just lost ALL respect I ever had for Meridian. It's clear they are putting money above the TRUTH and it's not hard to explain how you can get 24/192 at CD rates when there is almost ZERO actual audio content above 22kHz (i.e. you can compress the living crap out of it since there's NOTHING THERE but NOISE). 24-bit doesn't actually get used either since in practice even the BEST POSSIBLE RECORDINGS (read best mics in a classical 'quiet' environment) never seem to manage more than about 18-19 bits of usable dynamic range, so once again you can compress the living hell out of the signal (and I don't mean dynamic range compression, but space saving compression) since there's NOTHING THERE beyond the 18th or 19th bit. NOTHING.

Now, what is Meridian doing? Oh yeah, they're SELLING SOMETHING. That means they are predisposed to LIE in order to make sales and MONEY.

24/96 makes sense on the recording end (headroom), but it's VOODOO on the playback end. It doesn't "hurt" to have it except that it BREEDS ABSOLUTE FRACKING IGNORANCE in the general population that doesn't know up from down or left from right when it comes to well, almost anything really (just look at the fracking absolute BS mess there is in politics full of ignorant MORONS that spread lies, deceit and INSULTS over the lies that were told to them to begin with). So, in that sense, it's absolutely caustic to push this crap because it breeds ignorance and that ignorance breeds more ignorance. The people at Meridian know full damn well they're selling snake oil, but they do it anyway because they see MONEY in it and the record industry likes the basic idea because they would LOVE to sell you the same music you already have all over again under the pretense of better quality. And like I said, it'd be easy to actually increase quality (without any 24/96 or 24/192 nonsense) by simply remastering the albums without compressing all the dynamic range out of the album and cranking up the treble/bass, etc. to make it "sound good on the radio" or on cheap speakers that can't produce jack for bass, etc.

Now let's talk about AAC. I have YET to see ONE double-blind study that proves ANYONE can hear the difference between 256kbps AAC and an uncompressed WAV file. I used to talk to one of the people responsible for developing AAC many years ago and they did EXHAUSTIVE testing to make sure the format was utterly transparent. I mean double-blind testing out the wazoo! (MP3 needs something closer to 320kbps to approach this level of transparency, possibly a bit more). People like this would LOVE for you to believe that it's night and day between AAC and CD audio, but I've done my own testing with high-end ribbon speakers using active crossovers and multiple high quality amplifiers. I couldn't hear any difference whatsoever. Yeah, I'm deaf. That's why NO ONE can prove they can hear a difference using ABX testing. :rolleyes:
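For anyone wanting to run their own double-blind test, the statistics behind judging an ABX result are simple binomial arithmetic. A minimal, purely illustrative sketch:

```python
# Probability of getting at least `correct` of `trials` ABX answers right
# by pure guessing (p = 0.5 per trial). A result is usually only taken
# seriously when this p-value falls below ~0.05.
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value for `correct` hits out of `trials`."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12 of 16 correct is unlikely to be chance; 9 of 16 proves nothing.
print(round(abx_p_value(12, 16), 4))  # 0.0384
print(round(abx_p_value(9, 16), 4))   # 0.4018
```

This is why a handful of "I heard it" trials means nothing: you need a run long enough that guessing can't plausibly explain the score.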

When he starts talking about more samples in "time" he's violating Nyquist, which is 100% frequency dependent. In other words, it doesn't help to have more "samples" to capture sound within the Nyquist limits. You only need more samples to capture higher frequencies, which he admits humans cannot hear and don't need one bit. In other words, he spends a lot of time TRYING SO HARD to convince you that higher sampling rates DO SOMETHING. They are 100% IRRELEVANT since you cannot hear the frequencies in question that he wants you to believe you need to capture (i.e. frequency = 1 / time, and thus 5uS = 200kHz, about 10x (more than three octaves) higher than you can hear; so exactly how is it that he thinks we need 5uS to match the human ear when he admits we can't hear above 20kHz, and a 40 year old is lucky to hear 14kHz?). Again, going to smaller sample periods (e.g. 5 microseconds) only increases the higher frequencies you are able to capture (yes, the same ones you CANNOT HEAR and in which most instruments have almost no content anyway). In other words, he's talking out his arse. He tells you the TRUTH (you can't hear above 20kHz) in one breath and then tries to turn around and tell you that you CAN in "time", but time is just the inverse of frequency for god's sake, so he's LYING (and I would testify to that in court).

Honestly, there should be laws against making BS claims that cannot be backed up by double blind testing. Meridian has ONE GOAL here and that is to sell this "codec" to streaming services and make a ton of money on BULLCRAP.
 
All those videos prove is that you don't need 24-bit / 96 or 192khz playback.

Firstly, dynamic range - so we don't really need 24-bit (144db) dynamic range, as analog signals only have a theoretical dynamic range of 120db (20-bit).

But then, we don't even need that, because nobody actually uses 120db dynamic range. Nobody even gets close to using 96db dynamic range. Besides which, the theoretical dynamic range of the recording is only part of the story - it has to be played back, which means having a set volume from your system. And you have to compete with the background noise floor.

You are never, ever going to be able to distinguish even the full 96db dynamic range that 16-bit provides. It's impossible.
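The dynamic range figures being thrown around all follow from the standard "6 dB per bit" rule for linear PCM; a quick sketch:

```python
# Theoretical dynamic range of linear PCM: 20*log10(2**bits),
# i.e. about 6.02 dB per bit of word length.
from math import log10

def pcm_dynamic_range_db(bits):
    return 20 * log10(2 ** bits)

for bits in (16, 20, 24):
    print(bits, round(pcm_dynamic_range_db(bits), 1))
# 16 -> 96.3, 20 -> 120.4, 24 -> 144.5 (dB)
```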

Now, frequency range - above 19khz simply isn't audible, and for most people that threshold is going to be lower. You've also got limitations on the frequency response of speaker systems that simply make any arguments about ultrasonics ridiculous.

The contention about timing information is interesting, but does have a flaw. So, humans have a listening time resolution of 7 microseconds, and only 192khz provides better than that. But if that mattered, then you would have significant differences simply from the precise microsecond that playback started - because there isn't any clock synchronisation between your hearing and the playback equipment.

Those details are more important when it comes to mixing multiple tracks, as the minuscule timing error between playback and hearing isn't necessarily audible, but if you've lost that fidelity in the recording, and then compound it in mixing, the cumulative effect may be significant.

Now, that doesn't mean that there aren't production problems - specifically in filtering ultrasonic content out - that cause the audible frequencies to be smeared (e.g. pre-echoes).

So, what have we got with MQA? Apparently, two stage noise shaping to remove (/move) the ultrasonic content, rather than a traditional filter, to end up with a 16-bit / 44khz signal. What does reversing that to "recreate" a 24-bit / 192 kHz signal give you? Well, basically nothing - because you aren't playing the music loud enough, your speakers aren't producing ultrasonics, and even if they did, you wouldn't hear it.

When it comes to playback, and you want to have the "quality" of 24-bit / 192khz using the same number of bits as a 16/44.1 recording, then just do a damn good transcoding of 24/192 to 16/44.1.
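For the curious, the bit-depth half of such a transcode boils down to requantizing with dither. A minimal, purely illustrative sketch (a real transcode also band-limits and resamples 192kHz down to 44.1kHz, which is omitted here):

```python
# Requantize 24-bit samples to 16-bit with TPDF dither, so quantization
# error becomes benign noise instead of correlated distortion.
import random

def requantize_24_to_16(samples_24bit):
    out = []
    for s in samples_24bit:
        # TPDF dither: sum of two uniform randoms, +/-1 LSB at 16-bit scale
        dither = (random.random() + random.random() - 1.0) * 256
        q = int(round((s + dither) / 256))       # 2**8 = 256: drop 8 bits
        out.append(max(-32768, min(32767, q)))   # clamp to 16-bit range
    return out

# Extremes stay inside the legal 16-bit range after dithering and clamping.
print(all(-32768 <= v <= 32767
          for v in requantize_24_to_16([0, 8_388_607, -8_388_608])))  # True
```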

That may ultimately be the benefit of MQA - improving the production of music, and the transcoding of masters into 16-bit/44.1khz. The playback side of MQA merely comes down to detecting that it was recorded using that process and turning a light on the decoder to give people a warm fuzzy feeling that they are getting something better.
 
When he starts talking about more samples in "time" he's violating Nyquist, which is 100% frequency dependent. In other words, it doesn't help to have more "samples" to capture sound within the Nyquist limits. You only need more samples to capture higher frequencies, which he admits humans cannot hear and don't need one bit. In other words, he spends a lot of time TRYING SO HARD to convince you that higher sampling rates DO SOMETHING. They are 100% IRRELEVANT since you cannot hear the frequencies in question that he wants you to believe you need to capture (i.e. frequency = 1 / time, and thus 5uS = 200kHz, about 10x (more than three octaves) higher than you can hear; so exactly how is it that he thinks we need 5uS to match the human ear when he admits we can't hear above 20kHz, and a 40 year old is lucky to hear 14kHz?). Again, going to smaller sample periods (e.g. 5 microseconds) only increases the higher frequencies you are able to capture (yes, the same ones you CANNOT HEAR and in which most instruments have almost no content anyway). In other words, he's talking out his arse. He tells you the TRUTH (you can't hear above 20kHz) in one breath and then tries to turn around and tell you that you CAN in "time", but time is just the inverse of frequency for god's sake, so he's LYING (and I would testify to that in court).

Honestly, there should be laws against making BS claims that cannot be backed up by double blind testing. Meridian has ONE GOAL here and that is to sell this "codec" to streaming services and make a ton of money on BULLCRAP.

I think it's worse than concentrating on whether or not you can hear audio above 20kHz. Here he is implying, like so many of his type, that there is "timing information" in audio below 20KHz that is not encoded at 44.1KHz. This, IMHO, shows that he doesn't have the first idea what band limited sound waves actually are. He needs to go and watch that Xiph.org video again and focus on the part about square waves.
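To make the band-limited point concrete: a timing shift far smaller than one sample period is plainly encoded in 44.1kHz samples. A quick illustrative check (Python, with a hypothetical 2 microsecond delay on a 1kHz tone):

```python
# A 1 kHz tone delayed by 2 us -- a small fraction of the 22.7 us sample
# period at 44.1 kHz -- produces measurably different sample values, i.e.
# sub-sample timing is not "lost" by 44.1 kHz sampling.
from math import sin, pi

FS = 44_100.0
DELAY = 2e-6                      # 2 us shift, ~1/11 of a sample period
N = 512

undelayed = [sin(2 * pi * 1000 * (n / FS)) for n in range(N)]
delayed   = [sin(2 * pi * 1000 * (n / FS - DELAY)) for n in range(N)]

peak_diff = max(abs(a - b) for a, b in zip(undelayed, delayed))
print(peak_diff > 0.01)  # True: the 2 us shift is clearly encoded
```

The peak sample-value difference is about 2·sin(π·1000·DELAY) ≈ 0.0126 of full scale, so the shift is anything but invisible to the format.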
 
He is correct. There is an issue with timing of a 20kHz signal using a 44.1kHz sample rate. The ADC will cause a 90-degree phase shift. There will be another phase shift caused by the anti-aliasing filter. Is this important in audio? Probably not for 20kHz, but given the logarithmic nature of the frequency response, any phase shift at 2kHz may have an effect on the audio quality.

In other fields of engineering where the signal goes from analog to digital then back to analog, the rule of thumb is that the sample rate has to be greater than 10 times the upper cutoff frequency. In audio this would make the 192kHz sample rate correct.

So what is this argument about? The recording part of the process is done at a higher sample rate. It is only the file that uses the 44.1kHz sample rate. The recording has to use digital signal processing to go down to the 44.1kHz sample rate. The playback also uses a higher sample rate: the DAC uses DSP to add samples before the conversion, which is called oversampling. Most DACs use a rate higher than 192kHz for the conversion. The issue of sample rate for most of the record/playback system was solved a long time ago; higher is better.
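For readers unfamiliar with what "DSP to add samples" looks like, here is a toy 4x upsampler: zero-stuffing plus a tiny triangular FIR, which amounts to linear interpolation. Real DACs use far longer polyphase low-pass filters, so treat this as a sketch of the mechanics only:

```python
# Toy 4x oversampler: insert zeros between samples, then low-pass filter.
# A 7-tap triangular kernel over 4x zero-stuffed data is exactly linear
# interpolation -- a crude stand-in for a real DAC's polyphase FIR.

def upsample_4x_linear(samples):
    kernel = (0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25)
    stuffed = []
    for s in samples:
        stuffed += [s, 0.0, 0.0, 0.0]        # 1) zero-stuff to 4x the rate
    out = []
    for i in range(len(stuffed)):            # 2) centered FIR convolution
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - 3
            if 0 <= j < len(stuffed):
                acc += w * stuffed[j]
        out.append(acc)
    return out

print(upsample_4x_linear([0.0, 1.0]))
# [0.0, 0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25]
```

The new samples ramp smoothly between the originals: the interpolated points add no information, they just move the imaging products up and out of the way of the analog reconstruction filter.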

The only argument for 44.1kHz sample rate is file size. This was important in the 1980s; it is not now. Also DSP is not perfect. There are 2 unnecessary places where DSP occurs in the record playback process. It would be better just to have the 192kHz sample at all stages of the process.
 
He is correct. There is an issue with timing of a 20kHz signal using a 44.1kHz sample rate. The ADC will cause a 90-degree phase shift. There will be another phase shift caused by the anti-aliasing filter. Is this important in audio? Probably not for 20kHz, but given the logarithmic nature of the frequency response, any phase shift at 2kHz may have an effect on the audio quality.

Unless cumulative phase shifts are compounding a loss of data - e.g. a continual loop of playing and resampling the waveform - a phase shift is only going to be noticeable if it differs between channels / speakers.

As any phase shift would ordinarily occur identically in both left and right channels (or all channels of a multi-channel system), then it's not really relevant.

The only argument for 44.1kHz sample rate is file size. This was important in the 1980s; it is not now. Also DSP is not perfect. There are 2 unnecessary places where DSP occurs in the record playback process. It would be better just to have the 192kHz sample at all stages of the process.

Actually, that isn't true. We don't have the same CD physical limitations to deal with, but people haven't really invested in hardware that is capable of playing anything greater than CDs. Streaming sites are reluctant to even offer FLAC / ALAC with its less-than-CD bandwidth, let alone any format that contains more information (even MQA encoded). Storage space is cheap, but still unwieldy, and especially problematic on portable devices (I transcode from ALAC to AAC when I add music to my phone for portable listening).

Also, there are lots of areas that aren't perfect - speakers aren't perfect, rooms aren't perfect. So even if DSPs aren't perfect, implemented correctly with understanding of the environment they are operating in, they can do more to improve sound quality than they harm it.

Far too much of the music made available today at 16-bit / 44khz sounds worse than it could because of mistakes / deliberate choices made during production - far more so than any theoretical format / conversion limitation. Too many streaming / download sites aren't even delivering lossless 16-bit / 44khz, making it even worse than the botched production that was made available to them in the first place.

Hell, we've still got people clinging to provably less accurate formats, partly because CD quality audio isn't being produced correctly.

Instead of banging a drum that would require everybody to invest in a whole bunch of expensive hardware that they don't actually need, how about we just concentrate on getting 16-bit / 44.1khz audio produced and delivered to consumers correctly?
 
He is correct. There is an issue with timing of a 20kHz signal using a 44.1kHz sample rate. The ADC will cause a 90-degree phase shift. There will be another phase shift caused by the anti-aliasing filter. Is this important in audio? Probably not for 20kHz, but given the logarithmic nature of the frequency response, any phase shift at 2kHz may have an effect on the audio quality.

As was pointed out, phase-shift of a single channel (rather than between two or more stereo channels) is inaudible to the human ear and therefore irrelevant, especially in a frequency band that contains little musical information to begin with (the 10kHz-20kHz octave) and in which most people over 40 can only hear the first few notes, if at all.

The LP format is much beloved by many (for some odd reason; I invested in a high-end rig to find out if there was anything to these claims and, more importantly, to transfer albums that STILL don't exist on CD or digital in general), and while its response does go beyond 20kHz, it begins to roll off by 12kHz and contains mostly unusable surface NOISE above 20kHz (yet people go to great lengths to "preserve" that surface noise of a needle dragging through vinyl plastic). Personally, I use iZotope RX to REMOVE most of the surface noise and all the clicks and pops, leaving a recording that is quieter than the original on CD in many cases and without hurting the sound; now THAT is advanced use of DSP, not some snake oil in a bottle.

There is no "magic" to the LP. It's an irrationality based on poorly mastered CDs (yes, sometimes the LP does sound better, but that "better" can be recorded and put back on a CD and sound identical to the LP, so it's clearly the mastering/mix and not the format itself) and the love of being "involved" in the playback process (yes, it's great fun aligning and setting up an LP player, especially for someone like me that does mechanical alignments of industrial automation equipment as part of my real world job). The problem is, despite the admitted "fun factor" of the LP for mechanically/electrically inclined folks like myself, ultimately, once everything is on an even playing field, the CD wins EVERY SINGLE TIME HANDS DOWN. The rest is pure nonsense and some fairytale beliefs by willingly ignorant (in my book that means STUPID) people.

Still, given mastering problems galore in the industry, it's not hard to find superior sounding vinyl for many albums (you can only compress an LP so much, so the worst sounding CDs tend to sound at least a "little" better on LP; sometimes it's night and day as the mastering engineer "sneaks" one past the record label, who doesn't pay attention to the LP format anymore). Sadly, most engineers KNOW BETTER but are forced to put out CRAP because their bosses DEMAND it (it must be LOUD!), and this comes from psychoacoustical studies that show that "louder = better" to most people that only listen casually, particularly on the radio. Look at the audio reviews of Pink Floyd's A Momentary Lapse of Reason. Most think it sounds "awful" (sound quality wise, not content, which is a different issue) compared to earlier analog-only albums like Dark Side of the Moon and Wish You Were Here. It's a fully (save the drums) digitally recorded and mastered album. In fact, it has much more dynamic range than ANY Pink Floyd album ever made (including the one after it, which is compressed to sound louder). All you have to do to "fix" it is turn the volume up in a quiet environment and it's excellent sounding (the LP version is more compressed but has less detail; I've A/B compared them volume matched). Actually, the CD also isn't "normalized" (i.e. rip it, put it in something like Audacity and normalize the levels so that the loudest sound is near the maximum, and you'll find it a lot louder without having to compress a thing; I'd call that a production mistake).
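For reference, the "normalize" step mentioned is just a peak-scaling pass, not compression. A minimal sketch (not Audacity's actual implementation):

```python
# Peak normalization: scale every sample so the loudest peak sits just
# below full scale. Overall level rises; dynamic range is untouched,
# because every sample gets the same gain.

def normalize_peak(samples, target=0.98):
    """Scale float samples (nominal range -1..1) so max |peak| == target."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)
    gain = target / peak
    return [s * gain for s in samples]

quiet = [0.1, -0.25, 0.2, -0.05]            # peaks at only 25% of full scale
loud = normalize_peak(quiet)
print(round(max(abs(s) for s in loud), 2))  # 0.98
```

Every sample is multiplied by the same gain, so the ratios between loud and quiet passages (the dynamic range) are exactly preserved.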

But as soon as people hear a quieter average volume compared to the song before it on the radio or whatever, their immediate impression isn't just "I need to turn up the volume" but rather "that song's sound sucks compared to the louder one". That's an issue with human psychoacoustics where louder = better at a "glance": more detail is revealed as you turn up the volume that was below the room's ambient noise masking level, so one is given the impression that louder = more detailed (as in, compression reveals ALL the details; the problem is they're all at the same relative volume level now, and that sounds bizarre, yet that is what studios WANT). Going to an "HD" format that offers MORE dynamic range makes no sense when it's 100% counter to the studios' desire to GET RID OF DYNAMIC RANGE.

The only argument for 44.1kHz sample rate is file size. This was important in the 1980s; it is not now.

The fact remains that the "problems" of 44.1kHz audio were solved a long time ago with oversampling and similar technologies (Sony 1-bit, waveshaping, etc.). There is no need to reinvent the wheel. Getting studios to remaster their albums for actual high sound quality would do 1000x more good at this point for getting truly better sounding music out there.

I'd be FAR more concerned at this stage that new media players like FireTV and AppleTV 2-4 do NOT output 44.1kHz, PERIOD. They all upsample to 48kHz! That kills things like DTS Music CDs (which are encoded at 44.1kHz; the signal is destroyed if it's not kept lossless and output at exactly 44.1kHz, or you get a pitch increase a la FireTV running KODI). Is it really too much to ask that Google and Apple offer us an output rate that matches 99.999% of the digital music catalog out there? No, they just assume we won't notice the up-sampling and do it anyway to save a few pennies. (The original Apple TV DID output 44.1kHz when asked to and was bit-perfect, as DTS music CDs encoded as Apple Lossless would play without it even knowing.) And yet people claimed that Apple Lossless wasn't "really" lossless, when the DTS test proved it: if even a single random bit were lost, it would have scrambled the encoded signal.
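The pitch error from clocking 44.1kHz data out at 48kHz without proper resampling is easy to quantify:

```python
# Playing a 44.1 kHz stream at a 48 kHz clock without resampling makes
# everything run fast and sharp by the ratio of the two rates.
from math import log2

ratio = 48_000 / 44_100
semitones = 12 * log2(ratio)
print(round(ratio, 4), round(semitones, 2))  # 1.0884 1.47
```

That's nearly a semitone and a half sharp, which is why a mis-clocked 44.1kHz source is immediately obvious to anyone with a musical ear (and why a DTS bitstream, which must survive bit-exactly, is destroyed outright).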

Also DSP is not perfect. There are 2 unnecessary places where DSP occurs in the record playback process. It would be better just to have the 192kHz sample at all stages of the process.

Better for whom? People that don't want to use oversampling? The same effects are still occurring, just out of your hearing range. You're asking people to buy new equipment, new music catalogs and waste more bandwidth just to satisfy your need to say it wasn't oversampled, when the audible content is 100% identical to the human ear. That sounds like a waste of time, resources and money to me.
 
For what it's worth, I'm always amused by an unintentional test of sample rates and sample rate conversion that happened a few years back. A piece of professional Mac software I use, aimed at live performance and theatre sound, contained a bug that meant all audio was downsampled to 44.1KHz and then resampled to the rate of the hardware output. If you had, say, 96KHz files and hardware, then the audio would be sampled down to 44.1 and back up before playout by Apple's default sample rate conversion algorithm. The creators of the software were not aware of this, and nobody the world over in six or so years noticed. If there was such an extreme difference in audio quality as some would make out, then surely somebody would have noticed.
 
The fact remains that the "problems" of 44.1kHz audio were solved a long time ago with oversampling and similar technologies (Sony 1-bit, waveshaping, etc.). There is no need to reinvent the wheel. Getting studios to remaster their albums for actual high sound quality would do 1000x more good at this point for getting truly better sounding music out there.

Better for whom? People that don't want to use oversampling? The same effects are still occurring, just out of your hearing range. You're asking people to buy new equipment, new music catalogs and waste more bandwidth just to satisfy your need to say it wasn't oversampled, when the audible content is 100% identical to the human ear. That sounds like a waste of time, resources and money to me.



I am not asking for anyone to replace anything. I have no issue with the CD format. But if a company or group of companies wants to develop high definition audio, I want to have a choice, and I want others to have a choice, about buying new equipment and music in that format.

I don’t want someone protecting me from the snake oil salesman, especially when I think the arguments that these companies are selling snake oil are suspect.

Some have argued that sample rates higher than 44.1kHz will sound much worse, but they will also argue that oversampling has solved all the problems of 44.1kHz audio; xiph.org is guilty of this. From the view of the part of the DAC that does the conversion, there is no difference between 4x-oversampled 44.1kHz audio and 192kHz audio. The argument that oversampling solves 44.1kHz audio proves their claim that higher sample rates sound worse wrong.
 
I am not asking for anyone to replace anything. I have no issue with the CD format. But if a company or group of companies wants to develop high definition audio, I want to have a choice, and I want others to have a choice, about buying new equipment and music in that format.

I don’t want someone protecting me from the snake oil salesman, especially when I think the arguments that these companies are selling snake oil are suspect.

Some have argued that sample rates higher than 44.1kHz will sound much worse, but they will also argue that oversampling has solved all the problems of 44.1kHz audio; xiph.org is guilty of this. From the view of the part of the DAC that does the conversion, there is no difference between 4x-oversampled 44.1kHz audio and 192kHz audio. The argument that oversampling solves 44.1kHz audio proves their claim that higher sample rates sound worse wrong.

_shakes head in disbelief_

OK, so this gets into politics. I don't care if you want to waste your money on these things, that's your business. That does not make something true however, and most countries have laws against false advertising and misleading customers. I don't want the price of audio hardware to be artificially inflated by bogus claims, or have to work at ridiculous data rates in my business because customers demand this crap.

I've not seen anyone suggest that sample rates higher than 44.1KHz make things sound MUCH worse, just comments that there may be theoretical issues. The Xiph.org video you cite even notes that these issues would be at so low a level in practice as to be inaudible.

The point about oversampling is that it can solve practical problems in the conversion stage. The distribution format is separate. My point above, that lots of top audio professionals did not notice for years that audio was being resampled all the time, just goes to show how this is a non-argument.
 
_shakes head in disbelief_

OK, so this gets into politics. I don't care if you want to waste your money on these things, that's your business. That does not make something true however, and most countries have laws against false advertising and misleading customers. I don't want the price of audio hardware to be artificially inflated by bogus claims, or have to work at ridiculous data rates in my business because customers demand this crap.

I've not seen anyone suggest that sample rates higher than 44.1KHz make things sound MUCH worse, just comments that there may be theoretical issues. The Xiph.org video you cite even notes that these issues would be at so low a level in practice as to be inaudible.

The point about oversampling is that it can solve practical problems in the conversion stage. The distribution format is separate. My point above, that lots of top audio professionals did not notice for years that audio was being resampled all the time, just goes to show how this is a non-argument.

I'm not referring to the video. I'm referring to https://xiph.org/~xiphmont/demo/neil-young.html where these claims have been made. Also, there is a post above that makes these claims. OK, take the word "much" away. The claim that higher sample rates make sound quality worse is common in this debate. If there were consistency with this claim they would also claim that oversampling is wrong; they don't.
 
OK, so the wording in that article is "slightly worse", which is demonstrably true... whatever. Remember that, at least in the case of xiph.org, the argument is that the ultrasonic content causes extra IM distortion in the analogue path, not that the digital signal itself is worse.
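That IM-distortion argument can be sketched numerically. In the Python toy below, two tones above 20kHz (inaudible by themselves) pass through a weakly nonlinear "analogue" stage and an audible difference tone appears. The tone frequencies and the 1% second-order term are arbitrary illustration values, not a model of real playback gear:

```python
import math

fs = 192_000                    # sample rate, Hz
f1, f2 = 24_000.0, 27_000.0     # ultrasonic test tones
n = fs // 8                     # an eighth of a second is plenty

def level_at(signal, freq):
    """Amplitude of one frequency component via direct correlation."""
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / len(signal)

x = [math.sin(2 * math.pi * f1 * i / fs) + math.sin(2 * math.pi * f2 * i / fs)
     for i in range(n)]
y = [s + 0.01 * s * s for s in x]    # weak 2nd-order nonlinearity

print(level_at(x, f2 - f1))   # ~0: no 3 kHz tone in the clean signal
print(level_at(y, f2 - f1))   # ~0.01: a 3 kHz difference tone has appeared
```

The difference tone at f2 - f1 = 3kHz lands right in the most sensitive part of our hearing, which is exactly why the article argues ultrasonics can make things slightly worse on real equipment.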

You are confusing the sample rate of the distribution media (the discussion point here) with oversampling in the converters (a different discussion).
 
I am not asking anyone to replace anything. I have no issue with the CD format. But if a company, or a group of companies, wants to develop high-definition audio, I want to have a choice about buying new equipment and music in that format, and I want others to have that choice too.

Just the fact they want to call it "high definition" should tell you something (i.e. capitalizing on a VIDEO standard name).

I don't want someone protecting me from the snake oil salesman, especially when I think the arguments that these companies are selling snake oil are suspect.

If you would take the time to learn the engineering aspects, you'd realize the only thing suspect is the reasons for pushing "high definition" audio when remastering albums for better quality would be far more beneficial to the end user.

Some have argued that sample rates higher than 44.1kHz will sound much worse

I don't recall a single person making such an argument. Who EVER said it would sound "worse", let alone "much worse"? The point is that, barring any other mastering changes, it would sound IDENTICAL (i.e. no audible change whatsoever, since the differences are outside the limits of human hearing). It's like saying we have improved sound in the 20kHz-80kHz range! Yeah, except you can't hear anything in that range (unlike HD video, where you CAN see sharper video, at least at typical sizes and distances; NOTHING, by comparison, will ever make you able to hear above 20kHz). And no recordings have more than about 100dB of dynamic range, with almost no music recordings exceeding 96dB, so it's a waste of time to have more than 16 bits on playback. Especially since playing back a 20-bit (let alone 24-bit) recording at the volume levels required to "hear" 20 bits of dynamic range would DESTROY YOUR HEARING in a very short amount of time anyway! How is that "better"?
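For anyone checking those dynamic-range numbers, the standard rule of thumb for ideal PCM quantization is roughly 6.02 dB per bit plus 1.76 dB. This is a textbook figure for a full-scale sine against quantization noise, not a spec for any particular converter:

```python
# Ideal PCM dynamic range: ~6.02*N + 1.76 dB for N bits (textbook
# rule of thumb: full-scale sine vs. quantization noise floor).

def pcm_dynamic_range_db(bits):
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(bits, round(pcm_dynamic_range_db(bits), 1))
# 16 bits -> ~98.1 dB, 20 bits -> ~122.2 dB, 24 bits -> ~146.2 dB
```

So 16 bits already covers the ~96dB that the loudest real recordings use, and 24 bits promises headroom no playback chain or pair of ears can use.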
 
I had a hard time sitting through this pseudo-scientific mumbo jumbo. Just a few observations:

- The person does not appear to work for Meridian. He seems to be merely speculating how it works and what it does.

- He yet again brings up the old canard that you can see "stairsteps" in the output signal if you just zoom in close enough. That tells me he doesn't understand how digital audio works.

- After saying that ultrasonic frequencies don't matter directly to the human perception of music, his argument seems to be that auditory temporal resolution somehow does. He never really explains how, but seems to vaguely suggest that either the temporal resolution of the analog output is directly limited by the sampling rate (which would again indicate a lack of understanding digital audio), or that pre-echo caused by steep anti-aliasing filters is the problem MQA tries to address. The latter is finally a real issue. Of course, the trade-offs between high-frequency roll-off, aliasing, phase distortion and ringing have long been understood by engineers, which is why we have techniques such as oversampling. If MQA claims to improve on such techniques, I'd like to know how. I have been unable to find such an explanation anywhere, largely because MQA is "secret". My experience is that audiophile "wonder-technologies" that avoid peer-review by means of secrecy are usually BS.

- In the second video the author seems to say that MQA works by encoding ultrasonic sound components into the least significant bits of a 16-bit audio signal, which would obviously destroy the original bits, or in other words introduce noise and reduce dynamic range. Wouldn't that imply a tacit admission that 16 bits are already more than enough?
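That last objection is easy to sketch. The bit-burying scheme below is invented for illustration (it is NOT how MQA actually packs its data), but it shows how overwriting least-significant bits with payload data raises the noise floor by roughly 6 dB per buried bit:

```python
import math, random

# Overwrite the k lowest bits of a 16-bit sine with random "payload"
# bits and measure the resulting SNR. Purely illustrative values;
# not MQA's actual encoding.

random.seed(0)
fs, f, n = 48_000, 1_000.0, 4_800
full_scale = 32767

clean = [round(full_scale * 0.5 * math.sin(2 * math.pi * f * i / fs))
         for i in range(n)]

def bury_bits(samples, k):
    """Replace each sample's k least-significant bits with random data."""
    return [(s & ~((1 << k) - 1)) | random.getrandbits(k) for s in samples]

def snr_db(reference, degraded):
    sig = sum(s * s for s in reference) / len(reference)
    err = sum((a - b) ** 2 for a, b in zip(reference, degraded)) / len(reference)
    return 10 * math.log10(sig / err)

for k in (1, 2, 4):
    print(k, round(snr_db(clean, bury_bits(clean, k)), 1))  # SNR drops as k grows
```

Each buried bit behaves like an extra bit of quantization noise, which is exactly the "tacit admission" problem: if you can afford to throw those bits away, they were not audibly doing anything in the first place.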
 
Streaming eats up monthly bandwidth, and subscriptions make content inaccessible once you stop paying; it's little more than throwing money into a toilet and flushing it away.

Still, we're always told the market is about innovation and moving forward, so it's great that improved audio compression technologies are being developed. That's better than the movie industry, where they repackage the same slop with new effects (like Star Wars, for which the movie poster pretty much spoiled things first) and tell us it's more creative and awe-inducing (when it's not...).

I should have clarified: I don't follow the streaming subscription model for my own music; I suppose I was referring to download audio quality. Not that bandwidth is relevant with unlimited data (speaking only for myself), but I realize that could be an issue for some.
 

Meridian is just another criminal company that deceives and scams clueless, rich audiophiles.

They wanna sell you an MQA-capable CD player.
For £11,000.
 
Likely linked to the new all-digital Lightning-port headphones that were rumored. Apple is extremely good at evaluating the existing technology within its products, assessing its relevance in the present day, and then sprinkling some Apple magic on it to raise the standard.
 
As an Amazon Associate, MacRumors earns a commission from qualifying purchases made through links in this post.