It's maddening, isn't it - coming from the world that actually makes and mixes music to people who only consume it and are completely dumbfounded by snake oil and things they don't understand. Audiophiles will forever be the suckers of the tech world that companies continually exploit. I've seen them come out with some absolutely insane stuff, and the irony is they think they're clued up on it and really knowledgeable!

What I learned, though it took me way too long, is that people don't particularly come to a forum to be educated.
(Well, not about this kind of stuff.)
They come to have their existing beliefs affirmed by an online committee.
Which is why I can't be bothered trying to explain any more.

I only posted here to give you moral support, to prove you weren't entirely rowing against the tide and that at least one more person was rowing with you. Normally I give these threads a wide berth.

Thousands of keystrokes I've wasted in the past on people who still go away believing the exact same sh..e, because it's just words on a page typed by an anonymous guy they trust less than the person trying to sell them 'Audiophile Grade' Cat 6 LAN cable at fifty quid a yard because it 'sounds better with wider sound stage, better defined bass and crisper highs'.
Sometimes it's better to just not bother.
 
Indeed - and you can hear the difference between either of them!

The only need for ultra high res is during the recording stage. It lets us run plugins at higher sample rates, which - without writing a scientific essay - avoids certain mathematical errors (aliasing) in the audio; however, you can still do everything at 44.1kHz and have the plugin upsample internally to get the same benefit. When it comes to listening back, 44.1kHz already captures frequencies beyond what the human ear can hear, and bit depth doesn't have any influence on sound quality, just dynamic range.
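
To illustrate the point (a rough Python sketch with made-up numbers, not how any particular plugin actually does it): run a nonlinear process natively at 44.1kHz and its harmonics alias back into the audible band; run it 4x oversampled and filter back down and they don't.

```python
# Hedged sketch: why plugins oversample internally. A nonlinear process
# (here a crude tanh "saturation") creates harmonics above Nyquist; doing
# it at a higher internal rate and downsampling afterwards keeps those
# harmonics from folding back into the audible band.
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 5_000 * t)            # loud 5 kHz tone

# Processed directly at 44.1 kHz: harmonics above 22.05 kHz alias.
y_naive = np.tanh(3 * x)

# Processed at 4x the rate, then brought back down: the harmonics are
# created where there is room for them, and the downsampling filter
# removes them before they can fold back.
x_up = resample_poly(x, 4, 1)
y_os = resample_poly(np.tanh(3 * x_up), 1, 4)

def aliased_fraction(y, fs):
    """Rough measure: spectral energy NOT at harmonics of the 5 kHz tone."""
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y)))) ** 2
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    on_harmonic = (np.isclose(freqs % 5_000, 0, atol=5)
                   | np.isclose(freqs % 5_000, 5_000, atol=5))
    return spec[~on_harmonic].sum() / spec.sum()

print("aliased energy, processed at 44.1k:", aliased_fraction(y_naive, fs))
print("aliased energy, 4x oversampled:    ", aliased_fraction(y_os, fs))
```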

In **theory** you could have a ridiculously low noise floor on a classical record at 24-bit, with insane dynamic range, but in reality it wouldn't make much difference. 24-bit is great for recording at, as you don't need to worry about the levels (and indeed some mixers employ 32-bit and even 64-bit floating point, which basically means it's impossible to overload and distort, or to have a signal that's too quiet). Back in the analog days you were restricted to a lot less dynamic range on tape than even 16-bit could offer you (and a hugely increased noise floor which used most of it up).

16-bit has a 96dB dynamic range.
24-bit has a 144dB dynamic range (which will never be fully taken advantage of; maybe 115dB at MOST in the rarest of performances and recordings).
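
Those figures are just the usual ~6.02dB-per-bit rule of thumb for ideal PCM quantisation; quick sanity check in Python:

```python
# Dynamic range of an ideal PCM quantizer: 20*log10(2), about 6.02 dB per bit.
import math

for bits in (16, 24):
    print(f"{bits}-bit: {20 * math.log10(2 ** bits):.1f} dB")
# 16-bit: 96.3 dB
# 24-bit: 144.5 dB
```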

CD has a bit depth of 16 and a sample rate of 44,100Hz. The Nyquist-Shannon theorem explains why a sample rate of 44,100 times per second, double that of 22,050Hz, is plenty. With R-2R ladder or Sign-Magnitude DACs that’s usually true, but it’s the digital filter in a DAC that makes the difference. (In my Sign-Magnitude DAC, 24/96 is plenty, and only due to the digital filter does 24/96 sound 3-5% better than 16/44.1.)
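
A rough illustration of the Nyquist point (a Python sketch with a made-up test signal; it says nothing about DAC filters or implementations): anything whose content stays below ~20kHz survives the trip down to 44.1kHz and back untouched.

```python
# The test signal contains only audible-band tones, so FFT resampling
# between 192 kHz and 44.1 kHz is essentially exact for it.
import numpy as np
from scipy.signal import resample

fs_hi, fs_cd = 192_000, 44_100
t = np.arange(fs_hi) / fs_hi                     # exactly one second
rng = np.random.default_rng(0)
freqs = rng.integers(50, 20_000, size=40)        # everything below 20 kHz
x_hi = sum(np.sin(2 * np.pi * f * t) for f in freqs) / 40

x_cd = resample(x_hi, fs_cd)                     # down to CD rate
x_back = resample(x_cd, fs_hi)                   # and back up to 192 kHz

print("max reconstruction error:", np.max(np.abs(x_back - x_hi)))
# Effectively zero (floating-point round-off): nothing in the audible band
# was lost by storing the signal at 44.1 kHz.
```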

99% of DACs today employ Delta-Sigma technology which is garbage. That’s why they can handle all these high-res formats like 24/192, DSD, DXD, etc., which they require to sound as good as 16/44.1 on older and better (or built today for extremely high prices) R-2R and Sign-Magnitude DACs.
 
I would be interested in a reference to the specification of the test you have performed.
I'm not sure what more you're asking beyond what I said, but if you scroll up you'll see a post with descriptions of my system (and the amp I built) and pictures of it, as well as my two best headphones. The pair with the chrome grille and bloodwood ear cups is my best, which again is referenced in that post above. I won't name the software because it's not a legally purchased copy and was given to me by a fellow audio vendor.
 
I can tell the difference pretty clearly between flac and wav on any half decent listening source.
The source recording matters a lot though.
A thin, **** recording shows almost no improvement even from 128 mp3 to flac.

You could blind A/B me as scientifically as you want, just as I have done to myself testing different capacitors, tubes, transformers, etc. when building, upgrading, and swapping components many times.
Yes. I'm really not normal but I do exist.

If you are serious about hearing the difference between, let’s say a 16/44.1 FLAC and the same 16/44.1 WAV, you’re really hearing farts because you sound like a donkey-hole.
 
Will see how much they are improved when mine arrives. Battery and sound improvements will be what people want. I would hope after 3 years there's a good jump in sound.
 
I’m not sure what more you’re asking beyond what I said

You mentioned you performed some tests and that you were scored on accuracy in those tests. If they are scientifically validated tests, they should have a published methodology that details exactly what they are testing, how, why the results are relevant, and how to assess the statistical significance of the results.

I'm asking if you have a reference to said methodology's documentation since I'm interested in it.
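
For reference, the standard way to quantify significance for a forced-choice blind test is a one-sided binomial test against guessing; here's a minimal Python sketch (the trial counts are examples, not taken from any particular product):

```python
# P-value for a blind ABX run: probability of doing at least this well by
# pure guessing (p = 0.5 per trial).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(at least `correct` right out of `trials` when guessing)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for correct, trials in [(12, 16), (14, 16), (16, 16)]:
    print(f"{correct}/{trials} correct -> p = {abx_p_value(correct, trials):.5f}")
# 12/16 -> ~0.038, 14/16 -> ~0.002, 16/16 -> ~0.000015
```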
 
As far as frequency response from 20Hz to 20kHz goes: a compressed MP3 @ 320kbps vs. a lossless audio file is used, and you are given a 20-band EQ to correct what you perceive to be missing in the compressed MP3, after first hearing a 15 second clip of the MP3 and then the same 15 second clip of the lossless file, played three times in a row (and likewise a 30 second clip and a 60 second clip, each pair played three times in a row).

On the 15 second clip I scored 98.3% accuracy of 20-band EQ correction for what I found lacking in the MP3; on the 30 second clip I scored 96.8%; on the 60 second clip I scored 95.4%. (Audio memory is typically extremely fleeting, where most people can't hear a difference after a short or long amount of time because lossless audio has a range of 19,980 different frequencies sampled 44,100 times per second.) That is the reason the accuracy is highest on the shortest clip.

The average scores on this particular software and hardware (designed by professional audiologists working with "audiophiles"; I was able to get my hands on it through a fellow audio company, and because you need a USB key to run it and my version was hacked, I can't say much about it) range from an average of -12.8% to an average of 7.2%. It is mainly purchased by audiophiles, though many audiologists use it as well, and they don't come close to using the type of high-end equipment I'm using. But I've been listening to high-end equipment since 1993 and trained my brain's tone maps over the course of a two-year period to adjust to flat, neutral sound.

But this isn't how MP3 compression works - it doesn't remove a specific frequency band that is audible. So this is a bizarre test you've taken part in.

All these percentages and jargon sound very audiophile and not grounded in science or coming from an audio engineer's perspective. Having an ear that can hear 0.3dB changes is not impressive at all - it's quite normal.

If you just null a 320kbps MP3 against its lossless equivalent you'll hear and see the missing data. It's right across the frequency spectrum; you couldn't cut it all out with any kind of EQ frequency band.

To prove you can hear the difference, all you need to do is a blind A/B/X like that test provider offers, and you tell us which is X. Simple.
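
For anyone who wants to try the null part themselves, here's a minimal Python sketch (file names are placeholders; it assumes the MP3 has already been decoded to WAV at the same sample rate and that encoder delay has been trimmed so the two files line up sample-for-sample):

```python
# Null test: subtract the decoded MP3 from the lossless original and look
# at (or listen to) what's left.
import numpy as np
import soundfile as sf

lossless, sr = sf.read("track_lossless.wav", dtype="float64")
from_mp3, _ = sf.read("track_320kbps_decoded.wav", dtype="float64")

n = min(len(lossless), len(from_mp3))
residual = lossless[:n] - from_mp3[:n]           # what the encoder threw away

rms = np.sqrt(np.mean(residual ** 2))
print(f"residual level: {20 * np.log10(rms + 1e-12):.1f} dBFS")
sf.write("residual.wav", residual, sr)           # listen to the difference itself
```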
 
Though Apple for some reason claims the APM are not capable of lossless audio using a wired cable, that is somewhat incorrect and misleading. If you take lossless audio to mean absolute bit-perfect reproduction of the original signal, then no, they are not capable of playing lossless audio, because the signal goes through multiple digital to analog conversion stages. If you take lossless to mean not encoded using lossy compression, then they ARE capable of playing lossless audio. Nothing is lossy-compressed when using the APM via the 3.5mm cable.

Lossless alone isn't enough for some of the audiophiles round here though - Hi-Res lossless or bust apparently. ;)

Personally, 16-bit uncompressed CD quality is perfectly adequate for me and was for the longest time considered "lossless" (1:1 CD rip).

As more stuff is remastered and newer music (possibly) optimized for high-res playback, maybe it will make a teeny tiny bit of difference (on the right equipment - *not AirPods*).

I'd take a stab in the dark that most "high-res" streaming music currently available is simply upsampled from the CD rip, probably making it sound worse than the original format it was mastered for - unless labels are really going to submit individual masters for all these various lossless tiers? If it's simply upsampled/downsampled, something's gotta give, I would think.

At the end of the day most people are gonna be listening to YouTube and Spotify on the go anyhow. AirPods/Max are "decent", convenient low/mid-range priced headphones intended for the mass market, an area where golden-eared audiophiles are always going to be disappointed anyhow.
 
Lossless alone isn't enough for some of the audiophiles round here though - Hi-Res lossless or bust apparently. ;)

Personally, 16-bit uncompressed CD quality is perfectly adequate for me and was for the longest time considered "lossless" (1:1 CD rip).

As more stuff is remastered and newer music (possibly) optimized for high-res playback, maybe it will make a teeny tiny bit of difference (on the right equipment - *not AirPods*).

I'd take a stab in the dark that most "high-res" streaming music currently available is simply upsampled from the CD rip, probably making it sound worse than the original format it was mastered for - unless labels are really going to submit individual masters for all these various lossless tiers? If it's simply upsampled/downsampled, something's gotta give, I would think.

At the end of the day most people are gonna be listening to YouTube and Spotify on the go anyhow. AirPods/Max are "decent", convenient low/mid-range priced headphones intended for the mass market, an area where golden-eared audiophiles are always going to be disappointed anyhow.


There are definitely different levels of audiophilia, at some point you jump the shark and start worrying about things like the “audiophile grade cat6 cables” and CD shavers mentioned upthread - that’s the point where I no longer wish to associate.


Unfortunately there is so much snake oil and uneducated nonsense in this market that people already not in the audiophile realm are completely repulsed and then feel the need to invalidate ANYTHING that might lead to better sound, even if it has a basis in scientific reality. The snake oil nonsense gives the entire market and anyone hoping to achieve better sound a bad reputation. But there are many sensible, smart (and dare I say, “real”) audiophiles who actually have an understanding of how things work and gladly ignore that junk. The non-audiophiles spend more time thinking about audiophile grade CAT6 cables and CD shavers than we do. I don’t waste cycles of my meat-CPU on such nonsense.


I still buy CDs and maintain a large library of EAC rips, because CD is still the best sounding format that a lot of projects will end up on, particularly if they are from the golden age of the CD - say, early 80’s to early 2010’s. I also love vinyl, I especially love tape in the rare instances it’s available - I’m pretty format agnostic. I just want whatever will get me closest to the original master.



I wouldn’t say the majority of the Hi-res stuff on DSPs is fake or upsampled, though some of it certainly is. Not to mention the mastering style on some new remasters is frequently very bad compared to older releases, regardless of the sampling rate any of them may have been performed at. It’s hard to do a real apples to apples comparison when there are so many factors at play.

Nowadays the labels tend to prepare and submit one Hi-res master to the DSPs and all other formats are converted or downsampled from that. To varying degrees of quality, might I add. So for newer stuff I will always download the highest resolution available for my library, I don’t see the point in getting downsampled versions if that is not what was actually worked on or what left the studio. Especially when you don’t know how well that downsampling is performed. For older stuff that is 16/44.1 only, or even newer stuff for that matter - fine by me! That’s what the master format is so there’s nothing to be upset about. High fidelity isn’t about making a recording something it’s not, it’s about respecting what the recording actually is.
 
I worry about the people who claim they can hear a difference between a 16/44 FLAC and a 16/44 WAV.
I worry that their ignorance (EDIT - no, not their ignorance: their steadfast refusal to be educated) makes them gullible easy targets.
FLAC is nothing but a lossless compression algorithm. Like ZIP tailored for audio.
Uncompress the FLAC, shove it in a DAW with the original WAV, invert the phase of one of them and you'll get total silence, because they will null each other to zero, like -1+1. It's literally that level of maths.
Please at least let that sink in, even if you won't buy into the truth that you don't need hi-res audio (and mostly don't even need lossless).
 
I worry about the people who claim they can hear a difference between a 16/44 FLAC and a 16/44 WAV.
I worry that their ignorance (EDIT - no, not their ignorance: their steadfast refusal to be educated) makes them gullible easy targets.
FLAC is nothing but a lossless compression algorithm. Like ZIP tailored for audio.
Uncompress the FLAC, shove it in a DAW with the original WAV, invert the phase of one of them and you'll get total silence, because they will null each other to zero, like -1+1. It's literally that level of maths.
Please at least let that sink in, even if you won't buy into the truth that you don't need hi-res audio (and mostly don't even need lossless).
I strongly disagree with your stance on the necessity of hi-res, or lossless audio at any resolution, but you at least do have an understanding of how FLAC works. A null test would totally work but even simpler - generate an MD5 checksum.
At “standard resolution” there are 44100 16 bit samples every second, and if you get matching checksums between 2 files you know as a pure fact every single sample across the entire duration of the file is bit-for-bit identical.
I have come across the “WAV is better than FLAC” mindset occasionally and I have no hesitation to remind them that they may be mentally ill.
FLAC is better than WAV, or ALAC for that matter, for a variety of entirely valid reasons. Tagging, compression efficiency, data integrity… but sound quality is not a differentiating factor between any of these formats.
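
For the curious, the checksum idea is a few lines of Python (file names are placeholders; this uses the soundfile library, but any decoder that gives you the raw PCM works the same way):

```python
# Decode both files to PCM samples and hash them. Matching digests mean
# every sample is bit-for-bit identical, so there is nothing left for the
# ear to distinguish.
import hashlib
import soundfile as sf

def pcm_md5(path: str) -> str:
    data, _ = sf.read(path, dtype="int16")       # the decoded 16-bit samples
    return hashlib.md5(data.tobytes()).hexdigest()

print(pcm_md5("album_track.wav"))
print(pcm_md5("album_track.flac"))
# Same digest: identical audio payload, only the container differs.
```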
 
I strongly disagree with your stance on the necessity of hi-res, or lossless audio at any resolution, but you at least do have an understanding of how FLAC works. A null test would totally work but even simpler - generate an MD5 checksum.
The thing with lossy and hi-res, I can prove at a technical level that they are different from a 16/44 file.
So while I do believe that hi-res files are completely unnecessary, and that mostly, 320kbit/sec lossy files are audibly indistinguishable from a lossless file, I can't in all honesty sit here and tell you that everyone who thinks they can hear the difference 100% of the time is kidding themselves, because the technical part of my brain knows the differences exist. Which is why generally I do keep out of those arguments.

But when it drops to the level of pure science fiction, like the audio quality of FLAC vs ALAC vs WAV, I do find myself worrying if we're going to 'make it' as a species.

EDIT: corrected a typo.
 
The thing with lossy and hi-res, I can prove at a technical level that they are different from a 16/44 file.
So while I do believe that hi-res files are completely unnecessary, and that mostly, 320kbit/sec lossy files are audibly indistinguishable from a lossless file, I can't in all honesty sit here and tell you that everyone who thinks they can hear the difference 100% of the time is kidding themselves, because the technical part of my brain knows the differences exist. Which is why generally I do keep out of those arguments.

But when it drops to the level of pure science fiction, like the audio quality of FLAC vs ALAC vs WAV, I do find myself worrying if we're going to 'make it' as a species.

IMHO the basic issue is the reasoning as to why there should be some audible difference between files.
  • Lossy vs. Lossless? There are reasons to argue that there might be audible differences. Tests show otherwise at good bitrates for good lossy compression algorithms, but at least the theoretical basis supporting such a hypothesis is reasonable.
  • 44.1kHz vs "hi-res"? For what reason might there be audible differences, excluding artifacts introduced by the ultrasonic components, which should be inaudible by definition?
  • FLAC vs. WAV? Again, for what reason might there be audible differences?
The first point has at least some theoretical basis as to why there might be audible differences. I still fail to see any theoretical basis that supports audible differences in the latter points, though.
 
IMHO the basic issue is the reasoning as to why there should be some audible difference between files.
  • Lossy vs. Lossless? There are reasons to argue that there might be audible differences. Tests show otherwise at good bitrates for good lossy compression algorithms, but at least the theoretical basis supporting such a hypothesis is reasonable.
  • 44.1kHz vs "hi-res"? For what reason might there be audible differences, excluding artifacts introduced by the ultrasonic components, which should be inaudible by definition?
  • FLAC vs. WAV? Again, for what reason might there be audible differences?
The first point has at least some theoretical basis as to why there might be audible differences. I still fail to see any theoretical basis that supports audible differences in the latter points, though.


As to your second point…

No downsampling process is transparent. There are different methods, at different levels of precision, and none will output a completely identical result at lower sample rates and bit-depths relative to the input data, as originally recorded. Sure, you can say the difference may be difficult to hear, especially for inexperienced listeners, but you can’t say the difference doesn’t exist. Your third point is the only instance where there truly is no difference in what is passed to the converter.

It’s also worth considering that there are very few “bit-perfect” playback systems. All manner of signal processing is employed on playback, oversampling and filtering are incredibly common at the conversion stage, and starting off with a higher resolution allows the DSP to be carried out at a higher precision. Something as simple as altering the volume is a destructive process, and much less consequential using 24 bit math than 16 bit math.
Simple test - take a 16 bit file, lower the volume by 9 dB, raise it back up +9 dB again - back to the original volume, right? But listen to how much it sounds like the original file. Now do the same with a 24 bit file.
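
Here's a hedged little simulation of that exact experiment (Python, no dither, an illustrative 1kHz tone; real DAWs dither so the exact numbers shift, but the shape of the result doesn't):

```python
# The attenuated signal has to be re-quantized to the file's bit depth,
# and that rounding error is what remains when the gain is restored.
import numpy as np

def round_trip_error_db(bits: int, gain_db: float = -9.0) -> float:
    fs = 44_100
    t = np.arange(fs) / fs
    x = 0.5 * np.sin(2 * np.pi * 1_000 * t)      # 1 kHz tone at -6 dBFS

    scale = 2 ** (bits - 1)
    q = lambda s: np.round(s * scale) / scale    # quantize to `bits`

    original = q(x)
    turned_down = q(original * 10 ** (gain_db / 20))   # stored 9 dB lower
    restored = turned_down * 10 ** (-gain_db / 20)     # gain brought back up

    err = restored - original
    return 20 * np.log10(np.sqrt(np.mean(err ** 2)))

print("16-bit round-trip error:", round_trip_error_db(16), "dBFS")
print("24-bit round-trip error:", round_trip_error_db(24), "dBFS")
# The 24-bit error floor sits roughly 48 dB lower (8 extra bits x ~6 dB each).
```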
 
No downsampling process is transparent. There are different methods, at different levels of precision, and none will output a completely identical result at lower sample rates and bit-depths relative to the input data, as originally recorded. Sure, you can say the difference may be difficult to hear, especially for inexperienced listeners, but you can’t say the difference doesn’t exist.

The resulting signal will not be identical, but that still does not provide a theoretical reason to expect the different input to have audible differences compared to the original.

As an example, downsampling from e.g. 96kHz to 44.1kHz only affects ultrasonic components. The ultrasonic components of the signal are by definition inaudible. Technically the signal will be different, but the components in the hearing range are identical to the original, so there is no theoretical reason supporting being able to discern any difference.
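
If it helps, here's a small Python sketch of exactly that (illustrative tones; scipy's polyphase resampler stands in for whatever a mastering engineer would actually use): the audible component survives the 96kHz to 44.1kHz conversion at the same level, and the ultrasonic one simply disappears.

```python
# Downsample a 96 kHz signal containing an audible tone and an ultrasonic
# tone to 44.1 kHz, then compare levels.
import numpy as np
from scipy.signal import resample_poly

fs_hi, fs_cd = 96_000, 44_100
t = np.arange(fs_hi) / fs_hi
audible = np.sin(2 * np.pi * 10_000 * t)            # 10 kHz: well within hearing
ultrasonic = 0.5 * np.sin(2 * np.pi * 30_000 * t)   # 30 kHz: inaudible by definition

x_hi = audible + ultrasonic
x_cd = resample_poly(x_hi, 147, 320)                # 44100 / 96000 = 147 / 320

def tone_level_db(x, fs, f0):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return 20 * np.log10(spec[np.argmin(np.abs(freqs - f0))] + 1e-15)

print("10 kHz level at 96 kHz:  ", tone_level_db(x_hi, fs_hi, 10_000))
print("10 kHz level at 44.1 kHz:", tone_level_db(x_cd, fs_cd, 10_000))
print("30 kHz level at 96 kHz:  ", tone_level_db(x_hi, fs_hi, 30_000))
# The 30 kHz component cannot be represented at 44.1 kHz at all; the
# audible 10 kHz component comes through at the same level.
```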

It’s also worth considering that there are very few “bit-perfect” playback systems. All manner of signal processing is employed on playback, oversampling and filtering are incredibly common at the conversion stage, and starting off with a higher resolution allows the DSP to be carried out at a higher precision. Something as simple as altering the volume is a destructive process, and much less consequential using 24 bit math than 16 bit math.

That's also true, but having "higher resolution" audio is not always helpful and can actually be problematic in itself:

192kHz digital music files offer no benefits. They're not quite neutral either; practical fidelity is slightly worse. The ultrasonics are a liability during playback.

Neither audio transducers nor power amplifiers are free of distortion, and distortion tends to increase rapidly at the lowest and highest frequencies. If the same transducer reproduces ultrasonics along with audible content, any nonlinearity will shift some of the ultrasonic content down into the audible range as an uncontrolled spray of intermodulation distortion products covering the entire audible spectrum. Nonlinearity in a power amplifier will produce the same effect. The effect is very slight, but listening tests have confirmed that both effects can be audible.

The article provides sample files to test an audio system for such distortions.
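
The mechanism the article describes is easy to sketch numerically (Python, with a deliberately exaggerated second-order nonlinearity and made-up tone frequencies): two tones you can't hear intermodulate into one you can.

```python
# Two purely ultrasonic tones through a mildly nonlinear "amplifier"
# produce a difference-frequency product squarely in the audible band.
import numpy as np

fs = 192_000
t = np.arange(fs) / fs
ultrasonics = (0.5 * np.sin(2 * np.pi * 26_000 * t)
               + 0.5 * np.sin(2 * np.pi * 33_000 * t))

# Weakly nonlinear transfer curve (2nd-order term), standing in for any
# amplifier or driver that isn't perfectly linear.
output = ultrasonics + 0.05 * ultrasonics ** 2

spec = np.abs(np.fft.rfft(output)) / len(output)
freqs = np.fft.rfftfreq(len(output), 1 / fs)

for f0 in (7_000, 26_000, 33_000, 59_000):
    level = 20 * np.log10(spec[np.argmin(np.abs(freqs - f0))] + 1e-15)
    print(f"{f0 / 1000:5.1f} kHz: {level:6.1f} dB")
# 33 kHz - 26 kHz = 7 kHz: an audible product created entirely by
# inaudible input.
```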
 
The resulting signal will not be identical, but that still does not provide a theoretical reason to expect the different input to have audible differences compared to the original.

As an example, downsampling from e.g. 96kHz to 44.1kHz only affects ultrasonic components. The ultrasonic components of the signal are by definition inaudible. Technically the signal will be different, but the components in the hearing range are identical to the original, so there is no theoretical reason supporting being able to discern any difference.



That's also true, but having "higher resolution" audio is not always helpful and can actually be problematic in itself:



The article provides sample files to test an audio system for such distortions.


If there is any half-decent amplifier or transducer that is so badly affected by ultrasonics, I haven't heard it. Lots of gear is regularly fed these types of signals and still produces ultra-low IMD figures.

Take for instance tape machines, which generate low levels of noise across the entire spectrum but also record and reproduce ultrasonic bias frequencies at all times, as the very basis of being able to record a remotely linear-sounding frequency response. Bias has been used in every tape recorder since it was discovered to be a necessity back in the 1940s.

The bias frequency can be found in many 24/192 digital transfers of tape recordings, and that is a signal that would have always been present coming off the tape heads, head preamp, power amplifiers, going through various outboard equipment, equalizers, limiters, cutting amplifiers and cutterheads, other tape machines… you name it. All forms of electronics that have been manufactured to very excellent IMD specs over the years.

Also worth noting that some of this equipment was deliberately fed signals injected with significant ultrasonic content on a regular basis, like CD-4 FM-modulated "discrete quad" back in the 70s, or direct metal mastered discs, which started gaining some popularity in the 80s and are still commonly cut in some parts of Europe today. Both are styles of records where ultrasonics were present through every piece of the mastering chain, and both were commonly played back by millions of people feeding those ultrasonics into their own playback systems - to no ill effect. You can go right now and play one of those discs and measure the ultrasonic content coming off them.


Bottom line, if there is some sort of amplifier circuit design out there that is so negatively impacted by ultrasonic content to make even a measurable difference let alone an audible difference… it might be time to put that amp out on the curb. If you are driving an ultra-high efficiency low wattage system with horns or something, you may have no headroom for a lot of things, ultrasonics included. But that is a very deliberate design choice.
 
Simple test - take a 16 bit file, lower the volume by 9 dB, raise it back up +9 dB again - back to the original volume, right? But listen to how much it sounds like the original file. Now do the same with a 24 bit file.

The bit depth determines the dynamic range which the signal can represent. Increasing the volume does not affect the dynamic range of the signal. I'm of course talking about end-user playback, not sound engineering during production.

To be clear, during sound engineering higher bit depths or frequencies are not only useful but often outright necessary. This has no bearing on what is useful or necessary for end-user playback.
 
Bottom line, if there is some sort of amplifier circuit design out there that is so negatively impacted by ultrasonic content to make even a measurable difference let alone an audible difference… it might be time to put that amp out on the curb. If you are driving an ultra-high efficiency low wattage system with horns or something, you may have no headroom for a lot of things, ultrasonics included. But that is a very deliberate design choice.

I suggest you read the whole article, because it does address ways to avoid the extra distortion (citing the article, emphasis mine):
  1. A dedicated ultrasonic-only speaker, amplifier, and crossover stage to separate and independently reproduce the ultrasonics you can't hear, just so they don't mess up the sounds you can.
  2. Amplifiers and transducers designed for wider frequency reproduction, so ultrasonics don't cause audible intermodulation. Given equal expense and complexity, this additional frequency range must come at the cost of some performance reduction in the audible portion of the spectrum.
  3. Speakers and amplifiers carefully designed not to reproduce ultrasonics anyway.
  4. Not encoding such a wide frequency range to begin with. You can't and won't have ultrasonic intermodulation distortion in the audible band if there's no ultrasonic content.
 
The bit depth determines the dynamic range which the signal can represent. Increasing the volume does not affect the dynamic range of the signal. I'm of course talking about end-user playback, not sound engineering during production.

To be clear, during sound engineering higher bit depths or frequencies are not only useful but often outright necessary. This has no bearing on what is useful or necessary for end-user playback.
As I mentioned previously, there is all sorts of DSP occurring on most playback systems. So whether you are working in a professional DAW with 100 plug-ins running, or you are simply listening on your iPhone and have the EQ enabled, there is still a benefit to using higher precision math in the processing.
 
As I mentioned previously, there is all sorts of DSP occurring on most playback systems. So whether you are working in a professional DAW with 100 plug-ins running, or you are simply listening on your iPhone and have the EQ enabled, there is still a benefit to using higher precision math in the processing.

For the processing involved in typical playback scenarios, I doubt it. Of course, if you have some sort of more complex scenario, you are basically doing your own sound-engineering processing, so it makes sense that you don't use the format tailored for end-user playback. I agree that in that case there are benefits to performing the processing from a higher-resolution base.
 
99% of DACs today employ Delta-Sigma technology which is garbage. That’s why they can handle all these high-res formats like 24/192, DSD, DXD, etc., which they require to sound as good as 16/44.1 on older and better (or built today for extremely high prices) R-2R and Sign-Magnitude DACs.

Say that to the likes of Chord Electronics, dCS and Lapizator. It's not the method of conversion, rather the implementation of the DAC as a whole. Anyway, Apple Music lossless sucks since it does not support Roon for HQPlayer oversampling.
 
44.1kHz vs "hi-res"? For what reason there might be audible differences, excluding artifacts introduced by the supersonic components, which should be inaudible by definition instead?

If you're using a DAC that does NOT oversample, e.g. a NOS DAC, there will be subtle differences between the two formats, particularly in the audible treble region (10-20 kHz).
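
One concrete, measurable mechanism behind that (my own sketch, not a claim about any specific DAC): a non-oversampling DAC's zero-order hold rolls the top of the band off along a sinc curve, and how much droop lands in the 10-20 kHz region depends on the sample rate.

```python
# Zero-order-hold (NOS DAC) frequency response droop: |sinc(f / fs)|.
# Oversampling DACs compensate for this digitally; a true NOS design doesn't.
import numpy as np

def zoh_droop_db(f_hz: float, fs_hz: float) -> float:
    """Zero-order-hold attenuation at frequency f for sample rate fs."""
    return 20 * np.log10(np.abs(np.sinc(f_hz / fs_hz)))  # np.sinc = sin(pi x)/(pi x)

for fs in (44_100, 96_000, 192_000):
    print(f"fs = {fs / 1000:5.1f} kHz: "
          f"droop at 10 kHz = {zoh_droop_db(10_000, fs):5.2f} dB, "
          f"at 20 kHz = {zoh_droop_db(20_000, fs):5.2f} dB")
# About -0.8 dB at 10 kHz and -3.2 dB at 20 kHz for 44.1 kHz material;
# far less at 96 or 192 kHz.
```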
 
If you're using a DAC that does NOT oversample, e.g. a NOS DAC, there will be subtle differences between the two formats, particularly in the audible treble region (10-20 kHz).

If it's not pre-upsampled to a rate compatible with the DAC before feeding into it, sure, but that's not a shortcoming in the input, it's a failure in properly converting the input for playback.

The input signal at 44.1kHz is perfectly fine, as long as it's converted properly.
 
I mean, are there any wireless headphones that do support lossless audio even?
The Sony XM4 something or other over-ear AND in-ear support Sony's proprietary LDAC hi-res codec (which is not actually lossless), but sadly only Android devices currently support LDAC. (I have the Sony XM3 something or other over-ears, but I only use them on airplanes, and they, and my AirPods Pro, are by far my cheapest headphones. I have Audeze and Sennheiser headphones ranging from $400-$4K, but I also have a headphone system that takes them to their limits.)
 