Hopefully I answered your question to your satisfaction. 😃
I would be interested in a reference to the specification of the test you have performed.
It's maddening, isn't it - coming from the world that actually makes and mixes music to people who only consume it and are completely dumbfounded by snake oil and things they don't understand. Audiophiles will forever be the suckers of the tech world that companies continually exploit. I've seen them come out with some absolutely insane stuff, and the irony is they think they're clued up and really knowledgeable!
Indeed - and you can hear the difference between any of them!
The only need for ultra high res is during the recording stage. It enables us to run plugins at higher sample rates, which (without writing a scientific essay) prevents certain mathematical errors in the audio - however, you can still do everything at 44.1kHz and upsample within the plugin itself. When it comes to listening back, 44.1kHz already extends well beyond what the human ear can hear, and bit depth doesn't have any influence on sound quality, just dynamic range.
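To make the aliasing point concrete, here's a small sketch (numpy and scipy assumed; the 10kHz tone and tanh drive are arbitrary choices) showing how distortion applied at 44.1kHz folds a harmonic back into the audible band, while oversampling first avoids it:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs                  # one second
x = np.sin(2 * np.pi * 10_000 * t)      # a 10 kHz tone

# Distort at 44.1 kHz: tanh creates a 3rd harmonic at 30 kHz, which is
# above the 22.05 kHz Nyquist limit and folds back to 44.1 - 30 = 14.1 kHz.
naive = np.tanh(3 * x)

# Oversample 4x first (fs becomes 176.4 kHz), distort, then filter back
# down: the 30 kHz harmonic never exceeds the higher Nyquist, and the
# downsampling filter removes it instead of letting it alias.
ovs = resample_poly(np.tanh(3 * resample_poly(x, 4, 1)), 1, 4)

def level_db(sig, f_hz):
    spec = np.abs(np.fft.rfft(sig)) / (len(sig) / 2)
    return 20 * np.log10(spec[f_hz] + 1e-12)   # bins are 1 Hz apart here

print("14.1 kHz alias, no oversampling:", round(level_db(naive, 14_100), 1), "dB")
print("14.1 kHz alias, 4x oversampled: ", round(level_db(ovs, 14_100), 1), "dB")
```

This is exactly why plugins offer internal oversampling even inside a 44.1kHz session.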
In **theory** you could have a ridiculously low noise floor on a classical record at 24-bit, with insane dynamic range, but in reality it wouldn't make much difference. 24-bit is great for recording because you don't need to worry about levels (and indeed some mixers employ floating-point 32-bit and even 64-bit, which basically means it's impossible to overload and distort, or to have a signal too quiet). Back in the analog days you were restricted to far less dynamic range on tape than even 16-bit could offer (and a hugely increased noise floor, which used most of it up).
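For the numbers behind that, the usual rule of thumb is about 6.02 dB of dynamic range per bit (a sketch in plain arithmetic):

```python
# Quantization dynamic range is roughly 6.02 * bits + 1.76 dB:
for bits in (16, 24):
    print(f"{bits}-bit PCM: ~{6.02 * bits + 1.76:.0f} dB of dynamic range")
# -> 16-bit: ~98 dB, 24-bit: ~146 dB. Analog tape managed roughly 60-70 dB.
# 32-bit float carries a ~24-bit mantissa at *any* signal level, which is
# why a floating-point mix bus effectively cannot clip or drown in noise.
```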
I'm not sure what more you're asking beyond what I said, but if you scroll up you'll see a post with descriptions of my system (and the amp I built) and pictures of it, as well as my two best headphones. The ones with the chrome grille and bloodwood ear cups are my best headphones, which again is referenced in that post above. I won't name the software because it's not a legally purchased copy and was given to me by a fellow audio vendor.
I can tell the difference pretty clearly between FLAC and WAV on any half-decent listening source.
The source recording matters a lot though.
A thin, **** recording shows almost no improvement even going from 128kbps MP3 to FLAC.
You could blind A/B me as scientifically as you want, just as I have done to myself when testing different capacitors, tubes, transformers, etc. while building, upgrading, and swapping components many times.
Yes. I'm really not normal but I do exist.
As far as frequency response from 20Hz to 20kHz: a compressed MP3 @ 320kbps vs. a lossless audio file is used, and you are given a 20-band EQ to correct what you perceive to be missing in the compressed MP3 after first hearing a 15-second clip of the MP3 and then the same 15-second clip of the lossless file, played three times in a row (and likewise a 30-second clip, both played three times in a row, and a 60-second clip, both played three times in a row).

On the 15-second clip I scored 98.3% accuracy of 20-band EQ correction for what I found lacking in the MP3; on the 30-second clip I scored 96.8% accuracy; on the 60-second clip I scored 95.4% accuracy. (Audio memory is typically extremely fleeting, where most people can't hear a difference after a short or long amount of time, because lossless audio has a range of 19,980 different frequencies sampled 44,100 times per second.) That is the reason the accuracy is highest on the shortest clip.

The average scores on this particular software and hardware (designed by professional audiologists working with "audiophiles"; I was able to get my hands on it through a fellow audio company, and because you need a USB key to run it and my version was hacked, I can't say much about it) range from an average of -12.8% on the negative side to an average of 7.2% on the positive side. It is mainly purchased by audiophiles, though many audiologists use it as well, and they don't come close to using the type of high-end equipment I'm using. But I've been listening to high-end equipment since 1993 and trained my brain's tone maps over the course of a two-year period to adjust to flat, neutral sound.
Though Apple for some reason claims the APM are not capable of lossless audio using a wired cable, that is somewhat incorrect and misleading. If you take lossless audio to mean absolute bit-perfect reproduction of the original signal, then no, they are not capable of playing lossless audio - because the signal goes through multiple digital-to-analog conversion stages.
If you take lossless to mean not encoded using lossy compression, then they ARE capable of playing lossless audio. Nothing is lossy-compressed when using the APM via the 3.5mm cable.
Lossless alone isn't enough for some of the audiophiles round here though - Hi-Res lossless or bust apparently.
Personally 16 bit, uncompressed CD quality is perfectly adequate for me and was for the longest time considered "lossless" (1:1 CD rip).
As more stuff is re-mastered and newer music is (possibly) optimized for high-res playback, maybe it will make a teeny tiny bit of difference (on the right equipment - *not AirPods*).
I'd take a stab in the dark that most "high-res" streaming music currently available is simply upsampled from the CD rip, probably making it sound worse than the original format it was mastered for - unless labels are really going to submit individual masters for all these various lossless tiers? If it's simply upsampled/downsampled, something's gotta give, I would think.
At the end of the day most people are gonna be listening to YouTube and Spotify on the go anyhow. AirPods/Max are "decent" and convenient low/mid-range priced headphones intended for the mass market, an area where golden-eared audiophiles are always going to be disappointed anyhow.
I worry about the people who claim they can hear a difference between a 16/44 FLAC and a 16/44 WAV.
I worry that their ignorance (EDIT - no, not their ignorance: their steadfast refusal to be educated) makes them gullible easy targets.
FLAC is nothing but a lossless compression algorithm. Like ZIP tailored for audio.
Uncompress the FLAC, shove it in a DAW with the original WAV, invert the phase of one of them and you'll get total silence, because they will null each other to zero, like -1+1. It's literally that level of maths.
Please at least let that sink in, even if you won't buy into the truth that you don't need hi-res audio (and mostly don't even need lossless).
I strongly disagree with your stance on the necessity of hi-res, or lossless audio at any resolution, but you at least do have an understanding of how FLAC works. A null test would totally work but even simpler - generate an MD5 checksum.
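For anyone who'd rather run the two checks than argue about them, here's a minimal sketch; it assumes the third-party soundfile package, and the filenames are placeholders:

```python
import hashlib
import numpy as np
import soundfile as sf   # assumed third-party decoder for both formats

wav, _ = sf.read("master.wav", dtype="int16")    # placeholder filenames
flac, _ = sf.read("master.flac", dtype="int16")

# Null test: subtract sample-for-sample. 0 means total silence, a perfect null.
print("max residual:", np.max(np.abs(wav.astype(np.int32) - flac)))

# MD5 test: hash the *decoded* PCM, not the files (the containers differ).
for name in ("master.wav", "master.flac"):
    pcm = sf.read(name, dtype="int16")[0].tobytes()
    print(name, hashlib.md5(pcm).hexdigest())    # digests come out identical
```

FLAC even stores an MD5 of the unencoded PCM in its STREAMINFO header for exactly this kind of verification.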
The thing with lossy and hi-res, I can prove at a technical level that they are different from a 16/44 file.
So while I do believe that hi-res files are completely unnecessary, and that mostly, 320kbit/sec lossy files are audibly indistinguishable from a lossless file, I can't in all honesty sit here and tell you that everyone who thinks they can hear the difference 100% of the time is kidding themselves, because the technical part of my brain knows the differences exist. Which is why generally I do keep out of those arguments.
But when it drops to the level of pure science fiction, like the audio quality of FLAC vs ALAC vs WAV, I do find myself worrying if we're going to 'make it' as a species.
IMHO the basic issue is the reasoning as to why there should be some audible difference between files.
- Lossy vs. lossless? There are reasons to argue that there might be audible differences. Tests show otherwise at good bitrates for good lossy compression algorithms, but at least the theoretical basis supporting such a hypothesis is reasonable.
- 44.1kHz vs. "hi-res"? For what reason might there be audible differences, excluding artifacts introduced by the ultrasonic components, which should be inaudible by definition?
- FLAC vs. WAV? Again, for what reason might there be audible differences?

The first point has at least some theoretical basis as to why there might be audible differences. I still fail to see any theoretical basis that supports audible differences in the latter points, though.
No downsampling process is transparent. There are different methods, at different levels of precision, and none will output a completely identical result at lower sample rates and bit-depths relative to the input data, as originally recorded. Sure, you can say the difference may be difficult to hear, especially for inexperienced listeners, but you can’t say the difference doesn’t exist.
It’s also worth considering that there are very few “bit-perfect” playback systems. All manner of signal processing is employed on playback, oversampling and filtering are incredibly common at the conversion stage, and starting off with a higher resolution allows the DSP to be carried out at a higher precision. Something as simple as altering the volume is a destructive process, and much less consequential using 24 bit math than 16 bit math.
192kHz digital music files offer no benefits. They're not quite neutral either; practical fidelity is slightly worse. The ultrasonics are a liability during playback.
Neither audio transducers nor power amplifiers are free of distortion, and distortion tends to increase rapidly at the lowest and highest frequencies. If the same transducer reproduces ultrasonics along with audible content, any nonlinearity will shift some of the ultrasonic content down into the audible range as an uncontrolled spray of intermodulation distortion products covering the entire audible spectrum. Nonlinearity in a power amplifier will produce the same effect. The effect is very slight, but listening tests have confirmed that both effects can be audible.
The resulting signal will not be identical, but that still does not provide a theoretical reason why the different signal should have audible differences compared to the original.
As an example, downsampling from e.g. 96kHz to 44.1kHz only affects the ultrasonic components, and the ultrasonic components of the signal are by definition inaudible. Technically the signal will be different, but the components in hearing range are actually identical to the original, so there is no theoretical reason supporting being able to discern any difference.
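A quick numerical sanity check of that claim (numpy/scipy assumed; tones chosen arbitrarily - one audible, one ultrasonic):

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi, fs_lo = 96_000, 44_100
t = np.arange(fs_hi) / fs_hi                                 # one second at 96 kHz
x = np.sin(2*np.pi*1_000*t) + 0.5*np.sin(2*np.pi*30_000*t)   # audible + ultrasonic

y = resample_poly(x, 147, 320)                               # 96k * 147/320 = 44.1k

def tone_amp(sig, fs, f_hz):
    spec = np.abs(np.fft.rfft(sig)) / (len(sig) / 2)
    return spec[round(f_hz * len(sig) / fs)]

print("1 kHz before:", tone_amp(x, fs_hi, 1_000))   # ~1.0
print("1 kHz after: ", tone_amp(y, fs_lo, 1_000))   # ~1.0 - untouched
# The 30 kHz component simply no longer exists: there is no bin for it
# below the new 22.05 kHz Nyquist limit.
```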
That's also true, but having "higher resolution" audio is not always helpful and can actually be problematic in itself, as the quoted passage above explains.
The article provides sample files to test an audio system for such distortions.
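If you can't get hold of the article's files, a roughly equivalent test signal is easy to generate. A sketch, assuming the soundfile package; the two frequencies are my own picks:

```python
import numpy as np
import soundfile as sf   # assumed third-party package for writing the file

fs = 96_000                 # a hi-res rate is needed just to represent the tones
t = np.arange(10 * fs) / fs  # ten seconds
f1, f2 = 26_000, 30_000      # both above 20 kHz, inaudible by definition
x = 0.25 * np.sin(2*np.pi*f1*t) + 0.25 * np.sin(2*np.pi*f2*t)

sf.write("ultrasonic_imd_test.wav", x.astype(np.float32), fs, subtype="FLOAT")
# A linear playback chain reproduces this as silence. If you hear a 4 kHz
# tone (f2 - f1), your system is folding ultrasonics down into the audible
# band - the intermodulation problem described above.
```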
Simple test - take a 16-bit file, lower the volume by 9dB, then raise it back up by 9dB - back to the original volume, right? But listen to how close it actually sounds to the original file. Now do the same with a 24-bit file.
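Here's the same test expressed in numbers rather than ears (a sketch; the -9dB figure follows the post above, and the noise signal is a stand-in for a real file):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, 48_000)      # stand-in for "the original file"

def volume_round_trip(sig, bits):
    scale = 2 ** (bits - 1)
    quiet = np.round(sig * 10**(-9/20) * scale) / scale  # -9 dB, stored at `bits`
    return quiet * 10**(9/20)                            # +9 dB back up

for bits in (16, 24):
    err = volume_round_trip(x, bits) - x
    print(f"{bits}-bit residual: {20*np.log10(np.sqrt(np.mean(err**2))):.0f} dBFS")
# Roughly -92 dBFS of error at 16 bit vs roughly -140 dBFS at 24 bit:
# the same edit, a ~48 dB quieter scar.
```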
Bottom line: if there is some sort of amplifier circuit design out there that is so negatively impacted by ultrasonic content as to make even a measurable difference, let alone an audible difference… it might be time to put that amp out on the curb. If you are driving an ultra-high-efficiency, low-wattage system with horns or something, you may have no headroom for a lot of things, ultrasonics included. But that is a very deliberate design choice.
The bit depth determines the dynamic range which the signal can represent. Increasing the volume does not affect the dynamic range of the signal. I'm of course talking about end-user playback, not sound engineering during production.
To be clear, during sound engineering higher bit depths or sample rates are not only useful but often outright necessary. This has no bearing on what is useful or necessary for end-user playback.
As I mentioned previously, there are all sorts of DSP processes occurring on most playback systems. So whether you are working in a professional DAW with 100 plug-ins running, or you are simply listening on your iPhone with the EQ enabled, there is still a benefit to using higher-precision math in the processing.
99% of DACs today employ Delta-Sigma technology, which is garbage. That's why they can handle all these high-res formats like 24/192, DSD, DXD, etc. - and they need them just to sound as good as 16/44.1 does on older and better R-2R and Sign-Magnitude DACs (or the ones built today at extremely high prices).
44.1kHz vs "hi-res"? For what reason there might be audible differences, excluding artifacts introduced by the supersonic components, which should be inaudible by definition instead?
If you're using a DAC that does NOT oversample e.g. a NOS DAC, there will be subtle differences between the two formats particularly in the audible treble region (10-20 KHz)
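That difference has a mechanism you can actually put a number on: a non-oversampling DAC's zero-order hold rolls off as sinc(f/fs), so the top octave droops at 44.1kHz in a way it doesn't at higher rates. Quick arithmetic, numpy assumed:

```python
import numpy as np

def zoh_droop_db(f_hz, fs):
    # Zero-order-hold frequency response; np.sinc(x) = sin(pi*x)/(pi*x)
    return 20 * np.log10(np.abs(np.sinc(f_hz / fs)))

for fs in (44_100, 96_000, 192_000):
    print(f"{fs} Hz: {zoh_droop_db(20_000, fs):.2f} dB droop at 20 kHz")
# 44100 Hz: -3.17 dB, 96000 Hz: -0.63 dB, 192000 Hz: -0.16 dB - a real,
# measurable treble difference that only a NOS DAC exposes.
```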
I mean, are there any wireless headphones that even support lossless audio?

The Sony XM4 something-or-other over-ears AND in-ears support Sony's proprietary LDAC hi-res codec (still lossy, just at a much higher bitrate than standard Bluetooth codecs), but sadly only Android devices currently support LDAC. (I have the Sony XM3 something-or-other over-ears, but I only use them on airplanes, and they, along with my AirPods Pro, are by far my cheapest headphones. I have Audeze and Sennheiser headphones ranging from $400-$4K, but I also have a headphone system that takes them to their limits.)