Hmm, OK, maybe you know more about the maths side than me then. But I studied computing at undergraduate level and music to an advanced level, and I thought Nyquist-Shannon proved you could perfectly reconstruct the original waveform from discrete samples, given a sufficient sample rate. Admittedly this was in a networking context, so maybe I'm missing some detail?
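To make what I mean concrete, here's a toy sketch (my own, with made-up frequencies, nothing from the thread): sample a band-limited signal above its Nyquist rate, then rebuild it between the samples with Whittaker-Shannon sinc interpolation, which is the reconstruction the sampling theorem is about.

```python
import numpy as np

# Sample a 3 Hz sine at 16 Hz -- comfortably above the 6 Hz Nyquist rate.
fs = 16.0             # sample rate, Hz (assumed for the demo)
f = 3.0               # signal frequency, Hz, below fs/2
n = np.arange(64)     # sample indices (finite window, so edges are imperfect)
samples = np.sin(2 * np.pi * f * n / fs)

# Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n).
# Evaluate away from the window edges to keep truncation effects small.
t = np.linspace(0.5, 3.0, 500)
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
truth = np.sin(2 * np.pi * f * t)

err = np.max(np.abs(recon - truth))
print(err)  # small; limited only by the finite number of samples
```

With an infinite sample train the error would be exactly zero; the residual here is purely the truncation of the sinc tails, which is the sense in which the theorem says the samples contain the whole waveform.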
“Also, it is not a purely mathematical algorithm. They use psychoacoustical modelling of human hearing to remove nuances that people are very bad at hearing”.
Yeah, I agree, hence why I mentioned information being removed by the codecs, and some of what's removed isn't just hard to hear, it's impossible to hear. But my larger point is that it's ultimately futile anyway, because you first have to define what's "true". IF you define that as the master, then 256 kbps AAC at 44.1 kHz is good enough that hardly anyone can reliably and repeatedly tell the difference. BUT even if you could tell the difference, are you really hearing what the musicians actually sounded like? Almost certainly not, because there's a bunch of other stuff between you and the musicians that you can't take out of the equation, from the microphones to the decisions of the recording engineers.
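Just to put a number on how much the codec discards: a quick back-of-envelope (using the standard CD-quality PCM figures, my arithmetic, not anything from the thread) shows the compression ratio involved.

```python
# CD-quality PCM: 44,100 samples/s * 16 bits/sample * 2 channels.
cd_bitrate = 44_100 * 16 * 2   # = 1,411,200 bits/s
aac_bitrate = 256_000          # 256 kbps AAC stream

ratio = cd_bitrate / aac_bitrate
print(round(ratio, 1))  # -> 5.5
```

So the encoder is representing the audio in roughly a fifth of the bits, and the psychoacoustic model is what decides which ~80% of the raw data we're unlikely to miss.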