Dude, if someone "upscales" AAC files to some sort of lossless codec, they're going to be degrading the sound quality of that AAC so badly that I'm not surprised you can tell the difference.
It is much easier to tell the difference between a non-transcoded lossless file and a very poorly transcoded lossy file. Double-blind listening tests have repeatedly shown that the human ear cannot differentiate between a file at iTunes Plus quality (actually, even lower than iTunes Plus quality) and a totally uncompressed WAV file, all other things being equal, including mastering and playback equipment.
Huh? I don't think you understood my post. I've never done such conversions. Why would I?
My point is that in order to prove it to someone else (online), they may prefer to make the files look identical so I can't cheat by simply looking at the file size and metadata. I thought that was pretty clear. Further, I believe that upsampling by a whole-integer factor (e.g., 44.1 kHz to 88.2 kHz) won't produce detectable adverse effects on sound quality, because the timing of each original sample is preserved. Fractional resampling would technically change the sample timing, though since we're talking about upsampling rather than downsampling, it's not clear how much of that, if any, would be detectable. Certainly a fractional downsample (or any downsample, for that matter) is about the worst thing you can do to an audio file.
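To make the integer-versus-fractional point concrete, here's a minimal sketch in Python (assuming the scipy and soundfile packages are installed; the file names are hypothetical):

import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("track_44k.wav")      # hypothetical 44.1 kHz source

# Whole-integer factor: up=2, down=1 doubles the rate (44.1 -> 88.2 kHz),
# so every original sample instant still falls exactly on the output grid.
upsampled = resample_poly(data, up=2, down=1)
sf.write("track_88k.wav", upsampled, rate * 2)

# A fractional ratio such as 48/44.1 (up=160, down=147) would put nearly
# every output sample between original sample instants -- the timing
# change described above.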
Even my high-resolution-skeptic brother had to admit the difference was obvious, and I could clearly distinguish the two even through the MacBook Pro's speakers (set at the higher resolution). This is a cursory finding involving only one album so far, and there may be other variables affecting the result that I'm not aware of, but I feel confident enough in my claim to post it despite knowing how controversial the topic is. I'm going to test the other hi-res albums I own soon to see if I feel similarly. Should I create a 256 kbps down-sampled AAC from the high-res source and test that as well?
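If I do, a minimal way to generate that test file might be something like this (a sketch assuming ffmpeg is on the PATH; the file names are hypothetical):

import subprocess

# Resample the hi-res source to CD rate and encode 256 kbps AAC with
# ffmpeg's built-in encoder, all in one invocation.
subprocess.run(
    ["ffmpeg", "-i", "album_8824.flac",  # hypothetical 88.2 kHz / 24-bit source
     "-ar", "44100",                     # resample to the CD sample rate
     "-c:a", "aac", "-b:a", "256k",      # AAC at 256 kbps
     "album_256.m4a"],
    check=True,
)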
Of course, at 88/24 (88.2 kHz/24-bit) the data rate is approximately 10 times that of 256 kbps AAC (after accounting for lossless compression), so I'd be disappointed and surprised if there wasn't a difference.
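The rough arithmetic behind that ~10x figure (the 60% lossless packing ratio is my assumption):

raw_bps = 88_200 * 24 * 2       # 88.2 kHz x 24 bits x 2 channels = 4,233,600 b/s
lossless_bps = raw_bps * 0.6    # assume lossless compression to ~60% of raw
aac_bps = 256_000               # 256 kbps AAC

print(lossless_bps / aac_bps)   # ~9.9, i.e. roughly 10x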
I haven't yet done listening tests comparing standard lossless CD to high-res, but in the past, when I've compared iTunes AAC to lossless CD, I thought they were very close (possibly even indistinguishable). I suspect the gap between 256 kbps AAC and standard CD is much smaller than the improvement I'm hearing in this particular high-res album.