Yes, yes, and YES. 256 AAC for a stereo recording is plenty for amazing transparency. There are so many things in the signal chain that matter more at that point. The recording quality/mastering (a BIG one), the speakers, the noise floor of the room, the signal-to-noise ratio of the playback chain, and especially the shape/acoustics of the room will all become much bigger obstacles to the sound than the playback data rate and format once you hit the quality ceiling of 256 AAC.
No offense, but you are using the right words for the wrong concepts.
The room where you record has no "noise floor". That concept relates to the signal chain (equipment connected in series or in parallel) of microphones, preamps, processors and AD converters. It's the small hiss you get when the mics pick up silence, and it's the product of the inherent electrical noise of that signal chain. Generally you can get away with a noise floor that lies below -42 dB, which means no one will hear it unless they turn the volume way up and have a really good amplifier/speaker combo.
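If you want to put a number on that, here's a minimal sketch of my own (not from any particular tool) that measures a chain's noise floor in dB relative to full scale. The file name is hypothetical; record a few seconds of "silence" through your mics and converter first:

```python
# Minimal sketch: estimate the noise floor of a recording chain in dBFS.
# "silence.wav" is a hypothetical capture of the chain with no signal.
import numpy as np
import soundfile as sf  # pip install soundfile

data, rate = sf.read("silence.wav")
if data.ndim > 1:
    data = data.mean(axis=1)          # fold stereo down to mono

rms = np.sqrt(np.mean(data ** 2))
noise_floor_db = 20 * np.log10(rms)  # dB relative to digital full scale
print(f"Noise floor: {noise_floor_db:.1f} dBFS")
```

Anything that prints well below -42 dB or so is hiss you'd never hear at normal listening levels.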
The shape of a room is not an obstacle; it's what you actually want to capture along with the character of the instrument or a singer's voice. If your room has bad acoustics, the instrument is crappy or the singer has no talent, you'll get bad recordings that aren't worth deploying to any medium. You can compensate for those factors with hardware processors or SW plugins to a certain extent, though.
Playback data rate relates to the AD/DA converter sampling rates. Most of the time, for recording music, a sampling rate of 48 kHz @ 24-bit will suffice. That means you'll capture frequencies of up to 24 kHz (above your hearing threshold) with a theoretical dynamic range of about 144 dB.
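Those figures come from two standard rules of thumb, nothing specific to my gear: the Nyquist limit (you capture up to half the sampling rate) and roughly 6 dB of dynamic range per bit. A quick sanity check:

```python
# Two rules of thumb behind the numbers above:
def nyquist_khz(sample_rate_hz):
    return sample_rate_hz / 2 / 1000   # highest capturable frequency

def dynamic_range_db(bit_depth):
    return 6.02 * bit_depth            # ~6 dB per bit of resolution

print(nyquist_khz(48_000))        # 24.0 kHz, above the hearing threshold
print(dynamic_range_db(24))       # ~144 dB
print(dynamic_range_db(16))       # ~96 dB, the CD range
```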
You then bounce your audio down to CD quality, 44.1 kHz @ 16-bit. You lose frequencies you can't hear anyway, plus some low-level detail, and that's with no lossy compression applied to the file. Going from a 144 dB dynamic range down to 96 dB without sound artifacts takes a process called dithering: a tiny amount of noise is added before the bit depth is reduced, so the rounding error comes out as harmless hiss instead of distortion.
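In case the dithering part sounds abstract, here's a minimal sketch of the idea (assuming float samples in the -1..1 range; real mastering tools use fancier noise shaping on top of this):

```python
import numpy as np

def dither_to_16bit(samples):
    """Truncate float samples to 16-bit PCM with simple TPDF dither."""
    lsb = 1.0 / 32768                  # one 16-bit quantization step
    tpdf = (np.random.uniform(-0.5, 0.5, samples.shape)
            + np.random.uniform(-0.5, 0.5, samples.shape)) * lsb
    noisy = samples + tpdf             # noise decorrelates the rounding error
    return np.clip(np.round(noisy * 32767), -32768, 32767).astype(np.int16)
```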
Why am I rambling on about all of this: basically, frequencies above 7 or 8 kHz are where the brightness or sheen of instruments like cymbals, voices, some percussion and strings resides. A room's acoustic "air" also lives above those frequencies. When you compress to a lossy format, you regularly lose a lot of the frequencies above 12 kHz (and you already lost some low-level detail!), so you actually are missing a little of the acoustic environment and sheen when you compress to MP4 or AAC. It's basically some sort of 3D characteristic of the stereo sound, the actual breath of a singer's voice... small details that give more life to a recording. And it happens at the low end of the spectrum too, regularly below 70 Hz.

If you take a raw 44.1 kHz @ 16-bit WAV file and compare it to any Apple Music file, listening to them on AirPods, you'll hear the difference. Not huge, but you'll definitely pick something up, and it will be in favor of the WAV file.
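You don't have to take my ears' word for it; you can eyeball the loss. A rough sketch of my own (file names hypothetical; decode the AAC back to WAV first, e.g. with ffmpeg) that compares how much of the signal's power sits above 12 kHz in each version:

```python
# Rough sketch: fraction of signal power above 12 kHz, original WAV vs.
# a lossy version decoded back to WAV. Expect the second number lower.
import numpy as np
import soundfile as sf
from scipy.signal import welch  # pip install scipy

def power_above(path, cutoff_hz=12_000):
    data, rate = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)
    freqs, psd = welch(data, fs=rate, nperseg=4096)
    return psd[freqs >= cutoff_hz].sum() / psd.sum()

print("original WAV :", power_above("master.wav"))
print("decoded AAC  :", power_above("master_from_aac.wav"))
```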
As for those 256 kbps you are referring to: that figure means how much data you stream per second, and it's not directly related to which frequencies you are including or excluding. That depends on the compression parameters you use with your encoding software.
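For example, with ffmpeg's native AAC encoder (assuming you have ffmpeg installed; the file names here are placeholders), the bitrate and the lowpass cutoff are separate knobs:

```python
# Sketch: bitrate (-b:a) and frequency cutoff (-cutoff) are independent
# encoder parameters; 256 kbps doesn't by itself dictate a lowpass.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "master.wav",   # placeholder input
    "-c:a", "aac",
    "-b:a", "256k",                 # data rate: 256 kbps
    "-cutoff", "18000",             # keep content up to ~18 kHz
    "out.m4a",
], check=True)
```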
In the end it's about how much frequency content and dynamic range (and their balance to one another) you preserve from the original performance. That's a subject for mixing and mastering.
Sorry for the long post, but it's actually my hobby (and I love it!)