Why would AIFF be more taxing on the CPU?
Feel absolutely free to correct me if I'm wrong (and I get the sense that might be sport for you). Unlike the video, you are not using a proxy file for the audio; you are dealing with it directly. A CD-quality stereo file, which is virtually the same thing as AIFF, is more than 11 times the size of a 128 kbps MP3. As you scrub, stream, render, or play back, there are more bits to process. Larger file, more bits. More bits, more CPU cycles and more VM paging. I assumed that was well understood and basic enough not to need an explanation, but there it is for those who just couldn't do that math.
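For anyone who wants to check that arithmetic, here it is in plain Python (a quick sketch; Red Book CD figures assumed: 44.1 kHz, 16-bit, stereo):

```python
# Back-of-the-envelope: CD-quality stereo PCM vs. a 128 kbps MP3.
sample_rate_hz = 44_100   # Red Book CD sample rate
bit_depth = 16            # bits per sample
channels = 2              # stereo

cd_kbps = sample_rate_hz * bit_depth * channels / 1000   # 1411.2 kbps
mp3_kbps = 128

print(f"CD/AIFF data rate : {cd_kbps:.1f} kbps")
print(f"Ratio vs 128 kbps : {cd_kbps / mp3_kbps:.1f}x")   # roughly 11x the bits per second
```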
Would that really be significant? Probably not in most situations. It likely depends on the platform, and that might need to be considered if you are doing this on a less-beefy machine. Our job as respondents is to overthink it so that the OP has all the facts; then he can decide whether it matters to him. That probably doesn't warrant an explanation either, but then I am not the one who raised the question.
On the other hand, possibly all of that would be offset by the decoding needed for the MP3. Since a $49 consumer device can decode MP3s while operating a GUI, maintaining a database, and converting those MP3s to analog all day long without breaking a sweat, it seems that task may not be all that CPU-intensive. My best guess is that AIFF would take more CPU, but maybe it would actually take less, and maybe that is why you felt the need to cryptically embolden the word "more". I just don't know; only you have that particular answer.
If you have a theory, you can place it up against mine. But if you have an actual answer supported by facts, maybe you can actually teach us something we didn't know. That's cool; I'm all about learning and not so much about always having to be the smartest guy in the room. That's what I come here for. It would be nice to finally meet someone who actually has all the answers, so I can hardly wait.
Why? He's editing in iMovie, it'll need to be recompressed on output, so he'll see no benefit of the smaller file size in the finished piece. And when it's accompanying 40Mb/s video, the difference in file size between uncompressed and MP3 would be around 5%.
Yes. 5% LARGER, like I said. Whether that is significant is up to the user. And he is not necessarily editing files at 40 Mb/s; scores of professional organizations, including the one I work for, still use DV25 or IMX30 on occasion. Since the OP is a self-described "complete noob at video work," I think we can assume he is not always going to be compelled to use pro-level bit rates on his consumer camera and recorder, and that's just fine, but it means the audio could then be a great deal more than 5% of the file size. And that's for two tracks; he may want to use more.
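To put rough numbers on that (a sketch only; 48 kHz/16-bit stereo PCM assumed for the uncompressed side, 128 kbps for the MP3, and the exact percentages will shift with the container and sample rate):

```python
# Share of the combined stream taken up by the audio at a few video bit rates.
pcm_mbps = 48_000 * 16 * 2 / 1_000_000   # 48 kHz / 16-bit / stereo PCM = 1.536 Mb/s
mp3_mbps = 0.128

for label, video_mbps in [("40 Mb/s video", 40.0), ("IMX30", 30.0), ("DV25", 25.0)]:
    total = video_mbps + pcm_mbps
    print(f"{label}: PCM is {pcm_mbps / total * 100:.1f}% of the stream, "
          f"an MP3 would be {mp3_mbps / (video_mbps + mp3_mbps) * 100:.1f}%")
    # additional stereo pairs multiply the PCM share accordingly
```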
So much for whose is bigger. Turning to the recompression issue:
When you chain compression algorithms together, which is what happens when iMovie recompresses MP3 audio (and I assume it does, since you claim it recompresses the file), the rounding errors from the first compression stage increase exponentially in the second. The best way to prevent digital generational loss is to start with as pristine an original as you can muster, so between those two, AIFF would be the choice over MP3 if you want the best result for the audio.
It may appear that this does not matter, since the audio will be converted to a lower quality eventually anyway, but it matters greatly. This is not like boosting the treble above 5 kHz on a song that will only ever be played on a 5 kHz analog AM radio, which really would be useless. In the digital world, things work very differently.
What you start with has a major effect on the resultant product. That fact is in no way obviated by the requirement to compress to a lower rate eventually; it is reinforced by it, because to do that you must compress a second time, and that second pass is the critical difference here. This is one of the first and most basic rules of digital compression that those of us who get paid to do this for a living (handsomely, by top-shelf major media companies that do this as their bread and butter) have to learn and understand at a basic physics level (but you can just take my word for it): chaining dissimilar coding algorithms results in exponential increases in rounding errors, and those manifest as exponentially increased unwanted artifacts. Put simply, it's a very bad idea, and at lower bit rates professional compressionists avoid it like your sister.
For example, if you create a 192 kbps MP3 from a CD and later convert that MP3 to a 192 kbps AAC, the result will be far inferior to a 192 kbps AAC created directly from the CD-quality original. The reason is the same: one stage of heavy compression may be acceptable, but a second stage increases the rounding errors from the first stage exponentially, and rounding errors manifest as audio artifacts that reduce fidelity. The final encoding step is workflow-identical in both cases; the end result absolutely is not.
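If anyone wants to hear it for themselves, here is a rough sketch of those two paths using the ffmpeg command line driven from Python (filenames are hypothetical; assumes an ffmpeg build with libmp3lame and the native AAC encoder on the PATH):

```python
# Compare one lossy stage against two chained lossy stages at the same final bit rate.
import subprocess

SRC = "cd_rip.wav"  # hypothetical CD-quality source (16-bit/44.1 kHz PCM)

def run(*args):
    subprocess.run(["ffmpeg", "-y", *args], check=True)

# Path A: CD source -> 192 kbps AAC directly (one lossy stage)
run("-i", SRC, "-c:a", "aac", "-b:a", "192k", "direct.m4a")

# Path B: CD source -> 192 kbps MP3 -> 192 kbps AAC (two chained lossy stages)
run("-i", SRC, "-c:a", "libmp3lame", "-b:a", "192k", "intermediate.mp3")
run("-i", "intermediate.mp3", "-c:a", "aac", "-b:a", "192k", "via_mp3.m4a")

# Both end products are "192 kbps AAC"; listen to (or null-test)
# direct.m4a against via_mp3.m4a to compare the two workflows.
```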
If there are no rounding errors in the original, then there is no exponential increase. The more rounding errors the original carries from compression, the worse the second-stage conversion will sound, and not proportionally worse but exponentially worse as the original quality drops.
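Here is a toy illustration of that compounding, nothing more: it quantizes a sine wave once versus twice, with two dissimilar step sizes standing in for two dissimilar codecs, and makes no attempt to model a perceptual encoder:

```python
# Toy illustration only (not a perceptual codec): rounding a signal once vs. twice.
# A second rounding stage layers its own error on top of what the first left behind.
import math

def quantize(x, step):
    return round(x / step) * step

signal = [math.sin(2 * math.pi * 440 * n / 44_100) for n in range(44_100)]
step_a, step_b = 1 / 311, 1 / 250   # two dissimilar "codecs" = two dissimilar step sizes

def rms_error(processed):
    return math.sqrt(sum((p - s) ** 2 for p, s in zip(processed, signal)) / len(signal))

once  = [quantize(s, step_b) for s in signal]                     # single lossy stage
twice = [quantize(quantize(s, step_a), step_b) for s in signal]   # chained lossy stages

print(f"RMS error, one stage : {rms_error(once):.6f}")
print(f"RMS error, two stages: {rms_error(twice):.6f}")   # higher than one stage
```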
And this is exactly why iTunes is going directly to the artist for high-quality masters to encode its 256 kbps iTunes Store versions. If they can get a 24-bit/192 kHz master, or an analog master, that is much higher in quality than a 16-bit/44.1 kHz CD, the resulting 256 kbps AAC made from that pristine master will sound noticeably better than one made from the CD, even though the final process is workflow-identical and yields identical compression.
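A sketch of that comparison, again with ffmpeg driven from Python (hypothetical filenames; both paths end in the identical 256 kbps AAC encode, only the source quality differs):

```python
# Encode from the high-resolution master vs. from a CD-spec reduction of it.
import subprocess

MASTER = "master_24bit_192k.wav"   # hypothetical 24-bit/192 kHz studio master

def run(*args):
    subprocess.run(["ffmpeg", "-y", *args], check=True)

# Path 1: 256 kbps AAC straight from the high-resolution master
run("-i", MASTER, "-c:a", "aac", "-b:a", "256k", "from_master.m4a")

# Path 2: reduce to CD spec first (16-bit/44.1 kHz), then the same 256 kbps AAC encode
run("-i", MASTER, "-ar", "44100", "-c:a", "pcm_s16le", "cd_version.wav")
run("-i", "cd_version.wav", "-c:a", "aac", "-b:a", "256k", "from_cd.m4a")
```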