WAV or MP3 audio for iMovie?

Discussion in 'Digital Video' started by rusty2192, Jul 9, 2012.

  1. rusty2192, Jul 9, 2012
    Last edited: Jul 9, 2012

    rusty2192 macrumors 6502a

    Joined:
    Oct 15, 2008
    Location:
    Kentucky
    #1
    I just got a new Tascam DR-05 audio recorder to go along with my Canon T2i for video. I will be using iMovie (either existing '09 or upgrade to '11) for my basic editing and post processing.

    The recorder is capable of recording in both WAV (16 and 24 bit) and MP3 formats, both at several rates. From what I understand, WAV files are not compatible with iMovie and would need to be converted before importing. So would it be better to go with the higher quality WAV format from the recorder and then convert at the computer to an acceptable format? Or would I be better off just going with MP3 straight from the source to cut out that extra conversion?

    As you can probably guess, I am a complete noob at video work. I usually hang out next door in the Digital Photography forum but I will be giving video a try this weekend.
     
  2. aarond12 macrumors 65816

    Joined:
    May 20, 2002
    Location:
    Dallas, TX USA
    #2
    You can convert your WAV files to AIFF. (Both are lossless formats.) iMovie works with AIFF just fine.
     
  3. rusty2192 thread starter macrumors 6502a

    Joined:
    Oct 15, 2008
    Location:
    Kentucky
    #3
    Thanks for the advice.

    I got the recorder in the mail today and tried it out right away. iMovie 09 actually had no problem importing the WAV file directly.
     
  4. TyroneShoes2 macrumors regular

    Joined:
    Aug 17, 2011
    #4
    WAV converted to AIFF will have the benefit of higher quality than MP3, but it will also have the disadvantage of a larger file size. Your movies would be bigger with AIFF, and you would need more CPU power to run the program and render transitions. If you have the CPU power and the available storage, AIFF might be the way to go. Otherwise, MP3, if you can stand the loss in quality.

    So it's not a Hobson's choice; you have real options. I would experiment with high bit-rate MP3 to see if that will be good enough. If so, use that. Spoken-word stereo in MP3 can be as small as 64 kbps before quality loss is really noticeable, but I would start at 256 or higher, and move up or down from there, depending on how well that works out for you.
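    If you want to put actual numbers on those sizes, the arithmetic is simple enough to sketch in a few lines of Python (decimal megabytes; container overhead ignored, rates just illustrative):

```python
# Back-of-the-envelope audio size per minute at a few rates.
rates_kbps = {
    "MP3, 64 kbps": 64,
    "MP3, 256 kbps": 256,
    "PCM, 16-bit/44.1 kHz stereo": 44100 * 16 * 2 / 1000,  # 1411.2 kbps
}
for name, kbps in rates_kbps.items():
    # kbps -> bits/min -> bytes/min -> decimal MB/min
    mb_per_min = kbps * 1000 * 60 / 8 / 1e6
    print(f"{name}: {mb_per_min:.2f} MB per minute")
```

    At 256 kbps you're looking at roughly 2 MB per minute, versus about 10.6 MB per minute for uncompressed CD-quality stereo.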

    Incidentally, WAV to AIFF is not a re-encode, just a rewrap, so there should be no cross-conversion loss at all. They are also both uncompressed formats, and therefore "lossless" by definition (strictly, "lossless" is a misnomer when applied to uncompressed formats, since compression is exactly what makes cross-conversions potentially lossy in the first place).
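    If you want to convince yourself that WAV really is just a thin wrapper around raw PCM, here's a small sketch using only Python's stdlib wave module (AIFF holds the same samples, just big-endian, which is why the rewrap stays bit-for-bit lossless):

```python
import io
import wave

# WAV stores the sample bytes verbatim: what you write is exactly
# what you read back.
samples = bytes(range(256)) * 4  # 1024 bytes = 256 stereo 16-bit frames

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(2)      # stereo
    w.setsampwidth(2)      # 16-bit
    w.setframerate(44100)  # 44.1 kHz
    w.writeframes(samples)

buf.seek(0)
with wave.open(buf, "rb") as r:
    assert r.readframes(r.getnframes()) == samples
print("PCM round-trip is bit-exact")
```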
     
  5. KeithPratt macrumors 6502a

    Joined:
    Mar 6, 2007
    #5
    Why would AIFF be more taxing on the CPU?

    Why? He's editing in iMovie, so the audio will need to be recompressed on output, and he'll see no benefit from the smaller file size in the finished piece. And when it's accompanying 40 Mb/s video, the difference in file size between uncompressed audio and MP3 would be around 5%.
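    The back-of-the-envelope, for anyone who wants to check it (rates are illustrative, and the exact share depends on what you pick):

```python
# Share of a combined A/V stream taken up by the audio track.
video_mbps = 40.0
pcm_mbps = 44100 * 16 * 2 / 1e6   # uncompressed 16-bit stereo, ~1.41 Mb/s
mp3_mbps = 0.256                  # 256 kbps MP3

print(f"uncompressed: {pcm_mbps / (video_mbps + pcm_mbps):.1%} of the file")
print(f"256 kbps MP3: {mp3_mbps / (video_mbps + mp3_mbps):.1%} of the file")
```

    Either way, next to the video the audio is a rounding error.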
     
  6. TyroneShoes2, Jul 10, 2012
    Last edited: Jul 10, 2012

    TyroneShoes2 macrumors regular

    Joined:
    Aug 17, 2011
    #6
    Feel absolutely free to correct me if I'm wrong (and I get a sense that might be sport for you). Unlike the video, you are not using a proxy file for the audio; you are dealing with it directly. A CD-quality stereo file, which is virtually the same as AAIF, is more than 11 times the size of a 128 kbps MP3. As you scrub, stream, render, or play back, there are more bits to process. Larger file, more bits. More bits, more CPU cycles, and more VM paging. I assumed that was well understood and basic enough not to need an explanation, but there it is for those who just couldn't do that math.

    Would that really be significant? Probably not for most situations. It likely depends on the platform you are using, and that is something that might be needed to be considered if you are using a less-beefy platform to do this. Our job as respondents is to overthink it so that the OP has all the facts. Then he can decide if that matters to him. That also probably really doesn't warrant an explanation, but then I am not the one who raised the question.

    On the other hand, possibly all of that would be offset by the decoding needed for the MP3. Since a $49 consumer device can decode MP3s as well as operate a GUI, maintain a database, and convert those MP3s to analog all day long without breaking a sweat, it seems like maybe that task is not really all that CPU-intensive. My best guess is that it would take more, but maybe it would actually take less CPU, and maybe that is why you felt the need to cryptically embolden the word "more". I just don't know; only you have that particular answer.

    If you have a theory, you can place it up against mine. But if you have an actual answer supported by facts, maybe you can actually teach us something we didn't know. That's cool; I'm all about learning and not so much about always having to be the smartest guy in the room. That's what I come here for. It would be nice to finally meet someone who actually has all the answers, so I can hardly wait.

    Yes. 5% LARGER, like I said. Whether that is significant would be up to the user. And he is not necessarily editing files at 40 Mb/s; scores of professional organizations, including the one I work for, still use DV25 or IMX30 on occasion. Since the OP is a self-described "complete noob at video work" I think we can assume he is not always going to be compelled to use pro-level bit rates on his consumer camera and recorder, and that's just fine, but it means that the audio could then be a great deal more than 5% of the file size. And that's for 2 tracks; he may want to use more.

    So much for whose is bigger. Turning to the recompression issue:

    When you chain compression algorithms together, which is what you would be doing when iMovie recompresses MP3 audio (which I assume it does since you claim that it recompresses the file), the rounding errors from the first compression algorithm increase exponentially during that process. The best way to prevent digital generational loss is to start with as pristine of an original as you can muster, so AIFF would be the choice over MP3 if you want the best result for the audio among those two.

    It may appear that this does not matter since it will be converted to a lower quality eventually anyway, but it matters greatly. This is not the same as increasing the treble above 5 kHz making no difference to a song only ever played on a 5 kHz analog AM radio (which would be completely useless). In the digital world, things operate very differently.

    What you start with has a major effect on the resulting product. That fact is in no way obviated by the need to eventually compress to a lower rate; it is reinforced by it, because to get there you must compress a second time, and that is the critical difference here. This is one of the first and most basic rules of digital compression that those of us who get paid to do this for a living (handsomely, by top-shelf major media companies who do this as their bread and butter) have to learn and understand at its basic physics level (but you can just take my word for it): chaining dissimilar coding algorithms results in exponential increases in rounding errors, and that manifests as exponentially increased unwanted artifacts. Put simply, it's a very bad idea, and at lower bit rates professional compressionists avoid that like your sister.

    For example, if you create an MP3 at 192 kbps from a CD and then later convert that to a 192 kbps AAC, the results will be far inferior to a 192 kbps AAC that was created from original CD quality. And the reason is the same; severe compression may be acceptable, but a second stage of severe compression increases the rounding errors from the first stage exponentially, and rounding errors manifest as audio artifacts, reducing fidelity, in spite of the fact that the end process is workflow-identical. The end result will absolutely not be.

    If there are no rounding errors in the original, then there is no exponential increase. The greater the number of rounding errors in the original due to compression, the worse the second-stage conversion will sound, and not proportionally worse but exponentially worse for a lower original quality.

    And this is exactly why iTunes is going directly to the artist to get high-quality masters to encode their 256 kbps iTunes Store versions. If they can get a 24-bit/192 kHz master, or an analog master, that is much higher in quality than a 16-bit/44.1 kHz CD, then the resulting 256 AAC version made from the pristine master will sound noticeably better than the one made from a CD, even though the final process is workflow-identical and yields identical compression.
     
  7. MisterMe macrumors G4

    Joined:
    Jul 17, 2002
    Location:
    USA
    #7
    It is AIFF not AAIF.

    There is lots of nonsense in this thread. This is part of it. File size is a storage issue, not a processor issue. Audio plays back at rates measured in kilohertz (kHz); the rate at which computers process data is measured in gigahertz (GHz). Your computer is at least a thousand times faster than the bitrate of the least-compressed audio file. Even your iPhone has no trouble performing other tasks while playing your most intense audio recordings.
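    The arithmetic is trivial:

```python
# Data rate of uncompressed CD-quality stereo audio.
bytes_per_sec = 44100 * 2 * 2  # 44.1 kHz x 2 bytes/sample x 2 channels
print(bytes_per_sec, "bytes/sec")              # 176400
print(f"about {bytes_per_sec / 1e6:.2f} MB/s")  # about 0.18 MB/s
```

    A fifth of a megabyte per second is nothing to any machine built this century.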

    To the OP: WAV is an uncompressed audio format that originated on MS-DOS. AIFF is a much more sophisticated audio format. MP3 (MPEG-1 Layer III) is a format that uses a simple lossy-compression algorithm. MP3 can be decompressed, but the data lost during compression cannot be recovered.

    The opinions of novelists notwithstanding, no audio file format is going to tax your computer unless maybe you are running an Apple ][ or Commodore 64 or some such.
     
  8. KeithPratt macrumors 6502a

    Joined:
    Mar 6, 2007
    #8
    Then take a seat and pull out a pencil.

    No. The MP3 must be decoded to PCM, meaning more bits as well as the small decoding CPU tax.

    I didn't have time to read all of that in my lunch hour, but are you essentially recommending he record uncompressed? If so, we agree.
     
  9. rusty2192 thread starter macrumors 6502a

    Joined:
    Oct 15, 2008
    Location:
    Kentucky
    #9
    Sorry, I didn't mean to start a war :)

    Thanks for everyone's inputs, no matter how long or short. So what I take away from this is to just record in the highest quality format possible. When it needs to be converted down the road I'll let the software do it. In this case I will go with 24 bit WAV at a relatively high sampling rate and then convert to AIFF before importing to iMovie.

    I'm not really too worried about how taxing it will be on my computer. This is a favor for a family member and not a paying gig, so time isn't really a factor. Besides, it will most likely just be burned to a DVD for viewing anyway, with a digital copy on the side.

    Thanks again to everyone.
     
  10. aarond12 macrumors 65816

    Joined:
    May 20, 2002
    Location:
    Dallas, TX USA
    #10
    LOL! Not your fault. Just some faulty "knowledge" from Tyrone.

    Back in the 80386SX 16 MHz days, my computer had a bit of a problem keeping up with uncompressed 44.1 kHz 16-bit stereo audio. Today's computers don't have that issue. AIFF or WAV is your best choice. No additional decoding from MP3/MP4/AAC necessary. Also, no issue with scrubbing forward and backward like you might get with MP3/MP4/AAC.

    Another analogy: Apple converts HDV and AVCHD video files to Apple Intermediate Codec for use in iMovie. The AIC files end up being about 3x the size of the original files but take up a LOT less CPU to edit. The amount of CPU time to read a file is far less than the CPU time needed to decompress an audio or video file.
     
