What happens when I convert DV video to H.264 MPG video

Discussion in 'Digital Video' started by stoopkitty, Jan 16, 2011.

  1. stoopkitty macrumors member

    Joined:
    Jan 26, 2010
    #1
    So, I recently got my home videos converted from analog tapes to digital files, and they gave them to me in a .dv format. Some of these files were over 10 or 15 GB, so I decided to convert them to H.264 MPG using HandBrake, because I know that has smaller file sizes than DV. I did this, and I expected it to reduce the resolution or framerate or something. Instead, it preserved the framerate and the resolution and gave me a file about 1/4 of the size of the DV.

    So I was wondering: what did it do to reduce the size? I don't get all of this encoding stuff, and I was hoping someone could explain what it changes to make the file size smaller.
     
  2. chrismacguy macrumors 68000

    Joined:
    Feb 13, 2009
    Location:
    United Kingdom
    #2
    It compresses it, probably by removing extraneous data (the audio is probably compressed, and the resolution probably is too - but since the video was low quality to begin with, you wouldn't notice it).
     
  3. martinX macrumors 6502a

    martinX

    Joined:
    Aug 11, 2009
    Location:
    Australia
    #3
  4. KeithPratt macrumors 6502a

    Joined:
    Mar 6, 2007
    #4
    Have you seen the Simpsons episode 'Bart's Comet', where Principal Skinner is looking through his telescope, reeling off co-ordinates, each time followed by "no sighting"? Well uncompressed video is kinda like that. Just as there was no need for Bart to note down every "no sighting", we don't necessarily need to know the value of every pixel of every frame of a movie.

    Take the appended diagram of a video. It's 4 frames of pure black. The uncompressed data for this video would look something like this (real uncompressed data doesn't literally look like this, but it works as a representation of the compression involved):

    Frame 1:
    Pixel 1A: R 0, G 0, B 0
    Pixel 1B: R 0, G 0, B 0
    Pixel 1C: R 0, G 0, B 0
    Pixel 1D: R 0, G 0, B 0
    Pixel 2A: R 0, G 0, B 0
    Pixel 2B: R 0, G 0, B 0
    Pixel 2C: R 0, G 0, B 0
    Pixel 2D: R 0, G 0, B 0
    Pixel 3A: R 0, G 0, B 0
    Pixel 3B: R 0, G 0, B 0
    Pixel 3C: R 0, G 0, B 0
    Pixel 3D: R 0, G 0, B 0
    Pixel 4A: R 0, G 0, B 0
    Pixel 4B: R 0, G 0, B 0
    Pixel 4C: R 0, G 0, B 0
    Pixel 4D: R 0, G 0, B 0
    Frame 2:
    Pixel 1A: R 0, G 0, B 0
    Pixel 1B: R 0, G 0, B 0
    Pixel 1C: R 0, G 0, B 0
    Pixel 1D: R 0, G 0, B 0
    Pixel 2A: R 0, G 0, B 0
    Pixel 2B: R 0, G 0, B 0
    Pixel 2C: R 0, G 0, B 0
    Pixel 2D: R 0, G 0, B 0
    Pixel 3A: R 0, G 0, B 0
    Pixel 3B: R 0, G 0, B 0
    Pixel 3C: R 0, G 0, B 0
    Pixel 3D: R 0, G 0, B 0
    Pixel 4A: R 0, G 0, B 0
    Pixel 4B: R 0, G 0, B 0
    Pixel 4C: R 0, G 0, B 0
    Pixel 4D: R 0, G 0, B 0
    Frame 3:
    Pixel 1A: R 0, G 0, B 0
    Pixel 1B: R 0, G 0, B 0
    Pixel 1C: R 0, G 0, B 0
    Pixel 1D: R 0, G 0, B 0
    Pixel 2A: R 0, G 0, B 0
    Pixel 2B: R 0, G 0, B 0
    Pixel 2C: R 0, G 0, B 0
    Pixel 2D: R 0, G 0, B 0
    Pixel 3A: R 0, G 0, B 0
    Pixel 3B: R 0, G 0, B 0
    Pixel 3C: R 0, G 0, B 0
    Pixel 3D: R 0, G 0, B 0
    Pixel 4A: R 0, G 0, B 0
    Pixel 4B: R 0, G 0, B 0
    Pixel 4C: R 0, G 0, B 0
    Pixel 4D: R 0, G 0, B 0
    Frame 4:
    Pixel 1A: R 0, G 0, B 0
    Pixel 1B: R 0, G 0, B 0
    Pixel 1C: R 0, G 0, B 0
    Pixel 1D: R 0, G 0, B 0
    Pixel 2A: R 0, G 0, B 0
    Pixel 2B: R 0, G 0, B 0
    Pixel 2C: R 0, G 0, B 0
    Pixel 2D: R 0, G 0, B 0
    Pixel 3A: R 0, G 0, B 0
    Pixel 3B: R 0, G 0, B 0
    Pixel 3C: R 0, G 0, B 0
    Pixel 3D: R 0, G 0, B 0
    Pixel 4A: R 0, G 0, B 0
    Pixel 4B: R 0, G 0, B 0
    Pixel 4C: R 0, G 0, B 0
    Pixel 4D: R 0, G 0, B 0

    But if we use spatial compression we need only note this:

    Frame 1:
    Pixel 1A to 4D: R 0, G 0, B 0
    Frame 2:
    Pixel 1A to 4D: R 0, G 0, B 0
    Frame 3:
    Pixel 1A to 4D: R 0, G 0, B 0
    Frame 4:
    Pixel 1A to 4D: R 0, G 0, B 0

    And if we use temporal compression we need only note this:

    Frame 1 to 4:
    Pixel 1A to 4D: R 0, G 0, B 0
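
    If you like code, here's a toy Python sketch of the same idea (my own illustration - it's just run-length counting, nothing like what H.264 actually does internally, but it shows how repeated data collapses):

    # Toy run-length compression of the all-black video above.
    def spatial_compress(frame):
        """Collapse runs of identical pixels within one frame into (pixel, count) pairs."""
        runs = []
        for pixel in frame:
            if runs and runs[-1][0] == pixel:
                runs[-1] = (pixel, runs[-1][1] + 1)   # extend the current run
            else:
                runs.append((pixel, 1))               # start a new run
        return runs

    def temporal_compress(frames):
        """Collapse runs of identical frames into (frame, repeat_count) pairs."""
        groups = []
        for frame in frames:
            if groups and groups[-1][0] == frame:
                groups[-1] = (frame, groups[-1][1] + 1)
            else:
                groups.append((frame, 1))
        return groups

    black = (0, 0, 0)
    frame = [black] * 16          # one 4x4 frame of pure black
    video = [frame] * 4           # four identical frames

    print(spatial_compress(frame))   # [((0, 0, 0), 16)] - one entry instead of 16 pixels
    print(temporal_compress(video))  # one (frame, 4) entry instead of 4 full frames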


    Obviously not many videos are like this, but think of the credits of a movie, when most of the screen will be black for about 8 mins.

    The actual compression used in the videos you made in Handbrake is a lot more complicated than this, but this should give you a sense of the principle of throwing out the unnecessary data. The next stage on from what I illustrated above is deciding what colours are close enough that we can treat them as a single colour without losing too much image quality, and what is close enough to not moving that we can treat it as stationary. The smaller you attempt to make the video file size, the more lenient you have to be in this regard.
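
    Rough sketch of that "close enough" idea in Python (again, just my illustration, not the actual maths the encoder uses):

    # Snap each R/G/B value to the nearest multiple of `step`, so nearby colours
    # become the same colour and compress better (this is lossy).
    def quantize(pixel, step=16):
        return tuple((channel // step) * step for channel in pixel)

    # Treat a pixel as "not moving" if no channel changed by more than `tolerance`,
    # so the encoder can reuse the previous frame's value instead of storing a new one.
    def unchanged(prev, curr, tolerance=8):
        return all(abs(a - b) <= tolerance for a, b in zip(prev, curr))

    print(quantize((13, 14, 12)))                 # (0, 0, 0) - three near-black shades become one colour
    print(unchanged((10, 10, 10), (12, 9, 11)))   # True - small change, treat it as stationary

    The bigger you make the step and the tolerance, the smaller the file gets and the more the picture suffers - roughly the trade-off the quality setting in Handbrake is controlling.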

    EDIT: DV is not uncompressed - it's just compressed in a much more basic way than H.264. DV only compresses within each frame (spatial compression) at a fixed data rate and doesn't use temporal compression at all, so a lot more can be squeezed out.


    I think I deserve a gold star for this.
     

    Attached Files: [diagram of the 4-frame, all-black video described above]

  5. FSMBP macrumors 68020

    FSMBP

    Joined:
    Jan 22, 2009
    #5
    That was, by far, the best explanation I've seen in a while. Not only is it simple, it also gives a really useful analogy.
     
  6. chrismacguy macrumors 68000

    Joined:
    Feb 13, 2009
    Location:
    United Kingdom
    #6
    +1 - Moderators - award this dude a medal :D
     
  7. zblaxberg Guest

    zblaxberg

    Joined:
    Jan 22, 2007
    #7
    Buy this man a drink!
     
