My friend is telling me that a DVD that has been run through Handbrake and converted to MPEG-4 format will look better than the actual DVD.
Is this true? And if so, how is it actually true?
That is what I thought, but my friend said this to me.
When you use Handbrake to convert a DVD to MPEG-4, the program opens up the original file and then reconverts the original analog file to MPEG-4 and more or less upconverts the video and makes it look better than the original MPEG-2.
Is that quote from your friend?

Ok, I misunderstood what my friend was saying about the "analog" part of the DVD, but read this and see if this makes more sense.
Of course there isn't an ANALOG version on the DVD. I think the way you put the statement was misleading. Handbrake doesn't take the "original analog" source and convert it but rather decodes the MPG2 file into analog and then resamples it using MPG4 codec compression. Since the resampling is a better algorithm as well as having a higher bit rate (and resolution), the result can usually look a bit better than the original. Again, it is kind of like the "upconvert" that DVD players do. It won't be anything earth-shattering, but in a lot of instances the image will appear to be sharper.
Handbrake decodes the MPG2 file into analog and then resamples it using MPG4 codec compression
Let's assume that the user is using an older version of Apple's DVD Player application that doesn't handle de-interlacing very well.
Your friend is still on drugs or just plain stupid. There is absolutely no conversion to analog going from MPEG-2 to MPEG-4 with Handbrake or any other software. Computers deal only in digital - ones and zeros.
Handbrake was designed so people could convert their DVD footage so it would be compatible with playback devices that require MPEG-4.
-DH
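A toy sketch of the point above, in plain Python. This is illustrative only: lossy encoding is modeled here as simple rounding (quantization), which is an assumption for the sake of the example, not real MPEG math. The idea it demonstrates is that once the first encode has thrown detail away, a second digital-to-digital pass cannot bring it back.

```python
# Toy model: lossy encoding as quantization. Not real MPEG math,
# just an illustration that a second lossy pass cannot restore detail.

def quantize(samples, step):
    """Round each sample to the nearest multiple of `step`
    (a crude stand-in for a lossy encode)."""
    return [round(s / step) * step for s in samples]

original = [3, 7, 12, 18, 25, 31]

mpeg2_like = quantize(original, 4)    # first lossy encode (the DVD)
mpeg4_like = quantize(mpeg2_like, 4)  # transcode of the encode

print(mpeg2_like)                 # detail below the step size is gone
print(mpeg4_like == mpeg2_like)   # the transcode added nothing back
```

Re-quantizing already-quantized samples is a no-op, which is the whole argument: the transcode can at best preserve what the DVD encode kept, never recover what it discarded.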
But that's an improvement on the hardware/software used to display the content. It's not an improvement on the content itself. A DVD played back using a 15-year-old TV and a 10-year-old DVD player will look worse than if you played it back using a brand new TV and a brand new DVD player.

There is some truth to this... when you look at some of the advanced de-interlacing options in VLC, for instance, some of the blending / blurring ones are significantly better than just progressive-scanning.
You see the same effect on things like emulators of older gaming systems, like the Game Boy Color. You can do some tricks to reprocess the video on modern hardware to make it look much better than it did on the Game Boy.
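The blending de-interlacers mentioned above can be sketched as averaging the two fields' scanlines. This is a simplified, illustrative sketch (toy luma rows, and it emits one averaged row per field-row pair rather than a full-height frame as real blend filters do):

```python
# Blend de-interlacing sketch: instead of weaving fields (combing)
# or dropping one (detail loss), average the two fields' scanlines.
# Rows are lists of luma values here; purely illustrative.

def blend(top_field, bottom_field):
    """Average corresponding rows of the two fields."""
    frame = []
    for top_row, bottom_row in zip(top_field, bottom_field):
        averaged = [(t + b) / 2 for t, b in zip(top_row, bottom_row)]
        frame.append(averaged)  # one output row per field-row pair
    return frame

top = [[10, 20], [30, 40]]
bottom = [[20, 30], [50, 60]]
print(blend(top, bottom))  # [[15.0, 25.0], [40.0, 50.0]]
```

The averaging trades combing artifacts for softness, which is why blended output looks blurrier but steadier on motion.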
De-interlacing is just throwing information away.
There is always loss of spatial and/or temporal resolution, and typically artifacts not present in the original video are created as well. At its worst, de-interlacing tosses away 50% of the resolution of the image, and at its best the software tries to fill in the gaps by guesstimating what the missing visual information should look like.

But there's no loss of video content itself, outside of that, as I understand it.
If you start with 480i, each frame of the video stream has 240 rows, and on a display that is capable of interlacing, those rows are shown interleaved from one frame to the next. When you de-interlace in its most basic form, you take each pair of frames with 240 rows and interleave them permanently, replacing both frames with the 480-line result. The only sense in which there is loss of information is that extremely fast motion (something that moves during the pair of interleaved frames, so that its position is different in the two frames) gets lost, because the two frames are now shown simultaneously instead of in sequence.
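The interleave described above (a "weave" de-interlace) can be sketched in a few lines of Python. Toy row strings stand in for scanlines, and top-field-first order is assumed here for illustration; real sources can be either field order:

```python
# Weave de-interlacing sketch: merge two 240-row fields into one
# 480-row progressive frame by interleaving their scanlines.
# Rows are stand-in strings; assumes top field first.

def weave(top_field, bottom_field):
    """Interleave two fields into one progressive frame."""
    frame = []
    for top_row, bottom_row in zip(top_field, bottom_field):
        frame.append(top_row)     # odd (top) scanline
        frame.append(bottom_row)  # even (bottom) scanline
    return frame

# Tiny 3-row fields instead of 240-row ones:
top = ["T0", "T1", "T2"]
bottom = ["B0", "B1", "B2"]
print(weave(top, bottom))  # ['T0', 'B0', 'T1', 'B1', 'T2', 'B2']
```

No rows are discarded, which matches the post's point: the only thing lost is the 1/60th-of-a-second timing difference between the two fields.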
Not quite right. In NTSC, each frame of video is made from two fields: odd and even. The odd and even fields make up the 525 TV scan lines of a full frame of NTSC video. When you deinterlace video, you throw away one field.
-DH
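What -DH describes, throwing away one field, is essentially the crudest de-interlacer: keep one field and line-double it to fill the gaps. A sketch under the same toy-row assumptions as above (stand-in strings, not real video buffers):

```python
# "Discard one field" de-interlacing sketch: keep only the top field
# and duplicate each of its rows to restore the frame height.
# This is the case where naive de-interlacing halves vertical detail.

def discard_and_double(top_field, bottom_field):
    """Drop the bottom field; line-double the top field."""
    frame = []
    for row in top_field:
        frame.append(row)  # original scanline
        frame.append(row)  # duplicated to fill the discarded line
    return frame

top = ["T0", "T1", "T2"]
bottom = ["B0", "B1", "B2"]  # thrown away entirely
print(discard_and_double(top, bottom))
# ['T0', 'T0', 'T1', 'T1', 'T2', 'T2']
```

Comparing this with the weave sketch shows the disagreement in the thread: weave keeps every scanline, while field-discard really does toss 50% of the vertical resolution.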
You are kind of confusing pulldown and interlacing. If you "weave" two interlaced fields together (show them both at the same time) to get one progressive frame you *will* get nasty artifacts unless there is very little to no motion in the frame, because you are combining two images that were recorded 1/60th of a second apart. You don't lose any spatial resolution but you do lose temporal resolution (60i down to 30p), and the image most likely looks like crap to boot.

NTSC TV signals display at ~60 /fields/ per second, or ~30 /frames/ per second. You actually have both odd and even fields for a single frame of video. You don't throw away any fields (unless you are doing a detelecine, but that is something entirely different, and the data thrown away is redundant anyway); instead, you do as the other poster stated: you merge both fields together into a single progressive frame. No loss of data. You will get combing or other artifacts if the fields don't match up, though. That is usually due to the video being 'telecined' into 29.97fps for DVD from a 23.976fps source film. You get weird frames that have mismatched even and odd fields, intentionally, so that it will display at 29.97fps while the timing is still correct for 23.976fps content. Detelecine routines remove/fix these mismatched frames and bring the content back to the original 23.976fps format. As long as the DVD author hasn't done anything really weird (like frame blending), you shouldn't be throwing away any data you need.
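The telecine pattern being discussed (23.976 fps film fitted into 29.97 fps video) can be sketched as the classic 2:3 pulldown: every 4 film frames become 10 fields, i.e. 5 interlaced frames, two of which mix fields from different film frames. The cadence below is the common one; real DVDs can vary, so this is illustrative only:

```python
# 3:2 (2:3) pulldown sketch: 4 progressive film frames -> 5
# interlaced video frames (10 fields). Frames are named strings;
# fields are (frame_name, 'top'/'bot') tuples.

def telecine(frames):
    """Spread 4 film frames across 10 fields (2:3:2:3 cadence)."""
    a, b, c, d = frames
    fields = [
        (a, "top"), (a, "bot"),   # video frame 1: clean A
        (b, "top"), (b, "bot"),   # video frame 2: clean B
        (b, "top"), (c, "bot"),   # video frame 3: B/C mix
        (c, "top"), (d, "bot"),   # video frame 4: C/D mix
        (d, "top"), (d, "bot"),   # video frame 5: clean D
    ]
    # Pair fields into the 5 interlaced frames a player would show:
    return [tuple(fields[i:i + 2]) for i in range(0, 10, 2)]

video = telecine(["A", "B", "C", "D"])
mixed = [f for f in video if f[0][0] != f[1][0]]
print(len(video), "video frames,", len(mixed), "with mismatched fields")
# 5 video frames, 2 with mismatched fields
```

A detelecine pass does the inverse: it spots those two mismatched frames, reassembles the original A/B/C/D fields, and hands back 4 progressive frames, which is why the data thrown away is redundant.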
You are kind of confusing pulldown and interlacing. If you "weave" two interlaced fields together (show them both at the same time) to get one progressive frame you *will* get nasty artifacts unless there is very little to no motion in the frame, because you are combining two images that were recorded 1/60th of a second apart. You don't lose any spatial resolution but you do lose temporal resolution (60i down to 30p), and the image most likely looks like crap to boot.
The vast majority of video production is done at 60i (or 50i for our PAL brothers) and has been since the birth of TV. Video cameras that shoot 24p have only been available for about 8-9 years or so, and 24fps is only good to use for certain types of projects.

You are confusing the stored format and the recording format as well. I'd wager that the vast majority of DVD content out there is 24p telecined. Yes, there are others that break the rules (animation in general is a pain, but can mostly be addressed with a good VFR encoder that can detect the frame pairs)... but the general rule is pretty straightforward. Who in the world actually shoots at 60i these days? My understanding is that both movie studios and TV studios have been using 24fps progressive cameras for decades.
It's already been mentioned in this thread but I want to mention it again so that it is stressed:

It's like ripping a CD, in essence. Even if you rip a CD in a 'lossless' format, you're still going to be losing some quality. The reason for this is that the CD's tracks have been compressed/encoded previously; doing it again will reduce quality.