Best single camera technique you can use to make videos look better on the Internet
Here’s the best single thing you can do to preserve quality when your video is compressed for the web: shoot on a tripod and move the camera only when it’s necessary to tell your story. That means limiting pans, zooms, and tilts. And although those moving shots on a dolly or jib look really sweet, once the codec is done compressing a moving shot, its quality will have dropped so much that it isn’t so sweet after all.
Here’s why: no matter what kind of compression you use (MPEG, MOV, WMV, FLV, etc.), the encoding process is designed to severely reduce the amount of information that has to be sent each second across the Internet. It does this in two major ways, which I’ll explain in a non-technical, simplified manner.
- Codecs look for pixels in each frame of video (typically about 30 fps, though often reduced to 15 fps on the web) that are exactly the same as the pixels in the preceding frame. When the codec finds an exact match, it just tells your player (the decoder) to reuse the pixel from before. This is a very efficient way to reduce the amount of information that has to be sent and decoded each second, which frees the codec to spend the bandwidth you’ve allotted on better-looking images and better-sounding audio, because it isn’t burning all of that bandwidth just describing the hundreds of thousands of pixels in each frame (see the sketch after this list). However, whenever the camera is moving, every pixel of every frame is changing and there is no information from the previous frame that the codec can reuse. There is no efficiency. Whatever data isn’t reduced in this manner is then reduced in the following way.
- Codecs reduce quality in every possible way: shrinking the color palette, reducing the contrast between lights and darks, eliminating detail (making things blurry or pixelated), reducing the depth and clarity of the sound, and much more. They do this because your video in its uncompressed state would require hundreds of millions of bits to be transferred per second, while Internet connection speeds (bandwidth) are much, much smaller than that (the rough calculation after this list shows how big the gap is). So, to shrink the amount of information it has to send per second down to the available bandwidth, the codec simply throws data away. Less data means less information about the picture and the sound, and that means a severe loss of quality.
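To make #1 concrete, here’s a minimal sketch in Python (using NumPy). It’s my own simplified illustration, not what a real codec does internally — real encoders work on motion-compensated blocks rather than comparing individual pixels — but it shows why a tripod shot leaves the encoder far less to describe each frame than a pan does.

```python
import numpy as np

rng = np.random.default_rng(0)

# A detailed but motionless background (think: the wall behind your actors).
height, width = 480, 640
background = rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)

def changed_pixel_ratio(prev_frame, curr_frame):
    """Fraction of pixels that differ from the previous frame and must be
    encoded again; identical pixels can simply be reused (simplified model)."""
    changed = np.any(prev_frame != curr_frame, axis=-1)  # per-pixel comparison
    return changed.mean()

# Tripod shot: only a small moving subject changes between frames.
prev_frame = background.copy()
curr_frame = background.copy()
curr_frame[200:280, 300:380] = 255  # the actor shifts slightly

# Panning shot: the whole image slides sideways, so nearly every pixel changes.
panned_frame = np.roll(background, shift=5, axis=1)
panned_frame[200:280, 300:380] = 255

print(f"tripod: {changed_pixel_ratio(prev_frame, curr_frame):.1%} of pixels changed")
print(f"pan:    {changed_pixel_ratio(prev_frame, panned_frame):.1%} of pixels changed")
```

On the tripod shot only a couple of percent of the pixels need to be encoded again each frame; on the pan, virtually all of them do — and that difference is exactly the efficiency the codec loses while the camera is moving.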
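And to get a feel for how big the gap in #2 really is, here’s a back-of-the-envelope calculation. The frame size, frame rate, and connection speed are illustrative assumptions of mine, not figures tied to any particular codec or viewer:

```python
# Rough, illustrative numbers: why uncompressed video can't travel over a
# typical web connection without throwing data away.
width, height = 1280, 720          # a common HD frame size
bits_per_pixel = 24                # 8 bits each for red, green, and blue
frames_per_second = 30

uncompressed_bps = width * height * bits_per_pixel * frames_per_second
connection_bps = 5_000_000         # an assumed 5 Mbps viewer connection

print(f"uncompressed: {uncompressed_bps / 1e6:.0f} Mbps")            # roughly 660 Mbps
print(f"needs roughly {uncompressed_bps / connection_bps:.0f}:1 compression")
```

Even at a modest HD frame size, the codec has to squeeze the picture by more than a hundred to one to fit the pipe, so every bit it wastes re-describing pixels that didn’t change is a bit it can’t spend on quality.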
Okay, that means that if your camera is not moving, there will be a significant number of pixels that are the same from frame to frame (such as the wall behind your actors, which isn’t moving). That accounts for a huge amount of data that doesn’t have to be sent and processed (#1 above). So, the codec does not have to trash your video and audio quality as much to get it down to match the available bandwidth (#2 above).
I’m not saying that you should never move the camera. Of course, there are many times when camera movement is essential to tell the story, direct attention, or keep up with the action. Just know that while the camera is moving, the quality of the video and audio will definitely drop. So always ask yourself whether the movement is essential, or whether there is another way to shoot the scene without it and raise the overall visual and audio quality of your program.