Introduction

In the beginning, Philo T. Farnsworth invented the television. One of his biggest problems, which still exists today, was that he wanted to send more data than was possible given the space (bandwidth) available. So instead he came up with the first video compression scheme: interlaced video. Since 1921, the video we have watched has been interlaced, and only recently have new, non-interlaced (progressive) formats become available. Despite the availability of progressive display technologies like plasma and LCD televisions, we continue to watch a legacy of artifacts related to interlaced video. This paper will do three things: describe how interlaced video works; illustrate why it is not appropriate for the new progressive display devices now on the market; and explain how to do a good job of de-interlacing a video image.

What is interlaced video?

Most televisions use a Cathode Ray Tube, or CRT. Your TV screen is actually the end of this tube, which is painted on the inside with phosphor, a chemical that glows when it is hit by an electron beam. Near the front of the tube (the back of your TV), there is an "electron gun" that sends a beam of electrons towards the screen. The electronics in the TV allow the gun to be aimed, which allows the entire front face of the tube to be "painted" with the electron beam. This causes the front of the tube to glow, and makes the pictures we observe.

In order for the TV to paint the video picture, the image is broken down into a series of horizontal lines, which make up a single frame of video. One frame is a still picture on the TV. To generate a moving picture, the frames are continuously updated, usually about 60 times every second. To draw a higher definition picture, more horizontal lines are added. It is not quite that simple, however, because television transmission systems only allow so many lines per second to be transmitted. If we send more lines per frame, then we cannot update the frames as often. And if the frame rate is too slow, we see a flickering image.

When the current television standards were developed, a careful trade-off was made between the number of lines per frame and the number of frames per second, given the available bandwidth (how much data can be sent per second). The original developers of television had very limited bandwidth, which forced a difficult choice between a picture with terrible resolution and one with a lot of flicker. So they decided to cheat. They developed a system where the TV first drew a low resolution picture using every other line of the frame, and then went back and filled in the missing lines. The result was an image with an acceptable resolution (525 horizontal lines) and a fast enough refresh rate (60 fields per second) for the result to be of acceptable quality. This basic system, first introduced about 70 years ago, is still what most people watch when they turn on the TV.
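
To make the trade-off concrete, here is a rough back-of-the-envelope calculation in Python. The numbers are the familiar NTSC figures (525 lines, 60 Hz refresh) and are used purely for illustration.

    # Rough line-rate comparison for an NTSC-like system (illustrative only).
    LINES_PER_FRAME = 525   # total horizontal lines in one complete frame
    REFRESH_RATE = 60       # screen refreshes per second (fields or frames)

    # Progressive: every refresh redraws all the lines.
    progressive_rate = LINES_PER_FRAME * REFRESH_RATE        # 31,500 lines/s

    # Interlaced: every refresh redraws only half the lines (one field).
    interlaced_rate = LINES_PER_FRAME / 2 * REFRESH_RATE     # 15,750 lines/s

    print(progressive_rate, interlaced_rate)
    # Interlacing halves the line rate (and so the bandwidth) while keeping
    # the 60 Hz refresh that avoids visible flicker.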

In the figure above, the television's electron gun first draws the red lines. The dotted lines show the path that the CRT beam takes when it ends one line and goes back to the beginning of the next line (known as horizontal retrace). Once all the red lines have been drawn, the CRT beam goes back up to the top, along the black dotted line path, and draws in the green lines. The black dotted line is known as the vertical retrace. Each half of the total frame, the red half and the green half, is known as a field. The two fields are said to be interlaced.

Interlacing takes advantage of the persistence of our visual system. When a line is drawn on the TV screen for a very short time, we continue to see it, even after it has actually faded. A sequence of images, 60 of them every second, appears to us as continuous motion. By alternating the odd and even lines, we get twice the vertical resolution for the available bandwidth and we avoid visible flickering of the image. The video image is scanned onto the CRT at the same time as it is received from the broadcaster, which means that no memory is required in the system.
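
As a minimal sketch (using NumPy, with array names of my own choosing), splitting a full frame into its two interlaced fields is just a matter of taking every other line:

    import numpy as np

    def split_into_fields(frame):
        """Split a full frame (rows x cols) into its two interlaced fields."""
        top_field = frame[0::2, :]     # even-numbered lines (0, 2, 4, ...)
        bottom_field = frame[1::2, :]  # odd-numbered lines (1, 3, 5, ...)
        return top_field, bottom_field

    frame = np.arange(12).reshape(6, 2)   # toy 6-line "frame"
    top, bottom = split_into_fields(frame)
    print(top.shape, bottom.shape)        # (3, 2) (3, 2): each field has half the lines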

So what's wrong with interlaced video?

Today, many television monitors are not CRT-based, and in many cases, they are not even scanned systems. If you take a photograph of a CRT with a fast shutter speed, you only see a portion of the entire picture, since the image is being continually repainted on the screen. If you take a photograph of an LCD or plasma display, you will see the entire image since every pixel is "on" all of the time. In order to display an interlaced image on one of these displays, it must first be converted to a non-interlaced, or progressive scan, image. This process is called de-interlacing.

In addition to the new displays, there are new video formats, not all of which are interlaced. The common HDTV standards in North America are usually referred to as 1080i (1080 lines, interlaced) and 720p (720 lines, progressive). Plasma and LCD display devices can also have their own native resolutions which may not match either of these standards. As a result, de-interlacing is often done in conjunction with the resizing of an image, as a part of the coordinated image processing system inside your TV or DVD player.

There are several methods used to de-interlace a video image, many of which are grossly inadequate. The two most common of these are weaving and blending. In weaving, field 1 and field 2 are shown together during the first 1/60th of a second. For the next 1/60th of a second, field 2 and field 3 are shown, and so on, such that half of the lines are updated every successive 1/60th of a second. This sounds clever, until you realize that you are displaying field 1 and field 2 at the same time, when they were actually captured 1/60th of a second apart. Doing this creates visual artefacts. As an example, imagine how a hockey stick might look moving from left to right across the screen:

The hockey stick is drawn on the screen as a series of consecutive lines. In the interlaced format, the camera only captures and transmits every other line every 1/60th of a second:

When two fields that were sampled at different times are shown at the same time, the result is blurring along the direction of motion. Moreover, sharp edges in the original image become jagged, an effect known as "mouse teeth". Mouse teeth are a common artefact of the weaving method of de-interlacing.
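
In code, a weave de-interlacer is almost trivial, which is why it is so common. The sketch below is my own illustration, not any particular product's implementation; the artefact arises because the two woven fields were captured 1/60th of a second apart.

    import numpy as np

    def weave(field_a, field_b):
        """Weave two consecutive fields into one frame: field_a fills the even
        lines, field_b the odd lines. Anything that moved between the two
        capture times shows up as 'mouse teeth' on edges."""
        rows, cols = field_a.shape
        frame = np.empty((rows * 2, cols), dtype=field_a.dtype)
        frame[0::2, :] = field_a
        frame[1::2, :] = field_b
        return frame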

Another simple way to de-interlace a video image is called blending. In blending, consecutive fields are averaged and then displayed as one frame. Blending has the advantage of not generating any mouse teeth, but it results in a loss of both vertical and temporal resolution, making your high definition TV not so high definition.
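
Blending is just as easy to sketch. The function below is again a simplified illustration of my own (real blenders filter more carefully): the two fields are averaged, so moving edges are smeared rather than jagged, and fine vertical detail is lost.

    import numpy as np

    def blend(field_a, field_b):
        """Average two consecutive fields and repeat each averaged line to fill
        a full frame. No mouse teeth, but vertical and temporal detail is lost."""
        averaged = (field_a.astype(np.float32) + field_b.astype(np.float32)) / 2.0
        return np.repeat(averaged, 2, axis=0)   # half-height field -> full-height frame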

The best way to perform de-interlacing is also the most difficult to implement. Rather than simply regenerating field 1 and displaying it along with field 2, each portion of the image in field 1 is analyzed, and a new field 1′ is generated and displayed with field 2. Field 1′ is similar to field 1, but motion-adapted. If the de-interlacer determines that there is a hockey stick moving from left to right across the screen, it generates the new field 1′ with the stick located where it ought to be at the time of field 2. This method produces a video frame with no mouse teeth, at the expense of only a few million calculations per second. The complex algorithms involved in motion-compensated de-interlacing are implemented within high end image processors such as Gennum's GF9350 with VXP™ technology.
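
Gennum's actual algorithms are proprietary, but a generic motion-adaptive de-interlacer can be sketched as follows: where the picture is static, weave (keeping full vertical resolution); where motion is detected, interpolate the missing lines from the current field instead. The per-pixel difference used as a motion detector here is deliberately crude and only illustrates the idea.

    import numpy as np

    def motion_adaptive_deinterlace(prev_field, cur_field, next_field, threshold=10):
        """Rebuild the missing lines of cur_field. prev_field and next_field sit
        at the missing line positions, one field before and one field after."""
        # Crude motion measure: how much did the missing lines change between
        # the field before and the field after the current one?
        motion = np.abs(next_field.astype(np.int32) - prev_field.astype(np.int32))

        # Static areas: reuse the lines from the neighbouring field (weave).
        weave_lines = prev_field.astype(np.int32)

        # Moving areas: interpolate vertically within the current field ("bob").
        below = np.roll(cur_field, -1, axis=0)   # wraps at the last line; fine for a sketch
        bob_lines = (cur_field.astype(np.int32) + below.astype(np.int32)) // 2

        missing = np.where(motion < threshold, weave_lines, bob_lines)

        # Interleave the known lines with the reconstructed ones.
        rows, cols = cur_field.shape
        frame = np.empty((rows * 2, cols), dtype=np.int32)
        frame[0::2, :] = cur_field
        frame[1::2, :] = missing
        return frame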

Gennum implements two distinct algorithms in its chipset to properly de-interlace an image. FineEdge™ processing is a motion compensated de-interlacing algorithm that eliminates mouse teeth. Another algorithm, called TruMotionHD™, assists FineEdge™ by processing the time-based (temporal) characteristics of the video.

The VXP™ technology recognizes that the most common type of processing within the video display is not simply de-interlacing, but de-interlacing combined with rescaling of the image. This two-step processing is most effective when scaling from a higher to a lower resolution; therefore, in a system using the Gennum VXP™ image processor, the input image is usually first de-interlaced and upconverted to a 1080p (progressive) image, then downconverted to the native resolution of the display.
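
In outline, the ordering looks like the sketch below. The function names and the toy nearest-neighbour scaler are placeholders of mine, not Gennum's API, and the de-interlacing step itself is omitted; only the scaling order is shown.

    import numpy as np

    def resize_nearest(img, out_rows, out_cols):
        """Toy nearest-neighbour scaler, standing in for a real video scaler."""
        rows, cols = img.shape
        r_idx = np.arange(out_rows) * rows // out_rows
        c_idx = np.arange(out_cols) * cols // out_cols
        return img[r_idx[:, None], c_idx]

    def display_pipeline(progressive_frame, native_rows, native_cols):
        """Illustrative ordering: upconvert the de-interlaced image to 1080p
        first, then scale down to the panel's native resolution, so the final
        scaling step always moves downward."""
        hd = resize_nearest(progressive_frame, 1080, 1920)        # upconvert to 1080p
        return resize_nearest(hd, native_rows, native_cols)       # down to native panel size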

Even with the complex motion-adaptive algorithms used in high end displays, there is another situation that can result in de-interlacing artefacts. Motion picture films are shot at a rate of 24 frames per second. Video is distributed at 30 frames per second. In order to change the frame rate, a process called "3:2 pulldown" is used.

Imagine an initial film sequence of four frames: A, B, C, D. A telecine machine samples the frames and generates two interlaced fields from each frame (A1, A2, B1, B2, and so on), where A1 is the field generated using the even numbered lines of frame A, and A2 is the field generated using the odd numbered lines. To produce the final interlaced video output, the fields are sent in the order A1, A2, A1, B2, B1, C2, C1, C2, D1, D2… The result is that there are 3 fields of A, then 2 of B, then 3 of C, then 2 of D.
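
The cadence is easy to reproduce in a few lines of Python. The A1/A2 labels follow the notation above; the exact choice of which field leads can differ between telecine machines.

    def three_two_pulldown(frames):
        """Generate the 3:2 pulldown field sequence from 24 fps film frames.
        Frames alternately contribute 3 fields and 2 fields, so 24 frames per
        second become 60 fields per second (24 * 2.5 = 60), and the output
        keeps alternating between the two field types."""
        fields = []
        parity = 1                          # which field type (1 or 2) comes next
        for i, frame in enumerate(frames):
            count = 3 if i % 2 == 0 else 2  # 3, 2, 3, 2, ...
            for _ in range(count):
                fields.append(f"{frame}{parity}")
                parity = 3 - parity         # toggle between 1 and 2
        return fields

    print(three_two_pulldown(["A", "B", "C", "D"]))
    # ['A1', 'A2', 'A1', 'B2', 'B1', 'C2', 'C1', 'C2', 'D1', 'D2']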

For a normal video image, the processor would assume that A2 comes 1/60th of a second after A1 when, in fact, they were sampled by the motion picture camera at the same time. In addition, the time difference between frames A and B is not 1/60th of a second, but 1/24th. To obtain a good quality image on an HD monitor from source material that may have originated as either motion picture film or as video, the de-interlacer must be able to account for film mode. The Gennum TruMotionHD™ algorithm has very robust film mode detection and automatically adjusts its de-interlacing in order to accommodate film mode.
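
Real film mode detectors are considerably more involved, but the basic idea can be sketched under simple assumptions: in a clean 3:2 cadence, every fifth field is a repeat of the field two positions earlier, and finding that repeat pattern tells the de-interlacer which fields belong to the same film frame.

    import numpy as np

    def looks_like_film(fields, match_threshold=1.0):
        """Very rough 3:2 cadence check. 'fields' is a list of numpy arrays;
        a repeat is flagged when a field is nearly identical to the field two
        positions earlier. Real detectors track this over many frames and also
        lock on to where in the cadence they are."""
        repeats = 0
        for i in range(2, len(fields)):
            diff = np.mean(np.abs(fields[i].astype(np.float32) -
                                  fields[i - 2].astype(np.float32)))
            if diff < match_threshold:
                repeats += 1
        # In a clean 3:2 cadence roughly one field in five is such a repeat.
        return repeats >= (len(fields) - 2) / 5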

With the emergence of high definition television formats, and the plasma and LCD progressive displays that take advantage of them, the need for high quality de-interlacing is self-evident. Unfortunately, not all de-interlacers are created equal. This is easy to see if you go down to your local electronics store and look at the monitors on display. Are high contrast moving edges sharp? Is there blurring or motion where there shouldn't be? Even the most expensive progressive display televisions can show the artefacts of poor de-interlacing. If you're going to spend so much on a television that in every other respect is far superior to the traditional CRT, you also deserve the best possible processing to ensure the best quality image!