Understanding Video Formats for the Maxedia




With the Maxedia system, Martin is entering a new world: video.

When you do a demo with the Maxedia, you are suddenly confronted with a lot of new names, formats and abbreviations from the video world. In this document I briefly explain the different formats. Please use it as a guideline; much more information can be found on the internet.


A. Analog video connections:
In the old days things were easy: we connected video outputs to video inputs and were done. Now we have several video connections: composite video, Y/C (or S-Video), RGB, digital DV, SDI, HD-SDI, DVI, VGA,…

Analog signals are increasingly being replaced by digital ones. Here is a short overview:



Composite Video: Composite video, also called CVBS (Color, Video, Blanking and Sync), is still the most used connection. It is a ‘composition’ of the luminance, color and synchronization signals into one signal, containing everything needed to transfer analog video, which makes it easy to connect video equipment through a single coax cable. This is the lowest-quality video source, but also the most common. Connections are made through BNC or RCA (Cinch) connectors.

BNC-Plug RCA-plugs RCA to BNC converter



S-Video: S-Video is often incorrectly referred to as S-VHS, since it first came into home use with the introduction of that video tape format. Professionals prefer the name Y/C video rather than S-Video, since it is a clearer description of the signal format.

The connection has 4 pins: two for ground, one for the black-and-white composite image (luminance), and one for the color information (chrominance).

This cable is good for short distances between video equipment and gives better video quality, particularly for the colors.

It is not useful for longer distances, because color shifting can appear.

The better video recorders have a setting to adjust the timing of the incoming signal, to remove the shift between the two signals caused by a long cable.

S-Video Cable Connections



SCART (Syndicat des Constructeurs d'Appareils Radiorécepteurs et Téléviseurs) or EUROconnector: SCART is mainly used in the home environment to connect a VCR to a TV set. This connection allows composite, S-Video and RGB analog video plus analog stereo audio to be transmitted between equipment through a single multi-wire cable. The cable lengths are always short, in line with this purpose.

Many adaptors are available to interconnect Composite, S-video (and more) to the Scart connection.


SCART Cable SCART adaptor



RGB: This analog RGB (Red, Green, Blue) connection has higher quality than the composite or S-Video signals and is used for high-quality consumer and professional equipment. The signal consists of the 3 basic colors of video, so less signal conversion and handling needs to be done inside the equipment, which gives better quality. The green signal also carries the synchronization signals.

Connections are in 3 colors and in RCA or BNC





YUV (Y R-Y B-Y, YPbPr): With the introduction of DVD players, an analog YUV video connection was introduced for high-end TVs. Since there is some confusion about what to label the connectors, YUV, Y R-Y B-Y and YPbPr are all used. YUV is derived from gamma-corrected RGB and gives more accurate and crisp images. The same cables as for RGB are used for the connections.

YUV cable



VGA (Video Graphics Array): a graphics display system for PCs developed by IBM. VGA has become one of the ‘de facto’ standards for PCs. Since its introduction in 1987, several other standards have been developed that offer greater resolution and more colors (SVGA, XGA,…), but VGA remains the default. All PCs made today support VGA, and possibly some more advanced standard. The signal is distributed through a 15-pin DB-15 connector; adapter cables with 5 wires exist, the so-called 5-wire connection: Red, Green, Blue, Horizontal sync and Vertical sync.

Connector DB-15 Adapter VGA to 5-wire BNC


B. Digital video connections:
DV, FireWire or IEEE 1394: FireWire, accepted by the IEEE as the IEEE 1394 standard, was invented by Apple as a 400 Mbit/s peripheral standard between devices across its entire product line. In 1995, a tiny connector showed up on the first DV camcorders shipped by Sony: the so-called DV (Digital Video) connection. DV was the killer application for FireWire. Later on, everybody adopted this protocol for new digital video devices like cameras, VCRs, DVD recorders etc. In both the consumer and the professional world this has become a common connector to transfer DV over short distances between devices: according to the standard, 400 Mbit/s over 4.5 meters. This means it is mainly used for NLE (Non-Linear Editing) purposes on PCs and Macs.

Therefore it is not suitable for connecting a camera to the Maxedia, since the distance is very short. (Although I have been doing tests up to 50 meters with CAT5 cable, since the DV data is about 100 Mbit/s.) It is only used to transfer video into the Maxedia.

The 2 possible connectors: the normal and the small one.




The DV signal contains a PAL (720x576 pixels, 25 fps) or NTSC (720x480 pixels, 30 fps) signal at a sustained 3.5 Mbyte of data per second.
To connect a Live camera to the Maxedia either analog composite/S-Video or SDI is suitable.
SDI (Serial Digital Interface) YCbCr: This is the digital version of YUV. The YUV signal is digitized and then serialized at very high speed, around 270 Mbit/s, as a 10-bit signal. It is now replacing the analog composite and RGB signals more and more in broadcast and presentation events; cameras, video mixers etc. all start to use it.

The signal carries digitized PAL 576i (720x576 pixels) or NTSC 480i (720x480 pixels) video.

The signal is distributed on a BNC, and the same coaxial cable used for composite and RGB signals can be used again. This makes it a popular signal, since existing cabling can be re-used and the quality stays digital over long distances (300 meters). Even longer distances can be achieved by transforming the signal onto fiber.

SDI BNC connection
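The 270 Mbit/s figure follows directly from the sampling scheme. A minimal sketch of the arithmetic, assuming the standard-definition 4:2:2 sampling used by SDI (luminance at 13.5 MHz, each color-difference channel at half that rate, 10 bits per sample):

```python
# SD-SDI data rate: 4:2:2 sampling, 10 bits per sample.
# Luminance (Y) is sampled at 13.5 MHz; each color-difference channel
# (Cb and Cr) at half that rate, 6.75 MHz.
y_rate = 13_500_000          # Y samples per second
c_rate = 6_750_000           # Cb (and likewise Cr) samples per second
bits_per_sample = 10

total_samples_per_sec = y_rate + 2 * c_rate      # Y + Cb + Cr
bitrate = total_samples_per_sec * bits_per_sample

print(bitrate)               # 270000000, i.e. 270 Mbit/s
```

The same arithmetic scaled up to HD line and frame counts is what pushes HD-SDI to 1.485 Gbit/s.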


HD-SDI: The High-Definition SDI version has 10-bit video and carries high-definition TV formats up to 1280x720 and 1920x1080 pixels. The luminance and color are serialized, so the signal needs to go up in frequency, to 1.485 Gbit/s.

The same BNC cables are used. It is also possible to transmit the video over two BNCs in parallel (dual link).



HD-SDI BNC connector


DVI (Digital Visual Interface): A digital interface standard created by the Digital Display Working Group (DDWG) that can carry both digital and analog (VGA-style) signals, to accommodate both analog and digital monitors. DVI handles bandwidths up to 160 MHz and supports UXGA and HDTV with a single link; higher resolutions can be supported with a dual set of parallel links. DVI is going to replace analog VGA on graphics cards, monitors and other devices such as plasma screens and projectors.
DVI is limited to 5 meters in distance, according to the DVI 1.0 specification. But with solid soldering techniques, low-capacitance, thicker DVI cables and fiber-optic interface technologies, the distance of DVI transmission is not limited and can extend as far as 500 meters from the source to the display device.

The Maxedia uses the DVI output to transmit the high quality output images.




C. Video formats

For those who want to know more about video formats (what interlaced or progressive video is, what DV is, etc.), here is an overview.


What is analog Video?

Originally, when video formats were invented, they were designed to be transmitted by TV broadcast stations over wireless video transmitters.

What are the basics? A camera scans the video lines one by one, from top to bottom, until the whole video field has been scanned, and then starts re-scanning the lines from the top. On the other side, the TV set receives the same signal and displays the same video line the camera is scanning at that moment. To keep the two systems in sync, the camera transmits extra sync pulses so that the TV set can synchronize on the line and field currently being transmitted. This explains why, if the sync pulses are not received properly (because the video signal is too weak, for example), the screen starts scrolling and flipping.

3 video formats are used worldwide. They differ from each other in the number of fields per second and the format of the sync signals.


- PAL (Phase Alternation by Line) Mainly used in Europe.

50 fields (25 frames) per second, 625 lines per frame.


- NTSC (National Television System Committee) Mainly used in America and Asia.

59.94 fields per second, 525 lines per frame.

- SECAM (Système Électronique Couleur Avec Mémoire)

50 fields (25 frames) per second, 625 lines per frame.


Mainly the PAL and NTSC formats are used.
Why is video interlaced?

A video movie is a sequence of images (called frames). PAL TV is video displayed at 25fps (frames per second) and NTSC TV at 30fps.

Video at 25 or 30 fps is enough for the human eye, but on a TV screen the image is perceived as flickering. Engineers found that displaying each frame in two parts (called fields), one field for the odd lines (1, 3, 5, 7...) and one for the even lines (2, 4, 6, 8...), and doubling the rate (to 60 or 50 half-images per second) avoids the flicker.

Therefore analog video became interlaced video: 60 half-images per second for NTSC and 50 half-images per second for PAL.

This is important to know, because interlaced video images displayed on a computer monitor result in strangely shifted images:

The faster the image moves in the video, the more it is distorted, because the even and odd lines, which are shifted in time by 1/60 or 1/50 of a second, are displayed simultaneously on a computer monitor.
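The split into fields can be illustrated in a few lines. This is only a sketch (the function and variable names are mine, not from any video API): a frame is modeled as a list of scan lines, and the two fields are simply the alternating lines.

```python
def split_into_fields(frame):
    """Split a full frame (a list of scan lines) into two fields.

    Video lines are numbered from 1, so the 'odd' field holds lines
    1, 3, 5, ... (even Python indices) and the 'even' field holds
    lines 2, 4, 6, ...
    """
    odd_field = frame[0::2]    # lines 1, 3, 5, ...
    even_field = frame[1::2]   # lines 2, 4, 6, ...
    return odd_field, even_field

# A toy 6-line frame:
frame = ["line1", "line2", "line3", "line4", "line5", "line6"]
odd, even = split_into_fields(frame)
print(odd)    # ['line1', 'line3', 'line5']
print(even)   # ['line2', 'line4', 'line6']
```

On a TV the two fields are shown 1/50 or 1/60 of a second apart; a computer monitor shows both at once, which is exactly where the comb-like distortion on moving images comes from.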


What is deinterlacing?

For the best video quality on a computer monitor the even and odd video fields need to be merged back together for a proper video image.

The simplest method is to remove one field and duplicate the other:

But the end result is a picture with only half the resolution, since every line is shown twice.

The result is improved by interpolation: ‘estimating’ the second field's values by computing mid-values from the lines above and below in the first field. This leads to a smoother-looking computer image:
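Both methods can be sketched in a few lines of Python. This is a toy illustration, not production deinterlacing code: a field is a list of rows, and each row is a list of grayscale pixel values.

```python
def deinterlace_duplicate(field):
    """Line doubling: keep one field and repeat each of its lines.
    Restores the frame height, but with only half the vertical resolution."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)          # the duplicated line
    return frame

def deinterlace_interpolate(field):
    """Interpolation: estimate each missing line as the average of the
    kept lines above and below it (simple linear interpolation)."""
    frame = []
    for i, line in enumerate(field):
        frame.append(line)
        if i + 1 < len(field):
            below = field[i + 1]
            frame.append([(a + b) // 2 for a, b in zip(line, below)])
        else:
            frame.append(line)      # last line: nothing below, so duplicate
    return frame

# One field of a tiny grayscale image (3 lines of 2 pixels each):
field = [[10, 10], [30, 30], [50, 50]]
print(deinterlace_duplicate(field))
print(deinterlace_interpolate(field))   # estimated lines are [20, 20] and [40, 40]
```

The interpolated result ramps smoothly from 10 to 50 instead of stepping in pairs, which is why it looks smoother on a computer monitor.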



What is progressive scan?

Video formats that do not interlace the original image, but generate full frames directly without splitting them into even and odd fields, are called progressive scan. Some video cameras, like the Sony PD-150 DVCAM, can be set to progressive scan.

This results in better images when used on a computer image output, and gives better end results on the Maxedia for a live camera.
Important:

Video files used on the Maxedia system should therefore be deinterlaced first, or be in progressive format.

The movie will look better, and the system will calculate only 25 or 30 frames instead of 50 or 60 fields, which results in better overall system performance.


What is digital Video?

Digital video is digitized analog video. Video consists of frames, and each frame is digitized separately. Each frame consists of pixels; the number of pixels defines the resolution of the video frame, for example 720x576 pixels for a full PAL frame. Each pixel contains information for red, green and blue, and each of the 3 colors is digitized as 8- to 10-bit data. As you see, this is a lot of data to store! If you stored all this raw RGB data for every pixel, you would end up, in this example, with 31 Mbyte/s of data. This is so-called uncompressed video data.
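The 31 Mbyte/s figure can be checked with a few lines of arithmetic, assuming 8 bits (1 byte) per color channel and PAL's 25 frames per second:

```python
width, height = 720, 576        # a full PAL frame
bytes_per_pixel = 3             # 8-bit red + green + blue
fps = 25                        # PAL frame rate

bytes_per_frame = width * height * bytes_per_pixel   # 1_244_160 bytes per frame
bytes_per_second = bytes_per_frame * fps             # 31_104_000 bytes per second

print(bytes_per_second / 1_000_000)   # about 31.1 Mbyte/s of raw RGB data
```

An hour of such uncompressed video would be over 100 GB, which is exactly why compression matters.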

Since data storage is not endless, the amount of data needed to store a video file is very important. Therefore a video file is usually compressed when it is digitized, to save valuable storage. To stream video over the internet, for example, video needs to be compressed to a very high level.
What is compression?

Compressing video is a smart way to remove unnecessary data. An extreme example: if all pixels of a whole video frame were white, there is no need to store ‘white’ for each individual pixel of that frame. In reality it is much more complex than that, which is why there are so many compression formats available. These are called codecs: DV, MPEG-1, MPEG-2, MPEG-4, Cinepak, Indeo, Ligos, DivX, WMV, Real, etc. None of them is ideal, since it is always a trade-off between quality and data storage per second: the more a file is compressed, the more it will in general lose of its original quality. But every year the compression methods get better and better. Some codecs compress individual frames, like those used in AVI and QuickTime files. Other codecs even compare the differences between frames, store only those differences, and compress that data again to make it even smaller, like the MPEG 1, 2 & 4 formats, Windows Media, RealPlayer etc.
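The all-white-frame example is essentially run-length encoding, the simplest form of compression. A toy sketch of the idea (real video codecs are far more sophisticated, but replacing repetition with a count is the same principle):

```python
def rle_encode(pixels):
    """Run-length encode a sequence of pixel values as (value, count) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1] = (p, encoded[-1][1] + 1)   # extend the current run
        else:
            encoded.append((p, 1))                  # start a new run
    return encoded

def rle_decode(encoded):
    """Expand (value, count) pairs back into the original pixel sequence."""
    pixels = []
    for value, count in encoded:
        pixels.extend([value] * count)
    return pixels

# A scanline that is mostly one color compresses very well:
line = ["white"] * 700 + ["black"] * 20
packed = rle_encode(line)
print(packed)                          # [('white', 700), ('black', 20)]
assert rle_decode(packed) == line      # lossless round trip
```

720 pixel values shrink to 2 pairs here; a noisy, detailed scanline would barely shrink at all, which is the quality-versus-storage trade-off in miniature.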

Which format is best depends on the needs. For the web there are codecs like RealPlayer, QuickTime, Flash Video and Windows Media. Other codecs are made to give good quality at a constant data rate: Video CD (VCD), for example, uses MPEG-1 at a constant bit rate, because the mechanics are the same as in a CD player and the decoder needs a constant bit stream. MPEG-2 is used as the standard for DVD video. At this very moment Windows Media Encoder 9 is having very good results for new High Definition video formats; it has recently been used by news reporters in Iraq, who upload compressed Windows Media 9 video files from their laptops over a small satellite dish to broadcast news reports from their difficult location to the TV stations.
The main video formats for the Maxedia are AVI (Windows) and MOV (QuickTime, Macintosh). The MPEG and WMV formats are not used at the moment, since they need a lot more calculation to decode in real time.
What is decoding?

To show compressed video on a screen, the video must first be decompressed, or decoded, to regenerate the original image. This needs a lot of calculation. Computers are getting faster every day and can increasingly do this in software alone; before computers were fast enough, dedicated hardware interfaces were needed to do the job, especially for highly compressed, high-resolution formats like MPEG.


AVI files can be encoded into different formats.

The best video quality on the Maxedia is achieved with the AVI DV codec. This is the same codec that digital DV video cameras use. Although AVI files can be encoded with other codecs, the performance is not as good, because it takes more processing time to decode more heavily compressed files, resulting in fewer output frames per second.
What is DV?

Originally, DV is the codec video cameras use to store the digitized video on the recorder's internal tape. This codec uses low compression (5:1), resulting in very high quality images. DV is good enough that it is used as a compression standard by both consumers and many professionals.

There are 3 tape formats known as DV formats: MiniDV, DVCAM, and DVCPRO (see below for a more detailed description).
Through a FireWire (IEEE 1394) port, the digital data can be streamed to or from the computer without quality loss. The data rate is 3.5 Mbyte/s, or 25 Mbit/s of video data, which is why it is called DV25. This data rate is both fixed and constant: it does not matter whether the video is a high-action sequence or totally black, its data rate will always be 25 Mbit/s. While this can be inefficient, it also makes it easy to predict how much space an amount of video will take up on a hard drive. Here are some useful figures (sizes are approximate):

1 Second = 3.5 MB
1 Minute = 215 MB
4 Minutes, 40 Seconds = 1 GB
1 Hour = 13 GB
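These figures follow directly from the constant data rate. A quick check of the arithmetic using a flat 3.5 Mbyte/s (the figures above are rounded slightly differently, presumably because the real DV stream also carries audio and subcode data, so they do not match to the megabyte):

```python
rate_mb_per_sec = 3.5                        # DV stream, Mbyte per second

per_minute = rate_mb_per_sec * 60            # 210 MB per minute
per_hour = per_minute * 60 / 1000            # 12.6 GB per hour
secs_per_gb = 1000 / rate_mb_per_sec         # ~286 s, about 4 min 46 s per GB

print(per_minute, per_hour, round(secs_per_gb))
```

So a 120 GB drive of the era held roughly nine hours of DV footage, regardless of content.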

Are there more DV Formats?


MiniDV: Also referred to as "Regular DV" "Consumer DV" or just "DV," MiniDV is the most common DV tape format. MiniDV provides the most universal playback compatibility. MiniDV tapes can be played in any DV device (including DVCAM and DVCPRO), while many MiniDV devices can play all three formats. MiniDV, as its name implies, uses only the small DV tapes. Note that while the small DV tapes made by Sony are often marked DVCAM and the tapes made by Panasonic are often marked DVCPRO, it is the device that determines which format is recorded; the tapes are identical. So if a Sony tape that is marked DVCAM is used in a MiniDV camera, the DV data will be recorded as MiniDV.

DVCAM: DVCAM is developed and supported by Sony. DVCAM uses the same DV25 codec as MiniDV, so it has identical picture quality, yet it has a faster tape speed and a wider track pitch. This means that the data is recorded over a larger area, which reduces dropouts (defects in the picture caused by the physical loss of the particles used on the tape to record). This makes DVCAM a much sturdier and more dependable format that has found favor with higher-end productions. Also, a MiniDV deck used in an analog suite cannot perform frame-accurate edits, while DVCAM can. This makes it a good choice for post-production houses that simply want to add DV to their existing set-up, but is of no importance to those editing over FireWire. DVCAM can only be recorded in specific DVCAM decks (made by Sony), yet it can be played back in most MiniDV and DVCPRO decks.

DVCPRO: DVCPRO is Panasonic's entry into the DV format arena. DVCPRO has an even wider track pitch than DVCAM (18 microns) and uses a metal-particle tape (as opposed to MiniDV's and DVCAM's metal-evaporated tape) for even better durability. DVCPRO was aimed mainly at the TV news market, where it has gained some acceptance. Among digital editors it is less used; its adoption was hampered by an initial short-sighted decision not to include FireWire on DVCPRO equipment, although nowadays DVCPRO equipment does have FireWire.

So this means we can record these high-quality images from these broadcast formats onto hard drives (with only 5:1 compression), simply through the FireWire connection to the computer. NLE (Non-Linear Editing) programs, even Windows Movie Maker, can be used to cut and edit the footage for the Maxedia.


These DV-AVI video files are stored as:

- PAL: 720x576 pixels, 25 frames per second, or

- NTSC: 720x480 pixels, 30 frames per second.

(see the description of the PAL and NTSC formats above)


Digital-S from JVC and DVCPRO50 from Panasonic use 2 DV codecs in parallel at 50 Mbit/s to store more data with more color information. These formats are used when chroma keying, better known as blue- or green-screen keying, is needed. For these applications the color information is crucial for a good key of the person in front of the screen, so that the color can be replaced by another image.

These formats don't have FireWire connections, so they cannot be used to capture images to the computer through the standard FireWire connection.


What is SD?

SD stands for Standard Definition video: the PAL 720x576 (576i) and NTSC 720x480 (480i) resolutions, digitized uncompressed. You can compare it with the uncompressed version of the DV format, at about 235 Mbit/s. This data is transmitted between video devices as an SDI signal.


What are HD-formats?

Although the Maxedia does not use the HD formats yet, they are the upcoming future. At the NAB 2004 exhibition the overall trend was HD.

HD stands for the upcoming High Definition video formats.

The standard video formats below HD are called SD (Standard Definition).

HD equipment is still very expensive: an HD recorder costs between $40,000 and $100,000, based on what is available today.

Therefore it is not widely used yet, only in broadcast studios and the better TV series, but it is good to know it exists.


DVI format: Digital Visual Interface. DVI, launched in 1999, is a purely digital interface that transmits visual data from a source device to a display. With DVI, no digital-to-analog or analog-to-digital conversion needs to take place before, during or after transmission. This streamlined, straightforward process makes DVI superior to analog interface connectors such as VGA.
HDCP (High-bandwidth Digital Content Protection): No more piracy. With the advent of DVI came HDCP, a standard "key" encoded into the DVI signal to prevent video data from being pirated.
HDMI (High Definition Multimedia Interface): Now enters HDMI, a new connector based on the DVI-HDCP model. HDMI with HDCP addresses the concerns of the Hollywood film community while giving consumers the ability to view high-quality content on all HDTV displays and projectors. It also offers several features that make it well suited to the consumer electronics market. Because of these advantages, HDMI is expected to become the standard digital interface for consumer home theater devices, computer systems and DTVs. HDMI is already featured on Silicon Image's new PanelLink Cinema chipset, providing HDMI with HDCP for a high-definition-content-ready solution, and consumers will likely see HDMI connectors on new electronic devices for many years to come.

Summarized:
Input:

For Maxedia the best video quality is achieved by using:

- Progressive or deinterlaced video.

- DV-AVI files preferably, or AVI files encoded with low compression. QuickTime is an alternative, but not native to the Windows environment.


Output:

The output image is distributed through DVI, which gives the highest possible quality for the digital images produced by the Maxedia.

