- Formats and more
General Information on Formats, Codecs, Recording Media and more
Below we have summarised some general information about audio and video. The overview is not exhaustive; we will be pleased to answer your questions, so do not hesitate to contact us.
HD, 2k, 4k, and more
Get more info on formats here
XDCAM, AVC, H264, H265, HEVC and more
Get more info on codecs here
Prof. Disc, Stick, SDHC and more
Get more info on recording media & storage here
General Information and more
Composite video is the format of an analog television (picture-only) signal before it is combined with a sound signal and modulated onto an RF carrier. In contrast to component video (YPbPr), it contains all required video information, including colour, in a single line-level signal. Like component video, composite-video cables do not carry audio and are often paired with audio cables. Composite video is often designated by the initialism CVBS, meaning "Composite Video, Blanking, and Sync". It is usually in standard formats such as NTSC, PAL, and SECAM. (Source: www.princeton.edu)
High Definition Video
High Definition Video is video of higher resolution and quality than standard-definition video. While there is no standardized meaning for high-definition, generally any video image with more than 480 horizontal lines (North America) or 576 lines (Europe) is considered high-definition. 720 scan lines is generally the minimum, even though the majority of systems greatly exceed that. Images of standard resolution captured at rates faster than normal (60 frames/second in North America, 50 fps in Europe) by a high-speed camera may be considered high-definition in some contexts. (Source: Wikipedia)
Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times. This enhances motion perception for the viewer and reduces flicker by taking advantage of the phi phenomenon.
This effectively doubles the time resolution (also called temporal resolution) compared to non-interlaced footage (for frame rates equal to field rates). Interlaced signals require a display that is natively capable of showing the individual fields in sequential order. Only CRT displays and ALiS plasma displays are capable of displaying interlaced signals, due to their electronic scanning and lack of an apparent fixed resolution. (Source: Wikipedia)
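The field structure described above can be sketched in a few lines of code. This is a minimal illustration, not a real deinterlacer: a frame is modelled simply as a list of scan lines, split into a top field (even-numbered lines) and a bottom field (odd-numbered lines), then recombined by "weaving" the fields back together. All names here are hypothetical.

```python
def split_fields(frame):
    """Split a progressive frame into its two interlaced fields.

    The top field holds the even-numbered scan lines (0, 2, 4, ...),
    the bottom field the odd-numbered ones (1, 3, 5, ...). In a real
    interlaced signal these two fields are captured at different times.
    """
    return frame[0::2], frame[1::2]

def weave(top, bottom):
    """Recombine two fields into a single full-height frame."""
    frame = []
    for t, b in zip(top, bottom):
        frame.append(t)  # even line from the top field
        frame.append(b)  # odd line from the bottom field
    return frame

# Toy 6-line frame: splitting and weaving round-trips the scan lines.
frame = [f"line{i}" for i in range(6)]
top, bottom = split_fields(frame)
assert weave(top, bottom) == frame
```

Note that because the two fields of a real interlaced frame are captured at different instants, naive weaving of moving content produces the familiar "combing" artifacts; real deinterlacers interpolate between fields instead.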
Progressive scanning (alternatively referred to as noninterlaced scanning) is a way of displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to interlaced video used in traditional analog television systems where only the odd lines, then the even lines of each frame (each image called a video field) are drawn alternately, so that only half the number of actual image frames are used to produce video. (Source: Wikipedia)
Pulse Code Modulation (PCM)
Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, Compact Discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled regularly at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps. (Source: Wikipedia)
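The two steps named above, uniform sampling and quantization to the nearest digital step, can be sketched as follows. This is an illustrative toy encoder, not a production codec; the sample rate, bit depth, and the 1 kHz test tone are arbitrary assumptions.

```python
import math

SAMPLE_RATE = 8000   # samples per second (assumed for illustration)
BITS = 8             # quantization depth: 2**8 = 256 levels (assumed)

def pcm_encode(signal, sample_rate=SAMPLE_RATE, bits=BITS, duration=0.001):
    """Sample an 'analog' signal (a function of time returning amplitudes
    in [-1.0, 1.0]) at uniform intervals, and quantize each sample to the
    nearest of 2**bits levels."""
    levels = 2 ** bits
    samples = []
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate            # uniform sampling instant
        x = signal(t)                  # analog amplitude at time t
        q = round((x + 1.0) / 2.0 * (levels - 1))  # map to 0..levels-1
        samples.append(q)
    return samples

# A 1 kHz sine tone standing in for the analog source.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
codes = pcm_encode(tone)   # 8 integer codes, each in 0..255
```

Real PCM systems (e.g. CD audio at 44.1 kHz / 16 bits) follow exactly this scheme, just with higher sample rates and bit depths, and store the codes in signed form.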
An audio signal is a representation of sound, typically as an electrical voltage. Audio signals have frequencies in the audio frequency range of roughly 20 to 20,000 Hz (the limits of human hearing). Audio signals may be synthesized directly, or may originate at a transducer such as a microphone, musical instrument pickup, phonograph cartridge, or tape head. Loudspeakers or headphones convert an electrical audio signal into sound. Digital representations of audio signals exist in a variety of formats. (Source: Wikipedia)
Ultra High Definition (UHD)
Video technology was first developed for cathode ray tube (CRT) television systems, but several new technologies for video display devices have since been invented. Charles Ginsburg led an Ampex research team that developed one of the first practical video tape recorders (VTRs). In 1951, the first video tape recorder captured live images from television cameras by converting the camera's electrical impulses and saving the information onto magnetic videotape.
Video recorders were sold for $50,000 in 1956, and videotapes cost $300 per one-hour reel. However, prices gradually dropped over the years; in 1971, Sony began selling videocassette recorder (VCR) decks and tapes to the public.
The use of digital techniques in video created digital video, which allowed higher quality and, eventually, much lower cost than earlier analog technology. After the invention of the DVD in 1997 and Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. Advances in computer technology allowed even inexpensive personal computers to capture, store, edit and transmit digital video, further reducing the cost of video production, allowing program-makers and broadcasters to move to tapeless production.
The advent of digital broadcasting, and the subsequent digital television transition, is relegating analog video to the status of a legacy technology in most parts of the world. As of 2015, with the increasing use of high-resolution video cameras with improved dynamic range and color gamuts, and of high-dynamic-range digital intermediate formats with improved color depth, modern digital video technology is slowly converging with digital film technology. (Source: Wikipedia)