WHEN you’re talking compression, you can’t help talking Tektronix. That’s because this company makes the test equipment needed to verify the various MPEG standards, and we reckon that makes them an authoritative place to start. This article examines the history and structure of MPEG and the evolution of the various MPEG standards.
What is MPEG?
MPEG is the Moving Picture Experts Group, a committee that comes under the joint control of the International Organisation for Standardisation (ISO) and the International Electrotechnical Commission (IEC). IEC handles international standardisation for electrical and electronic technologies; ISO handles virtually everything else. At the start of the information technology age ISO and IEC formed a joint technical committee (JTC1) to address IT issues.
JTC1 has a number of working groups including JPEG (Joint Photographic Experts Group) and WG11, which is MPEG. The committee was formed in 1988 under the leadership of MPEG convener Dr. Leonardo Chiariglione of Italy. Attendance at MPEG meetings, normally held four times each year, has grown from about 15 delegates in 1988 to some 300 in 2003.
JTC1 has established an enviable track record of generating standards that achieve widespread adoption: MPEG-1, MPEG-2, and the MP3 audio compression standard (MPEG-1 Audio, Layer 3). This reputation was somewhat tarnished by MPEG-4, not because of deficiencies in the standard, but as a result of the long delay in publishing licensing terms and the strong adverse reaction to the first terms that were eventually published in early 2002.
It should be noted that MPEG itself has no role in licensing. As a committee under ISO and IEC, it requires that technology included in its standards be licensable under “reasonable and non-discriminatory terms” but there is no accepted definition of “reasonable.” Instead licensing is the responsibility of the holders of the relevant patents and typically this means many organisations throughout the world that have contributed to research and development and wish to see some recompense.
For MPEG-2, the patent holders grouped together and formed MPEG LA (MPEG Licensing Authority). All the essential patents are certified by this group and are licensed as a block to any organisation wishing to implement the standards. This arrangement worked well for MPEG-2 but, as noted above, greater difficulties are being experienced with MPEG-4, and many hold the delays in publishing licensing terms responsible for the current lack of commercial success of MPEG-4.
This, of course, may change. The MPEG-4 Industry Forum is working hard to find solutions acceptable to patent holders and potential users and revised proposals released in mid-2002 are likely to be more readily accepted.
MPEG-1
The MPEG-1 system, ISO/IEC 11172, is the first international compression standard for motion imagery and was developed between 1988 and 1992. It uses DCT transforms, coefficient quantisation, and variable length coding in a similar manner to JPEG, but also includes motion compensation for temporal compression.
It has three parts: ISO/IEC 11172-1, the multiplex structure; ISO/IEC 11172-2, video coding; and ISO/IEC 11172-3, audio coding.
In its day MPEG-1 represented a remarkable technical achievement. It was designed to compress image streams with SIF picture size, 352×288 (25 fps PAL) or 352×240 (30 fps NTSC), and associated audio, to approximately 1.5 Mbps total compressed data rate. This rate is suitable for transport over T1 data circuits and for replay from CD-ROM, and corresponds approximately to the resolution of a consumer video recorder. A measure of this achievement may be seen by comparing the numbers for an audio CD. A normal audio CD, carrying two-channel audio at 16-bit resolution with a sampling rate of 44.1 kHz, has a data rate of approximately 1.4 Mbps. MPEG-1 succeeds in compressing video and audio so that both may be transmitted within essentially the same data rate!
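As a rough back-of-the-envelope check of these figures, the short Python sketch below compares the uncompressed data rates of SIF video and CD audio with the 1.5 Mbps MPEG-1 target. It uses only the frame sizes and sample rates quoted above and assumes 8-bit 4:2:0 video and 16-bit stereo PCM; it is illustrative arithmetic, nothing more.

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only;
# assumes 8-bit 4:2:0 video and 16-bit stereo PCM).

def mbps(bits_per_second):
    return bits_per_second / 1_000_000

# Uncompressed SIF video: 352 x 288 luma at 25 fps, with quarter-size 4:2:0 chroma
luma_samples = 352 * 288
chroma_samples = 2 * (176 * 144)
video_rate = (luma_samples + chroma_samples) * 8 * 25   # bits per second

# Uncompressed CD audio: 2 channels x 16 bits x 44.1 kHz
audio_rate = 2 * 16 * 44_100

print(f"SIF video, uncompressed: {mbps(video_rate):.1f} Mbps")   # ~30 Mbps
print(f"CD audio, uncompressed:  {mbps(audio_rate):.2f} Mbps")   # ~1.41 Mbps
print("MPEG-1 target for video plus audio: ~1.5 Mbps")
```

In other words, MPEG-1 squeezes roughly 30 Mbps of picture data, plus the audio, into about the data rate of the audio alone.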
The CIF (common intermediate format) is a compromise between the European and American SIF (source input format) formats: it uses the spatial resolution of 625 SIF (352×288) and the temporal resolution of 525 SIF (29.97 fps). This is the basis for video conferencing. MPEG-1 was designed for CIF images and has no tools to handle interlaced images, so it had little obvious impact in the world of broadcast television.
Before leaving MPEG-1, it’s important to note what is actually included in the standard and how interoperability is achieved. The standard defines a tool set, the syntax of the bit stream, and the operation of the decoder. It does not define the operation of the encoder – any device that produces a syntactically valid bit stream that can be decoded by a compliant decoder is a valid MPEG encoder. Nor does it define picture quality or encoding quality. This allows encoding technology to evolve without change to the standard, and without rendering existing decoders obsolete. This model is used throughout the MPEG standards. The success of this strategy is obvious; although MPEG-2 is used for video, MPEG-1 Layer 2 audio is still in use as the principal audio compression system in DVB transmission systems today.
MPEG-2
MPEG-1 was frozen (i.e., subsequent changes were allowed to be editorial only) in 1991. In the same year the MPEG-2 process was started, and MPEG-2 eventually became a standard in 1994. The initial goals were simple: there was a need for a standard that would accommodate broadcast-quality video. This required the coding of “full size” standard definition images (704×480 at 29.97 fps, and 704×576 at 25 fps), and the ability to code interlaced video efficiently. In many ways, MPEG-2 represents the “coming of age” of MPEG.
The greater flexibility of MPEG-2, combined with the increased availability of large-scale integrated circuits, meant that MPEG-2 could be used in a vast number of applications. The success of MPEG-2 is best highlighted by the demise of MPEG-3, intended for high-definition television. MPEG-3 was soon abandoned when it became clear that MPEG-2 could accommodate this application with ease.
MPEG-2 is, of course, the basis for both the ATSC and DVB broadcast standards, and the compression system used by DVD. MPEG-2 was also permitted to be a moving target. By the use of profiles and levels, discussed below, it was possible to complete the standard for one application, but then to move on to accommodate more demanding applications in an evolutionary manner. Work on extending MPEG-2 continues.
MPEG-2 is documented as ISO/IEC 13818, currently in 10 parts. The most important parts of this standard are:
ISO/IEC 13818-1 Systems (transport and program streams), PES, the T-STD buffer model and the basic PSI tables: CAT, PAT, PMT and NIT.
ISO/IEC 13818-2 video coding.
ISO/IEC 13818-3 audio coding.
ISO/IEC 13818-4 MPEG test and conformance.
ISO/IEC 13818-6 data broadcast and DSMCC.
One of the major achievements of MPEG-2, the transport stream defined in 13818-1, is described in Section 8. The flexibility and robustness of this design have permitted it to be used for many applications, including the transport of MPEG-4 and MPEG-7 data.
Note: DVB and ATSC transport streams carry video and audio PES within “program” groupings, which are entirely different from “program streams” (the latter are used on DVD and CD). MPEG transport streams are normally constant bit rate, whereas program streams are normally variable bit rate.
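To make the transport-stream idea a little more concrete, here is a minimal Python sketch of the fixed 188-byte transport packet header: a sync byte (0x47) followed by a 13-bit PID that identifies the elementary stream or table being carried. It is a teaching illustration only, not a demultiplexer, and the capture file name is hypothetical.

```python
from collections import Counter

# Each MPEG-2 transport stream packet is a fixed 188 bytes: a sync byte (0x47),
# a 13-bit PID identifying the elementary stream or PSI table it carries, and a
# 4-bit continuity counter.  Illustrative sketch only, not a demultiplexer.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid transport stream packet")
    return {
        "payload_unit_start": bool(packet[1] & 0x40),  # start of a PES packet or PSI section
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],  # 13-bit packet identifier
        "continuity_counter": packet[3] & 0x0F,        # per-PID counter, reveals lost packets
    }

# Example: count packets per PID in a captured stream ("capture.ts" is a hypothetical file)
pids = Counter()
with open("capture.ts", "rb") as f:
    while (packet := f.read(TS_PACKET_SIZE)):
        if len(packet) == TS_PACKET_SIZE:
            pids[parse_ts_header(packet)["pid"]] += 1
print(pids.most_common(5))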
Profiles and Levels in MPEG-2
With certain minor exceptions, MPEG-1 was designed for one task: the coding of fixed-size pictures and associated audio at a known bit rate of 1.5 Mbps. The MPEG-1 tools and syntax can be, and have been, used for other purposes, but such use is outside the standard and requires proprietary encoders and decoders. There is only one type of decoder compliant to the MPEG-1 standard.
At the outset, there was a similar goal for MPEG-2. It was intended for coding of broadcast pictures and sound, nominally the 525/60 and 625/50 interlaced television systems. However, as the design work progressed, it was apparent that the tools being developed were capable of handling many picture sizes and a wide range of bit rates. In addition, more complex tools were developed for scalable coding systems. This meant that in practice there could not be a single MPEG-2 decoder. If a compliant decoder had to be capable of handling high-speed bit streams encoded using all possible tools, it would no longer be an economical decoder for mainstream applications. As a simple example, a device capable of decoding high-definition signals at, say, 20 Mbps would be substantially more expensive than one limited to standard-definition signals at around 5 Mbps. It would be a poor standard that required the use of an expensive device for the simple application.
MPEG devised a two-dimensional structure of profiles and levels for classifying bit streams and decoders. Profiles define the tools that may be used. For example, bidirectional encoding (B-frames) may be used in the main profile, but not in simple profile. Levels relate just to scale.
A high level decoder must be capable of receiving a faster bit stream, and must have more decoder buffer and larger frame stores than a main level decoder. However, main profile at high level (MP@HL) and main profile at main level (MP@ML) use exactly the same encoding/decoding tools and syntax elements.
The simple profile does not support bidirectional coding, so only I- and P-pictures will be output. This reduces the coding and decoding delay and allows simpler hardware. The simple profile has only been defined at main level.
The Main Profile is designed for a large proportion of uses. The low level uses a low-resolution input having only 352 pixels per line. The majority of broadcast applications will require the MP@ML subset of MPEG, which supports SDTV (standard definition TV).
The high-1440 level is a high definition scheme that doubles the definition compared to the main level. The high level not only doubles the resolution but maintains that resolution with 16:9 format by increasing the number of horizontal samples from 1440 to 1920.
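The full table of profile and level constraints is large; the sketch below lists only the commonly quoted Main Profile level bounds (picture size, frame rate and bit rate) and picks the lowest level able to carry a given stream. The figures are indicative, and the function name is our own; the normative values are those in ISO/IEC 13818-2.

```python
# Commonly quoted upper bounds for the four MPEG-2 levels (Main Profile figures).
# Indicative only; the normative constraints are defined in ISO/IEC 13818-2.
LEVELS = {
    "Low":       {"max_size": (352, 288),   "max_fps": 30, "max_mbps": 4},
    "Main":      {"max_size": (720, 576),   "max_fps": 30, "max_mbps": 15},
    "High-1440": {"max_size": (1440, 1152), "max_fps": 60, "max_mbps": 60},
    "High":      {"max_size": (1920, 1152), "max_fps": 60, "max_mbps": 80},
}

def lowest_level(width, height, fps, mbps):
    """Return the lowest level whose limits accommodate the given stream."""
    for name, lim in LEVELS.items():
        max_w, max_h = lim["max_size"]
        if width <= max_w and height <= max_h and fps <= lim["max_fps"] and mbps <= lim["max_mbps"]:
            return name
    return None

print(lowest_level(720, 576, 25, 8))      # "Main"  - typical SDTV broadcast (MP@ML)
print(lowest_level(1920, 1080, 25, 19))   # "High"  - typical HDTV broadcast (MP@HL)
```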
In compression systems using spatial transforms and requantising, it is possible to produce scalable signals. A scalable process is one in which the input results in a main signal and a “helper” signal. The main signal can be decoded alone to give a picture of a certain quality, but if the information from the helper signal is added, some aspect of the quality can be improved.
For example, a conventional MPEG coder, by heavily requantising coefficients, produces a picture with only moderate signal-to-noise ratio. If, however, that picture is locally decoded and subtracted pixel-by-pixel from the original, a quantising-noise picture results. This picture can be compressed and transmitted as the helper signal. A simple decoder decodes only the main, noisy bit stream, but a more complex decoder can decode both bit streams and combine them to produce a low-noise picture. This is the principle of SNR (signal-to-noise ratio) scalability.
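A minimal numerical sketch of this idea, using coarse requantisation as the main layer and the requantisation error as the helper layer. It is purely illustrative; a real SNR-scalable MPEG-2 coder works on motion-compensated 8×8 DCT blocks with entropy coding, and the quantiser steps chosen here are arbitrary.

```python
import numpy as np

def encode_snr_scalable(coeffs, coarse_step=16, fine_step=2):
    main = np.round(coeffs / coarse_step)        # heavily requantised main layer
    residual = coeffs - main * coarse_step       # the quantising-noise "picture"
    helper = np.round(residual / fine_step)      # helper layer, finer quantiser
    return main, helper

def decode(main, helper=None, coarse_step=16, fine_step=2):
    picture = main * coarse_step                 # simple decoder: main layer only
    if helper is not None:                       # enhanced decoder: add the helper
        picture = picture + helper * fine_step
    return picture

coeffs = np.random.randn(8, 8) * 50              # stand-in for one block of DCT coefficients
main, helper = encode_snr_scalable(coeffs)
print("main-only mean error:", np.abs(coeffs - decode(main)).mean())
print("with helper:         ", np.abs(coeffs - decode(main, helper)).mean())
```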
As an alternative, coding only the lower spatial frequencies in an HDTV picture can produce a main bit stream that an SDTV receiver can decode. If the lower-definition picture is locally decoded and subtracted from the original picture, a definition-enhancing picture results. This picture can be coded into a helper signal.
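The same main-plus-helper principle, sketched for the spatial case. Again this is illustrative only: real MPEG-2 spatial scalability codes both layers with DCT and motion compensation rather than sending a raw residual, and the 2×2 averaging filter used here is an assumption for the sake of the sketch.

```python
import numpy as np

def downsample(picture):                      # 2x2 average -> the base (SD) layer
    h, w = picture.shape
    return picture.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(picture):                        # nearest-neighbour back to HD size
    return picture.repeat(2, axis=0).repeat(2, axis=1)

hd = np.random.rand(1152, 1440)               # stand-in for an HD luminance picture
base = downsample(hd)                         # what an SD receiver decodes on its own
helper = hd - upsample(base)                  # definition-enhancing residual
hd_reconstructed = upsample(base) + helper    # what an HD receiver reconstructs
print(np.allclose(hd, hd_reconstructed))      # True here; in practice the helper is also coded lossily
```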
A suitable decoder could combine the main and helper signals to recreate the HDTV picture. This is the principle of spatial scalability. The high profile supports SNR and spatial scalability as well as allowing the option of 4:2:2 sampling. The 4:2:2 profile has been developed for improved compatibility with digital production equipment. This profile allows 4:2:2 operation without requiring the additional complexity of the high profile. For example, an HP@ML decoder must support SNR scalability, which is not a requirement for production.
The 4:2:2 profile has the same freedom of GOP structure as other profiles, but in practice it is commonly used with short GOPs, making editing easier. 4:2:2 operation requires a higher bit rate than 4:2:0, and the use of short GOPs requires an even higher bit rate for a given quality. The concept of profiles and levels is another development of MPEG-2 that has proved to be robust and extensible; MPEG-4 uses a much more complex array of profiles and levels, to be discussed next month along with MPEG-7 and MPEG-21.
Acknowledgment: Tektronix Inc.
*Les Simmonds is an independent electronic security consultant and video testing authority. Email: lessimmo@bigpond.net.au. Part 2 next month.
