PolyU Institutional Repository

Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/246

Title: Low-complexity and high-quality frame-skipping transcoder for continuous presence multipoint video conferencing
Authors: Fung, Kai-tat
Chan, Yui-lam
Siu, Wan-chi
Subjects: Compressed-domain processing
Frame skipping
Video compression
Video conferencing
Video transcoding
Issue Date: Feb-2004
Publisher: IEEE
Citation: IEEE Transactions on multimedia, Feb. 2004, v. 6, no. 1, p. 31-46.
Abstract: This paper presents a new frame-skipping transcoding approach for video combiners in multipoint video conferencing. Transcoding is the process of converting a previously compressed video bitstream into a lower-bitrate bitstream. A high transcoding ratio may result in unacceptable picture quality when the incoming video bitstream is transcoded at the full frame rate. Frame skipping is often used as an efficient scheme to allocate more bits to representative frames, so that an acceptable quality for each frame can be maintained. However, the skipped frame must be decompressed completely, since it serves as the reference frame for reconstructing the nonskipped frame. The newly quantized DCT coefficients of the prediction error must then be recomputed for the nonskipped frame with reference to the previous nonskipped frame; this creates undesirable complexity in real-time applications and introduces re-encoding error. A new frame-skipping transcoding architecture with improved picture quality and reduced complexity is proposed. The proposed architecture operates mainly in the discrete cosine transform (DCT) domain to achieve a low-complexity transcoder. It is observed that re-encoding error is avoided at the frame-skipping transcoder when the strategy of direct summation of DCT coefficients is employed. By using the proposed frame-skipping transcoder and dynamically allocating more frames to the active participants in video combining, we achieve a more uniform peak signal-to-noise ratio (PSNR) across the subsequences, and the video quality of the active subsequences is improved significantly.
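The key observation in the abstract is that the DCT is a linear transform, so prediction-error coefficients can be summed directly in the DCT domain instead of being inverse-transformed, added in the pixel domain, and re-transformed (and re-quantized). The sketch below is only an illustration of that linearity argument under a zero-motion assumption, not the paper's actual transcoder architecture; the 8×8 blocks and residual names are hypothetical.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (as used for 8x8 video blocks)."""
    m = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            m[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0, :] *= 1.0 / np.sqrt(n)
    m[1:, :] *= np.sqrt(2.0 / n)
    return m

C = dct_matrix()

def dct2(block):
    return C @ block @ C.T

def idct2(coef):
    return C.T @ coef @ C

rng = np.random.default_rng(0)
e1 = rng.standard_normal((8, 8))  # residual: skipped frame vs. its reference
e2 = rng.standard_normal((8, 8))  # residual: nonskipped frame vs. skipped frame

# Pixel-domain path: decode both residuals, add them, transform again
pixel_path = dct2(idct2(dct2(e1)) + idct2(dct2(e2)))

# DCT-domain path: sum the coefficient blocks directly (no decode/re-encode)
dct_path = dct2(e1) + dct2(e2)

# Linearity of the DCT makes the two paths identical, so the re-encoding
# round trip (and its quantization error) can be skipped entirely.
assert np.allclose(pixel_path, dct_path)
```

In a real transcoder the summed coefficients would still be quantized once for the output bitstream, but the intermediate re-quantization step, and the error it introduces, is what direct summation avoids.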
Rights: © 2004 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Type: Journal/Magazine Article
URI: http://hdl.handle.net/10397/246
ISSN: 1520-9210
Appears in Collections:EIE Journal/Magazine Articles

Files in This Item:

File: continuus-presence_04.pdf
Size: 734.09 kB
Format: Adobe PDF





All items in the PolyU Institutional Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
No item in the PolyU IR may be reproduced for commercial or resale purposes.


© Pao Yue-kong Library, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Powered by DSpace (Version 1.5.2)  © MIT and HP