Title: Efficient interframe transform coding using temporal context
Authors: Chan, Yui-lam
Subjects: Discrete cosine transforms
Issue Date: 1996
Citation: 1996 IEEE International Symposium on Circuits and Systems (ISCAS 96): Circuits and Systems Connecting the World, May 12-15, Atlanta, v. 2, p. 786-789.
Abstract: Three-dimensional (3D) transform coding can reduce the interframe redundancy among a number of consecutive frames, whereas motion compensation can reduce the redundancy between at most two frames. The former is very efficient when the correlation between interframe pixels is high; however, its performance degrades for complex scenes with a large amount of motion. This paper presents a three-dimensional discrete cosine transform (3D-DCT) coding scheme with variable temporal lengths, based on the local temporal activity. Two scene change detectors are used to detect the local temporal activity. The idea is to keep the motion activity within each block (of variable temporal length) very low, so that the efficiency of the 3D-DCT coding is increased. Extensive computer simulations show that the proposed 3D-DCT coding offers a substantial improvement over conventional fixed-length 3D-DCT coding. Notably, the proposed algorithm also outperforms MPEG coding.
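The variable-temporal-length idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a simple frame-difference MSE threshold stands in for the paper's two scene change detectors, frames are greedily grouped so each group stays within one scene, and an orthonormal separable 3D-DCT is applied to each group. The function names, the threshold value, and the maximum group length are all assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def dct3(block):
    # Separable 3D DCT applied along (t, y, x) of a (t, h, w) block.
    t, h, w = block.shape
    out = np.tensordot(dct_matrix(t), block, axes=(1, 0))                 # temporal axis
    out = np.tensordot(dct_matrix(h), out, axes=(1, 1)).transpose(1, 0, 2)  # vertical axis
    out = np.tensordot(dct_matrix(w), out, axes=(1, 2)).transpose(1, 2, 0)  # horizontal axis
    return out

def split_groups(frames, thresh=500.0, max_len=8):
    # Greedy grouping: start a new temporal group at a detected scene
    # change, or when the group reaches max_len frames. The MSE threshold
    # is a hypothetical tuning value, not taken from the paper.
    groups, start = [], 0
    for i in range(1, len(frames)):
        mse = np.mean((frames[i].astype(float) - frames[i - 1].astype(float)) ** 2)
        if mse > thresh or i - start >= max_len:
            groups.append(frames[start:i])
            start = i
    groups.append(frames[start:])
    return groups

# Usage: transform each variable-length group independently.
def encode(frames):
    return [dct3(np.stack(g)) for g in split_groups(frames)]
```

Because each group is cut at a scene change, the pixels inside a group are highly correlated along time, so the temporal DCT compacts most of the energy into a few low-frequency coefficients; a fixed-length 3D-DCT that straddles a scene change would spread energy across many temporal frequencies instead.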
Description: DOI: 10.1109/ISCAS.1996.541843
Rights: © 1996 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
Type: Conference Paper
Appears in Collections: EE Conference Papers & Presentations
All items in the PolyU Institutional Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
No item in the PolyU IR may be reproduced for commercial or resale purposes.