Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/3441
Title: Terminology extraction using contextual information
Authors: Ji, Luning
Subjects: Hong Kong Polytechnic University -- Dissertations
Chinese language -- Terms and phrases -- Data processing
Natural language processing (Computer science)
Chinese language -- Data processing
Issue Date: 2007
Publisher: The Hong Kong Polytechnic University
Abstract: This work investigates different algorithms for automatic terminology extraction. The investigation considers two characteristics of terminology, unithood and termhood, corresponding to two steps in terminology extraction: term extraction and terminology verification. In the first step, term extraction, two statistics-based measures capturing internal and contextual relationships are used to estimate the soundness of an extracted string pattern as a valid term. In the second step, terminology verification, window-based contextual information within a logical sentence is used. Two window-based approaches, based respectively on domain knowledge and on the syntax of the contextual information, are proposed. After evaluating the merits and problems of each approach, a hybrid approach is designed that combines syntactic information and domain-specific knowledge to decide whether each extracted candidate term is valid terminology. Furthermore, a component-based composition algorithm is proposed to help verify the extracted terms as valid terminology. Experiments show that the hybrid approach achieves a significant improvement with the best F-measure, maintaining both good precision and good recall. Owing to the special nature of Chinese, this work also investigates in detail the effect of word segmentation on terminology extraction by comparing two preprocessing models: a character-based model and a word-based model. Limitations of segmentation and some feasible suggestions for dealing with them are also provided. Finally, this work investigates methods to construct a core lexicon for a specific domain from an existing domain lexicon. The core lexicon contains the most fundamental terms used in a domain, from which other terms in the domain can be constructed. Three approaches considering four characteristics of a core lexicon are proposed and implemented.
Evaluations show that the automatically extracted core lexicon achieves good coverage of the domain lexicon while remaining minimal, with no redundant terms. The use of a core lexicon can reduce program runtime and memory usage in real applications.
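The two-step pipeline described in the abstract (statistical unithood scoring followed by window-based termhood verification) can be sketched in simplified form. This is an illustrative example only: the thesis's actual statistical measures and hybrid syntactic/domain method are not specified here, so pointwise mutual information stands in for the internal-association measure, and a simple domain-marker overlap within context windows stands in for the verification step. The function names, the `domain_markers` set, and the sample data are all hypothetical.

```python
import math
from collections import Counter

def extract_candidates(tokens, freq_threshold=2):
    """Step 1 (term extraction): score bigram candidates by pointwise
    mutual information, an illustrative internal-association (unithood)
    measure -- the thesis's actual statistics may differ."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total_uni = sum(unigrams.values())
    total_bi = sum(bigrams.values())
    scores = {}
    for (w1, w2), f in bigrams.items():
        if f < freq_threshold:
            continue  # discard rare patterns before scoring
        p_xy = f / total_bi
        p_x = unigrams[w1] / total_uni
        p_y = unigrams[w2] / total_uni
        scores[(w1, w2)] = math.log2(p_xy / (p_x * p_y))
    return scores

def verify_termhood(contexts, domain_markers):
    """Step 2 (terminology verification): fraction of a candidate's
    sentence-level context windows containing a domain-marker word --
    a simplified stand-in for the hybrid syntactic/domain approach."""
    hits = sum(1 for ctx in contexts if domain_markers & set(ctx))
    return hits / len(contexts) if contexts else 0.0

# Toy corpus: "machine learning" recurs and should score as a unit.
tokens = ("machine learning improves machine learning models "
          "and machine learning systems").split()
unithood = extract_candidates(tokens)

# Hypothetical context windows around the candidate "machine learning".
contexts = [["models", "training", "data"], ["systems", "hardware"]]
termhood = verify_termhood(contexts, {"training", "data", "models"})
```

In this sketch a candidate would be accepted as terminology when both its unithood score and its termhood score clear thresholds tuned on held-out data; the thesis additionally applies a component-based composition algorithm, which is not modeled here.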
Description: xii, 146 leaves : ill. ; 30 cm.
PolyU Library Call No.: [THS] LG51 .H577M COMP 2007 Ji
Rights: All rights reserved.
Type: Thesis
URI: http://hdl.handle.net/10397/3441
Appears in Collections: COMP Theses
PolyU Electronic Theses

Files in This Item:
File                 Description                     Size     Format
b21459538_link.htm   For PolyU Users                 162 B    HTML
b21459538_ir.pdf     For All Users (Non-printable)   1.53 MB  Adobe PDF


All items in the PolyU Institutional Repository are protected by copyright, with all rights reserved, unless otherwise indicated. No item in the PolyU IR may be reproduced for commercial or resale purposes.