ASME Press Select Proceedings

International Conference on Information Technology and Computer Science, 3rd (ITCS 2011)

Editors
V. E. Muhin, National Technical University of Ukraine
W. B. Hu, Wuhan University
ISBN: 9780791859742
No. of pages: 656
Publisher: ASME Press
Publication date: 2011

The size of most statistical language models trained on large-scale corpora now exceeds the storage capacity of many handheld devices. This paper proposes a language model compression method that combines domain compression with importance pruning, using a unit retained rate to control the size of the language model. We use perplexity to evaluate the performance of the language model produced by the new compression method. The experimental results show that the new compression method adapts well to the needs of handheld devices.
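The chapter itself is not reproduced here, so the sketch below is only an illustrative reading of the abstract, not the authors' implementation: it prunes a bigram count model by keeping the highest-count entries (raw counts standing in for an importance score), exposes a retain_rate parameter in the spirit of the unit retained rate, and reports perplexity before and after pruning. The function names (train_bigram, prune_bigrams, perplexity) and the back-off scheme are assumptions made for illustration.

```python
import math
from collections import Counter


def train_bigram(tokens):
    """Count unigrams and bigrams from a token list."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams


def prune_bigrams(bigrams, retain_rate):
    """Keep only the top fraction of bigrams; raw counts stand in for an
    importance score, and retain_rate plays the role of the paper's
    'unit retained rate' (an assumption, not the paper's exact criterion)."""
    n_keep = max(1, int(len(bigrams) * retain_rate))
    return dict(bigrams.most_common(n_keep))


def perplexity(tokens, unigrams, bigrams, alpha=0.4):
    """Perplexity over a token sequence; pruned bigrams back off to an
    add-one-smoothed unigram probability with a fixed back-off weight."""
    total = sum(unigrams.values())
    vocab = len(unigrams)
    log_prob, n = 0.0, 0
    for prev, word in zip(tokens, tokens[1:]):
        if (prev, word) in bigrams:
            p = bigrams[(prev, word)] / unigrams[prev]
        else:
            p = alpha * (unigrams.get(word, 0) + 1) / (total + vocab)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / n)


if __name__ == "__main__":
    text = ("the cat sat on the mat the dog sat on the rug "
            "the cat saw the dog on the mat").split()
    unigrams, bigrams = train_bigram(text)
    for rate in (1.0, 0.5, 0.2):
        pruned = prune_bigrams(bigrams, rate)
        print(f"retain rate {rate:.1f}: {len(pruned)} bigrams kept, "
              f"perplexity {perplexity(text, unigrams, pruned):.2f}")
```

In a setup like this, lowering the retain rate shrinks the n-gram table while perplexity degrades gradually, which is the size-versus-quality trade-off the abstract evaluates for handheld devices.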
