SRILM Manual Pages
Papers and Tutorials
Novice users should consult the
following papers and tutorials first, where applicable.
A. Stolcke,
SRILM - An Extensible Language Modeling Toolkit,
in Proc. Intl. Conf. Spoken Language Processing, Denver, Colorado, September 2002.
Gives an overview of SRILM design and functionality.
A. Stolcke, J. Zheng, W. Wang, & V. Abrash,
SRILM at Sixteen: Update and Outlook,
in Proc. IEEE Automatic Speech Recognition and Understanding Workshop,
Waikoloa, Hawaii, December 2011.
Reviews updates to the toolkit since 2002.
D. Jurafsky,
Lecture 11 of his Stanford course "Speech Recognition and Synthesis".
Excellent introduction to the basic concepts of language modeling.
J. Goodman,
The State of The Art in Language Modeling,
presented at the
6th Conference of the Association for Machine Translation in the Americas
(AMTA), Tiburon, CA, October 2002.
Tutorial presentation and overview of current LM techniques
(with emphasis on machine translation).
J. R. Bellegarda,
Statistical language model adaptation: review and perspectives,
Speech Communication 42, 93-108, 2004.
Good overview of LM adaptation techniques, several of which are implemented in SRILM.
K. Kirchhoff, J. Bilmes, and K. Duh,
Factored Language Models Tutorial,
Tech. Report UWEETR-2007-0003, Dept. of EE, U. Washington, June 2007.
This report serves as both a tutorial and reference manual on FLMs.
S. F. Chen and J. Goodman,
An Empirical Study of Smoothing Techniques for Language Modeling,
Tech. Report TR-10-98, Computer Science Group,
Harvard U., Cambridge, MA, August 1998.
Excellent overview and comparative study of smoothing methods.
Served as a reference for many of the methods implemented in SRILM.
T. Alumäe and M. Kurimo,
Efficient Estimation of Maximum Entropy Language Models with N-gram features: an SRILM extension,
Proc. Interspeech, Makuhari, Japan, 2010.
Describes the maximum entropy model extension that is now incorporated into SRILM (as of version 1.7.1).
Also see the list of frequently asked questions
and the notes on the
N-gram smoothing implementations in SRILM.
These are the top-level executables that are currently part of SRILM:
- ngram-count: count N-grams and estimate language models
- ngram-merge: merge N-gram counts
- ngram: apply N-gram language models
- ngram-class: induce word classes from N-gram statistics
- disambig: disambiguate text tokens using an N-gram model
- hidden-ngram: tag hidden events between words
- nbest-lattice: rescore N-best lists and lattices
- nbest-optimize: optimize score combination for N-best word error minimization
- nbest-mix: interpolate N-best posterior probabilities
- segment: segment text using an N-gram language model
- segment-nbest: rescore and segment N-best lists using N-gram language models
- anti-ngram: count posterior-weighted N-grams in N-best lists
- multi-ngram: build multiword N-gram models
- lattice-tool: manipulate word lattices
- nbest-pron-score: score pronunciations and pauses in N-best hypotheses
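As a quick orientation, a typical workflow combining the two most commonly used executables, ngram-count and ngram, might look like the following sketch (file names are hypothetical; the options shown are standard ngram-count/ngram flags):

```shell
# Count trigrams in a training corpus and estimate a model with
# interpolated modified Kneser-Ney discounting, written in ARPA format:
ngram-count -order 3 -text train.txt -kndiscount -interpolate -lm tri.lm

# Compute the perplexity of held-out text under the resulting model:
ngram -order 3 -lm tri.lm -ppl heldout.txt
```

Each tool's man page documents many more options, e.g. for vocabulary handling, count cutoffs, and model interpolation.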
Additional tools implemented as scripts:
- training-scripts: miscellaneous conveniences for language model training
- lm-scripts: manipulate N-gram language models
- ppl-scripts: manipulate perplexities
- pfsg-scripts: create and manipulate finite-state networks
- nbest-scripts: rescore and evaluate N-best lists
- select-vocab: select a maximum-likelihood vocabulary from a mixture of corpora
- metadb: retrieve configuration information
Some of the data formats used by SRILM:
- ngram-format: ARPA backoff N-gram models
- classes-format: word class definitions
- pfsg-format: Decipher(TM) probabilistic finite-state grammars
- nbest-format: N-best hypotheses lists
- wlat-format: word posterior lattices
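For orientation, here is a tiny, made-up sample of the ARPA backoff N-gram model format: each line gives a base-10 log probability, the N-gram itself, and (for N-grams that begin longer N-grams) a base-10 backoff weight. All values below are hypothetical.

```
\data\
ngram 1=4
ngram 2=2

\1-grams:
-99     <s>     -0.30
-0.60   </s>
-0.48   the     -0.30
-0.48   cat

\2-grams:
-0.30   <s> the
-0.30   the cat

\end\
```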
LM Library Classes
These are some of the basic classes of the SRILM library.
Note that this list is woefully incomplete, as this part of the documentation
is largely yet to be written.
- LM: generic language model
- Vocab: vocabulary indexing for SRILM
- Prob: probabilities for SRILM
- File: wrapper for stdio streams