nbest-optimize(1)
NAME
nbest-optimize - optimize score combination for N-best word error minimization
SYNOPSIS
nbest-optimize [ -help ] option ... [ scoredir ... ]
DESCRIPTION
nbest-optimize
reads a set of N-best lists, additional score files, and corresponding
reference transcripts, and optimizes the score combination weights
so as to minimize the word error of a classifier that performs
word-level posterior probability maximization.
The optimized weights are meant to be used with
nbest-lattice(1)
and the
-use-mesh
option,
or the
nbest-rover
script (see
nbest-scripts(1)).
nbest-optimize
determines both the best relative weighting of knowledge source scores
and the optimal
-posterior-scale
parameter that controls the peakedness of the posterior distribution.
Alternatively,
nbest-optimize
can also optimize weights for a standard, 1-best hypothesis rescoring that
selects entire (sentence) hypotheses
(-1best
option).
In this mode sentence-level error counts may be read from external files,
or computed on the fly from the reference strings.
The weights obtained are meant to be used for N-best list rescoring with
rescore-reweight
(see
nbest-scripts(1)).
A third optimization criterion is the BLEU score used in machine translation.
This also requires the associated scores to be read from external files.
One of three optimization algorithms is available:
 1.

The default optimization method is gradient descent on a smoothed (sigmoidal)
approximation of the true 0/1 word error function (Katagiri et al. 1990).
Therefore, the result can only be expected to be a
local
minimum of the error surface.
(A more global search can be attempted by specifying different starting
points.)
Another approximation is that the error function is computed assuming a fixed
multiple alignment of all N-best hypotheses and the reference string,
which tends to slightly overestimate the true pairwise error between any
single hypothesis and the reference.
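The sigmoid smoothing used by this method can be illustrated with a small sketch. This is a hypothetical reimplementation of the idea (in the spirit of Katagiri et al. 1990), not the tool's actual loss function; the slope parameter here mirrors the -alpha option described below.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def smoothed_sentence_loss(correct_score, competitor_scores, alpha):
    # Misclassification measure: positive when the best competitor
    # outscores the correct hypothesis, negative otherwise.
    d = max(competitor_scores) - correct_score
    # Sigmoid approximation of the 0/1 loss; a larger alpha gives a
    # sharper, less smooth approximation of the true step function.
    return sigmoid(alpha * d)
```

Because the loss is differentiable in the scores (and hence in the weights), gradient descent can be applied, at the cost of only finding a local minimum.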
 2.

An alternative search strategy uses a simplex-based "Amoeba" search on
the (non-smoothed) error function (Press et al. 1988).
The search is restarted multiple times to avoid local minima.
 3.

A third algorithm uses Powell search (Press et al. 1988)
on the (non-smoothed) error function.
OPTIONS
Each filename argument can be an ASCII file, or a
compressed file (name ending in .Z or .gz), or ``-'' to indicate
stdin/stdout.
 -help

Print option summary.
 -version

Print version information.
 -debug level

Controls the amount of output (the higher the
level,
the more).
At level 1, error statistics at each iteration are printed.
At level 2, word alignments are printed.
At level 3, the full score matrix is printed.
At level 4, detailed information about word hypothesis ranking is printed
for each training iteration and sample.
 -nbest-files filelist

Specifies the set of N-best files as a list of filenames.
Three sets of standard scores are extracted from the N-best files:
the acoustic model score, the language model score, and the number of
words (for insertion penalty computation).
See
nbest-format(5)
for details.
In BLEU optimization mode, since there is no acoustic score, the
position
of the first score is taken by the "ac-replacement" score, which can be
any score used by the machine translation system.
A typical example is a score measuring
word order distortion between the source and target languages.
 -srinterp-format

Parse N-best lists in SRInterp format, which has
features and text on the same line. nbest-optimize will also generate a
rover-control file in SRInterp format, where each line is of the form:
F1=V1 F2=V2 ... Fm=Vm W1 W2 ... Wn
where
F1
through
Fm
are feature names,
V1
through
Vm
are feature values,
W1
through
Wn
are words.
Also generates an SRInterp control file, in the format:
F1:S1 F2:S2 ... Fm:Sm
where
S1
through
Sm
are scaling factors (weights) for features
F1
through
Fm.
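A minimal parser for the two SRInterp line formats above might look like this. It is a sketch based only on the layout described here (tokens are treated as feature pairs until the first token lacking the separator); the real tool's field handling may differ.

```python
def parse_srinterp_nbest_line(line):
    """Split 'F1=V1 ... Fm=Vm W1 ... Wn' into (features, words)."""
    tokens = line.split()
    features = {}
    i = 0
    # Consume feature=value tokens; the first token without '='
    # starts the hypothesis word sequence.
    while i < len(tokens) and '=' in tokens[i]:
        name, value = tokens[i].split('=', 1)
        features[name] = float(value)
        i += 1
    return features, tokens[i:]

def parse_srinterp_control_line(line):
    """Split 'F1:S1 F2:S2 ... Fm:Sm' into a feature -> weight dict."""
    weights = {}
    for token in line.split():
        name, scale = token.split(':', 1)
        weights[name] = float(scale)
    return weights
```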
 -refs references

Specifies the reference transcripts.
Each line in
references
must contain the sentence ID (the last component in the N-best filename
path, minus any suffixes) followed by zero or more reference words.
 -insertion-weight W

Weight insertion errors by a factor
W.
This may be useful to optimize for keyword spotting tasks where
insertions have a cost different from deletion and substitution errors.
 -word-weights file

Read a table of words and weights from
file.
Each word error is weighted according to the word-specific weight.
The default weight is 1, and is used if a word has no specified weight.
Also, when this option is used, substitution errors are counted
as the sum of a deletion and an insertion error, as opposed to counting
as 1 error as in traditional word error computation.
 -anti-refs file

Read a file of "anti-references" for use with the
-anti-ref-weight
option (see below).
 -anti-ref-weight W

Compute the hypothesis errors for 1-best optimization by adding the
edit distance with respect to the "anti-references", times the weight
W,
to the regular error count.
If
W
is negative this will tend to generate hypotheses that are different from
the anti-references (hence the name).
 -1best

Select optimization for standard sentence-level hypothesis selection.
 -1best-first

Optimize first using
-1best
mode, then switch to full optimization.
This is an effective way to quickly bring the score weights near an
optimal point, and then fine-tune them jointly with the posterior scale
parameter.
 -errors dir

In 1-best mode, optimize for error counts that are stored in separate files
in directory
dir.
Each N-best list must have a matching error counts file of the same
basename in
dir.
Each file contains 7 columns of numbers in the format
wcr wer nsub ndel nins nerr nw
Only the last two columns (number of errors and words, respectively) are used.
If this option is omitted, errors will be computed from the N-best hypotheses
and the reference transcripts.
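For illustration, the 7-column error counts format could be read as follows. This is a hypothetical helper, not part of the tool; as stated above, only the last two columns are consumed.

```python
def read_error_counts(lines):
    """Parse 'wcr wer nsub ndel nins nerr nw' lines, one per N-best
    hypothesis, returning (nerr, nw) pairs -- the only two columns
    actually used."""
    counts = []
    for line in lines:
        fields = line.split()
        if not fields:
            continue
        # fields[5] = number of errors, fields[6] = number of words
        counts.append((int(fields[5]), int(fields[6])))
    return counts
```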
 -bleu-counts dir

Perform BLEU optimization, reading BLEU reference counts from directory
dir.
Each N-best list must have a matching counts file of the same
basename in
dir,
containing the following information:
N M L1 ... LM
where
N
is the number of hypotheses in the N-best list,
M
is the number of references for the utterance,
and
L1
through
LM
are the reference lengths (word counts) for each reference.
Following this line, there are
N
lines of the form
K C1 C2 ... Cm
where
K
is the number of words in the hypothesis and
C1
through
Cm
are the N-gram counts occurring in the references for each N-gram order
1, ...,
m.
Currently,
m
is limited to 4.
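To show how such count files support BLEU computation, here is a sketch of corpus BLEU from precomputed counts. The function and the way the brevity-penalty reference length is passed in are illustrative assumptions, not the tool's internals; it implements the standard BLEU formula (geometric mean of modified n-gram precisions times a brevity penalty).

```python
import math

def corpus_bleu(utts, max_order=4):
    """utts: one (K, [C1..Cm], ref_len) tuple per utterance, where K is
    the selected hypothesis length, Cn the matched n-gram counts, and
    ref_len the reference length used for the brevity penalty."""
    matched = [0] * max_order
    possible = [0] * max_order
    hyp_len = ref_len = 0
    for k, counts, rlen in utts:
        hyp_len += k
        ref_len += rlen
        for n in range(max_order):
            matched[n] += counts[n]
            # a K-word hypothesis contains K - n n-grams of order n+1
            possible[n] += max(k - n, 0)
    log_prec = sum(math.log(matched[n] / possible[n]) for n in range(max_order))
    bp = 1.0 if hyp_len >= ref_len else math.exp(1.0 - ref_len / hyp_len)
    return bp * math.exp(log_prec / max_order)
```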
 -minimum-bleu-reference

Use shortest reference length to compute the BLEU brevity penalty.
 -closest-bleu-reference

Use closest reference length for each translation hypothesis to compute
the BLEU brevity penalty.
 -average-bleu-reference

Use average reference length to compute the BLEU brevity penalty.
 -error-bleu-ratio R

Specifies the weight of the error rate when combined with BLEU as the
optimization objective: (1 - BLEU) + ERR x R.
ERR
is the error rate, computed as #errors/#references.
 -max-nbest n

Limits the number of hypotheses read from each N-best list to the first
n.
 -rescore-lmw lmw

Sets the language model weight used in combining the language model log
probabilities with acoustic log probabilities.
This is used to compute initial aggregate hypothesis scores.
 -rescore-wtw wtw

Sets the word transition weight used to weight the number of words relative to
the acoustic log probabilities.
This is used to compute initial aggregate hypothesis scores.
 -posterior-scale scale

Initial value for scaling log posteriors.
The total weighted log score is divided by
scale
when computing normalized posterior probabilities.
This controls the peakedness of the posterior distribution.
The default value is whatever was chosen for
-rescore-lmw,
so that language model scores are scaled to have weight 1,
and acoustic scores have weight 1/lmw.
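The role of the scale can be illustrated with a small sketch of log-linear score combination and posterior normalization. This is hypothetical code, not the implementation; it shows why dividing by a larger scale flattens the posterior distribution.

```python
import math

def hyp_posteriors(score_matrix, weights, scale):
    """score_matrix: one row of knowledge-source scores per hypothesis.
    Each hypothesis's weighted total log score is divided by `scale`
    before normalization; larger scales flatten the distribution,
    smaller scales sharpen it."""
    totals = [sum(w * s for w, s in zip(weights, row)) / scale
              for row in score_matrix]
    m = max(totals)  # subtract the max for numerical stability
    exps = [math.exp(t - m) for t in totals]
    z = sum(exps)
    return [e / z for e in exps]
```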
 -combine-linear

Compute aggregate scores by linear combination, rather than log-linear
combination.
(This is appropriate if the input scores represent log posterior probabilities.)
 -non-negative

Constrain search to nonnegative weight values.
 -vocab file

Read the N-best list vocabulary from
file.
This option is mostly redundant since words found in the N-best input
are implicitly added to the vocabulary.
 -tolower

Map vocabulary to lowercase, eliminating case distinctions.
 -multiwords

Split multiwords (words joined by '_') into their components when reading
N-best lists.
 -multi-char C

Character used to delimit component words in multiwords
(an underscore character by default).
 -no-reorder

Do not reorder the hypotheses for alignment, and start the alignment with
the reference words.
The default is to first align hypotheses by order of decreasing scores
(according to the initial score weighting) and then the reference,
which is more compatible with how
nbest-lattice(1)
operates.
 -noise noise-tag

Designate
noise-tag
as a vocabulary item that is to be ignored in aligning hypotheses with
each other (the same as the -pause word).
This is typically used to identify a noise marker.
 -noise-vocab file

Read several noise tags from
file,
instead of, or in addition to, the single noise tag specified by
-noise.
 -hidden-vocab file

Read a subvocabulary from
file
and constrain word alignments to only group those words that are either all
inside or all outside the subvocabulary.
This may be used to keep ``hidden event'' tags from aligning with
regular words.
 -dictionary file

Use word pronunciations listed in
file
to construct word alignments when building word meshes.
This will use an alignment cost function that reflects the number of
inserted/deleted/substituted phones, rather than words.
The dictionary
file
should contain one pronunciation per line, each naming a word in the first
field, followed by a string of phone symbols.
 -distances file

Use the word distance matrix in
file
as a cost function for word alignments.
Each line in
file
defines a row of the distance matrix.
The first field contains the word that is the row index,
followed by one or more word/number pairs, where the word represents the
column index and the number the distance value.
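Assuming the word/number pairs are whitespace-delimited alternating tokens (an assumption about the exact layout, not confirmed by the text above), a distance matrix could be parsed like this:

```python
def read_distance_matrix(lines):
    """Build dist[row_word][col_word] = distance from -distances data.
    Each line: a row word followed by alternating column-word and
    distance tokens."""
    dist = {}
    for line in lines:
        fields = line.split()
        if not fields:
            continue
        row, rest = fields[0], fields[1:]
        # Pair up alternating (word, number) tokens.
        dist[row] = {rest[i]: float(rest[i + 1])
                     for i in range(0, len(rest) - 1, 2)}
    return dist
```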
 -init-lambdas 'w1 w2 ...'

Initialize the score weights to the values specified
(zeros are filled in for missing values).
The default is to set the initial acoustic model weight to 1,
the language model weight from
-rescore-lmw,
the word transition weight from
-rescore-wtw,
and all remaining weights to zero initially.
Prefixing a value with an equal sign (`=')
holds the value constant during optimization.
(All values should be enclosed in quotes to form a single command-line
argument.)
Hypotheses are aligned using the initial weights; thus, it makes sense
to reoptimize with initial weights from a previous optimization in order
to obtain alignments closer to the optimum.
 -alpha a

Controls the error function smoothness;
the sigmoid slope parameter is set to
a.
 -epsilon e

The step size used in gradient descent (the multiple of the gradient vector).
 -min-loss x

Sets the loss function for a sample effectively to zero when its value falls
below
x.
 -max-delta d

Ignores the contribution of a sample to the gradient if the derivative
exceeds
d.
This helps avoid numerical problems.
 -max-iters m

Stops optimization after
m
iterations.
In Amoeba search, this limits the total number of points in the parameter space
that are evaluated.
 -max-bad-iters n

Stops optimization after
n
iterations during which the actual (non-smoothed) error has not decreased.
 -max-amoeba-restarts r

Perform only up to
r
repeated Amoeba searches.
The default is to search until
D
searches give the same results, where
D
is the dimensionality of the problem.
 -max-time T

Abort the search if a new lower-error point isn't found within
T
seconds.
 -epsilon-stepdown s

 -min-epsilon m

If
s
is a value greater than zero, the learning rate will be multiplied by
s
every time the error does not decrease after a number of iterations
specified by
-max-bad-iters.
Training stops when the learning rate falls below
m
in this manner.
 -converge x

Stops optimization when the (smoothed) loss function changes relatively by less
than
x
from one iteration to the next.
 -quickprop

Use the approximate second-order method known as "QuickProp" (Fahlman 1989).
 -init-amoeba-simplex 's1 s2 ...'

Perform Amoeba simplex search.
The argument defines the step size for the initial Amoeba simplex.
One value for each non-fixed search dimension should be specified,
plus optionally a value for the posterior scaling parameter
(which is searched as an added dimension).
 -init-powell-range 'a1,b1 a2,b2 ...'

Perform Powell search.
The argument initializes the weight ranges for Powell search.
One comma-separated pair of values for each search dimension should
be specified. For each dimension, if the upper bound equals the lower bound
and the initial lambda, that dimension will be held fixed, even if not so
specified by
-init-lambdas.
 -num-powell-runs N

Sets the number of random runs for the quick Powell grid search
(the default value is 20).
 -dynamic-random-series

Use the time and process ID to initialize the seed for the pseudo-random
series used in Powell search.
This will make results unrepeatable but may yield better results through
multiple trials.
 -print-hyps file

Write the best word hypotheses to
file
after optimization.
 -print-top-n N

Write out the top
N
rescored hypotheses.
In this case
-print-hyps
specifies a directory (not a file)
and one file per N-best list is generated.
 -print-unique-hyps

Eliminate duplicate hypotheses when writing out N-best hypotheses.
 -print-old-ranks

Output the original hypothesis ranks when writing out N-best hypotheses.
 -compute-oracle

Find the lowest error rate or the highest BLEU score achievable by choosing
among all N-best hypotheses.
 -print-oracle-hyps file

Print the oracle hypotheses to
file.
 -write-rover-control file

Writes a control file for
nbest-rover
to
file,
reflecting the names of the input directories and the optimized parameter
values.
The format of
file
is described in
nbest-scripts(1).
The file is rewritten for each new minimal error weight combination found.
In BLEU optimization, the weight for the "ac-replacement" score will be written
in the place of the posterior scale,
since posterior scaling is not used in BLEU optimization.
 -skip-opt

Skip optimization altogether, such as when only the
-print-hyps
function is to be exercised.
 --

Signals the end of options, such that following command-line arguments are
interpreted as additional score files even if they start with `-'.
 scoredir ...

Any additional arguments name directories containing further score files.
In each directory, there must exist one file named after the sentence
ID it corresponds to (the file may also end in ``.gz'' and contain compressed
data).
The total number of score dimensions is thus 3 (for the standard scores from
the N-best list) plus the number of additional score directories specified.
SEE ALSO
nbest-lattice(1), nbest-scripts(1), nbest-format(5).
S. Katagiri, C.-H. Lee, & B.-H. Juang, "A Generalized Probabilistic Descent
Method", in
Proceedings of the Acoustical Society of Japan, Fall Meeting,
pp. 141-142, 1990.
S. E. Fahlman, "Faster-Learning Variations on Back-Propagation: An
Empirical Study", in D. Touretzky, G. Hinton, & T. Sejnowski (eds.),
Proceedings of the 1988 Connectionist Models Summer School, pp. 38-51,
Morgan Kaufmann, 1989.
W. H. Press, B. P. Flannery, S. A. Teukolsky, & W. T. Vetterling,
Numerical Recipes in C: The Art of Scientific Computing,
Cambridge University Press, 1988.
BUGS
Gradient-based optimization is not supported (yet) in 1-best or BLEU mode,
or in conjunction with the
-combine-linear
or
-non-negative
options;
use simplex or Powell search instead.
The N-best directory in the control file output by
-write-rover-control
is inferred from the
first N-best filename specified with
-nbest-files,
and will therefore only work if all N-best lists are placed in the same
directory.
The
-insertion-weight
and
-word-weights
options only affect the word error computation, not the construction
of hypothesis alignments.
Also, they only apply to sausage-based, not 1-best error optimization.
(1-best errors may be explicitly specified using the
-errors
option.)
The
-anti-refs
and
-anti-ref-weight
options do not work for sausage-based or BLEU optimization.
AUTHORS
Andreas Stolcke <andreas.stolcke@microsoft.com>
Dimitra Vergyri <dverg@speech.sri.com>
Jing Zheng <zj@speech.sri.com>
Copyright (c) 2000-2012 SRI International, 2012 Microsoft Corp.