<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 11/6/2012 6:50 AM, Md. Akmal Haidar
wrote:<br>
</div>
<blockquote
cite="mid:1352213457.66682.YahooMailNeo@web161006.mail.bf1.yahoo.com"
type="cite">
<div style="color:#000; background-color:#fff; font-family:times
new roman, new york, times, serif;font-size:12pt">
<div class="yui_3_7_2_18_1352211731059_53" style="font-family:
times new roman, new york, times, serif; font-size: 12pt;">Hi,<br>
<br>
I have found the same WER scoring result using LMs with two
different smoothing methods (additive/Witten-Bell). <br>
First I created the HTK lattice using the LM. Then I
used lattice-tool to find the n-best list. <br>
<br>
How can two LMs trained on the same text with different
smoothing give the same WER result?<br>
<br>
Thanks<br>
Best Regards<br>
Akmal<br>
</div>
<span></span></div>
</blockquote>
Do the LM probabilities differ in the details? (Compare the
rescored n-best lists.)<br>
<br>
If so, it could simply be that, on your data, the smoothing
method by itself does not change the probabilities enough to
alter the top hypothesis choice.<br>
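<br>
A quick way to check this is to compare the 1-best hypothesis from each
rescored list directly. A minimal sketch (not SRILM itself, and assuming a
hypothetical line format of "&lt;total-score&gt; &lt;word1&gt; &lt;word2&gt; ...", higher
log score = better):<br>

```python
# Sketch: do two rescored n-best lists agree on the top hypothesis?
# Assumes each line is "<total-score> <word> <word> ..." (hypothetical
# format; adapt the parsing to your actual n-best file layout).

def top_hypothesis(nbest_lines):
    """Return the hypothesis string with the highest total score."""
    best_score, best_hyp = None, None
    for line in nbest_lines:
        fields = line.split()
        if not fields:
            continue  # skip blank lines
        score = float(fields[0])
        hyp = " ".join(fields[1:])
        if best_score is None or score > best_score:
            best_score, best_hyp = score, hyp
    return best_hyp

# Example: additive vs. Witten-Bell rescoring of the same n-best list.
additive = [
    "-120.3 the cat sat",
    "-121.7 a cat sat",
]
witten_bell = [
    "-118.9 the cat sat",
    "-119.2 a cat sat",
]

# The scores differ, but the 1-best (and hence the WER) is identical.
print(top_hypothesis(additive) == top_hypothesis(witten_bell))
```

If the scores differ between the two lists but the ranking of the top
hypothesis does not, identical WER is exactly what you would expect.<br>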
<br>
Andreas<br>
<br>
</body>
</html>