
Unrealistic perplexity

Open GoogleCodeExporter opened this issue 9 years ago • 3 comments

I'm trying to evaluate a 5-gram model on a Vietnamese corpus, but the perplexity
doesn't seem to be right...


What steps will reproduce the problem?
1. Download and extract problem.zip
2. Follow the README file


What is the expected output? What do you see instead?

The results from BerkeleyLM and SRILM should be comparable, but BerkeleyLM
returns an unrealistic perplexity of around 1.


What version of the product are you using? On what operating system?

1.1.5 on Ubuntu.

Please provide any additional information below.

Original issue reported on code.google.com by [email protected] on 12 Feb 2014 at 3:27

Attachments:

GoogleCodeExporter avatar Jul 16 '15 16:07 GoogleCodeExporter

Sorry for taking so long to get back. My first guess is that it has something
to do with the score for unseen words. Can you verify that scoring the data
you generated the LM from (so that there are no unknown words) with both
SRILM and BerkeleyLM gives similar results? Otherwise, it might be some ugly
character-encoding issue.

Original comment by [email protected] on 18 Feb 2014 at 12:09


It is better but still an order of magnitude smaller (in absolute value)
than that of SRILM. My corpus is encoded in UTF-8. Vietnamese text makes
heavy use of accented characters which cannot be represented in ASCII.


$ . ./env.sh

$ java -ea -mx1000m -server -cp berkeleylm.jar \
    edu.berkeley.nlp.lm.io.MakeKneserNeyArpaFromText 5 segmented.arpa $SEGMENTED_CORPUS_TRAIN
$ java -ea -mx1000m -server -cp berkeleylm.jar \
    edu.berkeley.nlp.lm.io.MakeLmBinaryFromArpa segmented.arpa segmented.binary
$ java -ea -mx1000m -server -cp berkeleylm.jar \
    edu.berkeley.nlp.lm.io.ComputeLogProbabilityOfTextStream segmented.binary $SEGMENTED_CORPUS_TRAIN
Log probability of text is: *-67358.47160708543*

$ ngram-count -ukndiscount -order 5 -lm segmented.srilm.arpa -text $SEGMENTED_CORPUS_TRAIN
$ ngram -lm segmented.srilm.arpa -ppl $SEGMENTED_CORPUS_TRAIN
file segmented.train.txt: 68197 sentences, 1.54738e+06 words, 0 OOVs
0 zeroprobs, logprob= *-3.16751e+06* ppl= 91.3297 ppl1= 111.435
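For reference, SRILM computes perplexity as ppl = 10^(-logprob / (words - OOVs + sentences)), where the denominator counts one end-of-sentence token per sentence. A quick sketch plugging both tools' totals into that formula (assuming BerkeleyLM's figure is also a base-10 log over the same token count, which is not confirmed anywhere in this thread) reproduces the ~91 vs. ~1 discrepancy:

```python
import math

# Figures reported in this thread for segmented.train.txt:
sentences = 68197
words = 1_547_380                    # 1.54738e+06, 0 OOVs
srilm_logprob = -3.16751e6           # log10, from `ngram -ppl`
blm_logprob = -67358.47160708543     # from ComputeLogProbabilityOfTextStream

# SRILM: ppl = 10^(-logprob / (words - OOVs + sentences))
tokens = words + sentences
srilm_ppl = 10 ** (-srilm_logprob / tokens)
print(f"SRILM ppl:      {srilm_ppl:.2f}")   # 91.33, matching ppl= 91.3297 above

# Treating BerkeleyLM's total the same way (assumption, see lead-in)
# yields the "unrealistic" perplexity of around 1:
blm_ppl = 10 ** (-blm_logprob / tokens)
print(f"BerkeleyLM ppl: {blm_ppl:.2f}")     # 1.10
```

In other words, the totals differ by a factor of ~47 in log space, so the implied per-token probabilities are wildly different; the question is whether BerkeleyLM's number is a total over the same tokens, in the same log base.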

Original comment by [email protected] on 18 Feb 2014 at 12:26


Interesting. Full disclosure: I don't have time to do real debugging anymore
myself, so I think you're largely on your own. By default, SRILM does things
differently with modified KN smoothing and the computation of discount factors.
At one point, I made sure the two did exactly the same thing for some
simplified SRILM settings, but I couldn't tell you what those settings are.

If I were you, I would check very short sentences made of very common words.
Most of the difference between SRILM and BerkeleyLM shows up on low-count
words, so the gap should shrink if that's all that's going on.
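One way to act on this suggestion is to pull such test sentences out of the training corpus automatically. A sketch (`frequent_short_sentences` is a hypothetical helper, assuming a whitespace-tokenized UTF-8 file with one sentence per line):

```python
from collections import Counter

def frequent_short_sentences(corpus_path, top_k=100, max_len=4, limit=20):
    """Pick short sentences made entirely of high-frequency words,
    for comparing SRILM and BerkeleyLM scores on easy inputs."""
    with open(corpus_path, encoding="utf-8") as f:
        sentences = [line.split() for line in f]
    counts = Counter(w for s in sentences for w in s)
    common = {w for w, _ in counts.most_common(top_k)}
    picked = [s for s in sentences
              if 0 < len(s) <= max_len and all(w in common for w in s)]
    return [" ".join(s) for s in picked[:limit]]
```

Writing the returned sentences to a file and scoring that file with both `ngram -ppl` and `ComputeLogProbabilityOfTextStream` should show whether the two models agree on high-count n-grams.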

Original comment by [email protected] on 18 Feb 2014 at 12:42
