seqlearn
Bug in HMM normalization?
Hi,
I was trying seqlearn on a realistic problem and the HMM performed much worse than a structured perceptron (1% vs. ~30% F-score). Suspecting something was wrong, I also tried the HMM implementation in NLTK, which performs almost as well as the perceptron.
Inspecting the learned parameters, I found unexpected values, while NLTK's looked sensible. After going through the calculation step by step, I believe the problem is the normalization; with this patch I get good performance.
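For context, a minimal sketch of the kind of log-space normalization involved here (this is illustrative code, not the actual seqlearn implementation; `log_normalize` and the example counts are hypothetical). The idea is that HMM parameters kept as log-counts must be normalized along the correct axis so each distribution's probabilities sum to one, i.e. `logsumexp` along that axis equals zero:

```python
import numpy as np
from scipy.special import logsumexp

def log_normalize(log_counts, axis=-1):
    """Normalize log-space counts into log-probabilities along `axis`.

    Subtracting logsumexp along an axis is the log-space equivalent of
    dividing by the sum along that axis; normalizing over the wrong axis
    (or over the whole matrix) yields invalid distributions.
    """
    return log_counts - logsumexp(log_counts, axis=axis, keepdims=True)

# Example: raw transition counts between 3 hidden states (hypothetical data).
counts = np.array([[5.0, 3.0, 2.0],
                   [1.0, 8.0, 1.0],
                   [2.0, 2.0, 6.0]])

# Normalize each row so that P(next state | current state) sums to 1.
log_trans = log_normalize(np.log(counts), axis=1)

# Each row of exp(log_trans) is now a valid probability distribution.
assert np.allclose(np.exp(log_trans).sum(axis=1), 1.0)
```

Whether the fix in the patch corresponds exactly to choosing the right axis here is a guess on my part; the sketch just shows why a misplaced normalization can silently produce badly skewed parameters.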
I should write a failing test to actually verify that this is correct, and I will try to do so, but it might take a while to find the time...
I suppose so indeed! At first glance the nltk implementation doesn't seem to have an equivalent. Maybe it would be good to write up the specifics of the model implemented here.
On September 22, 2016 6:45:46 PM EDT, Mikhail Korobov [email protected] wrote:
kmike commented on this pull request.
@@ -68,10 +68,10 @@ def fit(self, X, y, lengths):
        final_prob -= logsumexp(final_prob)
like https://github.com/larsmans/seqlearn/pull/29?
View it on GitHub: https://github.com/larsmans/seqlearn/pull/30