kalman-filter
Expectation maximization
Hello, thanks for your great library.
I was wondering how hard it would be to implement Expectation Maximization for generating the matrices, especially the observation covariance matrix, for better filtering performance on non-linear systems. Like pykalman does here:
https://github.com/pykalman/pykalman/blob/master/pykalman/standard.py#L1339
Maybe the implementation can make use of this library:
https://github.com/lovasoa/expectation-maximization
Can you give me any guidance on how to do that?
@recidive I feel this is a very good idea, but it will require some work! I can give support on this, but I won't have time in the coming months to do the implementation myself.
Strategy
The EM algorithm is based on a metric of "how well does the KF match the observations".
There are two different possible (and valid) approaches for this:
- We could reuse the idea from the README.md (measuring the KF model against the observations) and use the Mahalanobis distance for the EM algorithm
- We could do the same thing as pykalman and, before implementing the EM, focus on the log-likelihood (see #44)
I feel one should worry, on mathematical grounds, about the deep difference between the log-likelihood and the Mahalanobis distance for us:
- Are they equivalent?
- If not, what difference does it make for us?

What do you think?
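To make the comparison concrete, here is a minimal sketch (plain JS, not library code) of the two metrics for a scalar innovation; the variable names are just for illustration:

```js
// Per-step comparison for a scalar innovation v = observation - predicted observation,
// with innovation covariance s (assumed > 0).
const mahalanobis = (v, s) => Math.sqrt((v * v) / s);

const gaussianLogLikelihood = (v, s) => {
	const d2 = (v * v) / s; // squared Mahalanobis distance
	return -0.5 * (d2 + Math.log(s) + Math.log(2 * Math.PI));
};

// gaussianLogLikelihood = -0.5 * (mahalanobis^2 + log(s) + log(2 * PI))
// The extra log(s) term depends on the covariance we want EM to estimate,
// so maximizing the likelihood is not the same as minimizing the Mahalanobis distance.
console.log(mahalanobis(1.2, 0.5), gaussianLogLikelihood(1.2, 0.5));
```

If I read this correctly, they are not equivalent: the log(s) term (the log-determinant in the multivariate case) is exactly what keeps EM from inflating the covariance just to make the Mahalanobis distance small.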
EM Lib
expectation-maximization seems like a good implementation; however, it depends on numeric and multivariate-gaussian, dependencies I do not have yet (I have reimplemented most of the linear algebra I needed in order to avoid a big footprint).
So I feel we could do a first implementation here, but we would need to discuss the footprint of adding these additional deps to the project.
Maybe there are some workarounds, like:
- Implementing a `kalman-filter-em` package as a separate lib from the `kalman-filter` library, so we can require both libs when optimizing the KF, but only the core lib when using the KF (see the `package.json` sketch after this list)
- Reimplementing the EM code here
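As an illustration of the first workaround, here is a rough sketch of what the add-on's `package.json` could look like (the package name `kalman-filter-em`, the peer-dependency pattern and the `*` version ranges are hypothetical, nothing is decided or published); numeric and multivariate-gaussian would then only come in transitively through expectation-maximization, and the core package stays dependency-free:

```json
{
  "name": "kalman-filter-em",
  "description": "Hypothetical add-on: EM-based matrix estimation for kalman-filter",
  "main": "index.js",
  "peerDependencies": {
    "kalman-filter": "*"
  },
  "dependencies": {
    "expectation-maximization": "*"
  }
}
```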
Anyway, I feel this question is not the top priority: we can work with expectation-maximization for now, and once the unit tests pass and the results are alright, we will take care of the final package footprint.
TDD
If you want to work on this, I would like us to discuss the unit tests and the signatures before starting the implementation.
I'd like to work in TDD (Test-Driven Development): (A) first create the unit tests and make them fail, (B) then code the feature.
I would suggest something like:
- You create a branch `<feature>` for the project
- You create another branch (from `<feature>`) named `<feature>-unit-test`
- We use a PR to validate the still-failing unit tests together and discuss them (see the test sketch below)
- Once the tests are agreed on (but still failing), we merge `<feature>-unit-test` -> `<feature>`
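To give a concrete starting point for that discussion, here is the kind of still-failing test I have in mind for step (A); the module path, the `expectationMaximization` name and its signature are placeholders to be debated, not an agreed API:

```js
const assert = require('assert');
// Placeholder module: does not exist yet, so this test fails at require time (step A of TDD)
const {expectationMaximization} = require('../lib/expectation-maximization');

const observations = [[1.1], [1.9], [3.2], [4.1], [4.8]];

const {observationCovariance} = expectationMaximization({
	observations,
	iterations: 10
});

// Very loose first assertion, just to anchor the signature discussion:
// EM should return a positive observation variance for this 1-D example.
assert.ok(observationCovariance[0][0] > 0);
console.log('expectation-maximization test sketch passed');
```

Running it would fail until the feature exists, which is exactly the state we want on the `<feature>-unit-test` branch.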