
Greedy speedup

ghannum opened this issue on Jan 24, 2018 · 2 comments

Soft-DTW looks like the perfect solution for my deep-learning model. However, the speed is a major bottleneck in training (minibatches of 64 samples, with 2000 positions × 25 classes).

Would it be possible to add a parameter for greedy scoring, which would scale better in time?

For example, I never need alignments with more than a few insertions/deletions. Perhaps this can be achieved by controlling the maximum recursion depth?

ghannum commented on Jan 24, 2018

I think the right way to do it would be to add a band constraint, as done in Fast Global Alignment Kernels by @marcocuturi. This would make it possible to compute distances only for pairs of observations that are not too far from the diagonal. It should be fairly straightforward, but we haven't gotten around to doing it yet.
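
For anyone who needs a workaround in the meantime, here is a minimal NumPy sketch of what such a Sakoe-Chiba band constraint could look like on top of the soft-DTW recursion. It assumes a precomputed pairwise cost matrix `D`; the names `soft_dtw_banded`, `softmin`, and the `band` parameter are illustrative, not part of this package's API.

```python
import numpy as np

def softmin(a, b, c, gamma):
    # Soft minimum: -gamma * log(sum(exp(-x / gamma))); recovers min() as gamma -> 0.
    vals = -np.array([a, b, c]) / gamma
    m = vals.max()
    if np.isneginf(m):
        return np.inf  # all three predecessors fall outside the band
    return -gamma * (m + np.log(np.exp(vals - m).sum()))

def soft_dtw_banded(D, gamma=1.0, band=10):
    """Soft-DTW over a precomputed cost matrix D (m x n), restricted to a
    Sakoe-Chiba band of half-width `band` cells around the diagonal.
    Out-of-band cells stay at +inf, so roughly O(max(m, n) * band) cells
    are filled instead of the full O(m * n) table."""
    m, n = D.shape
    R = np.full((m + 1, n + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, m + 1):
        center = int(round(i * n / m))  # rescale the diagonal for unequal lengths
        for j in range(max(1, center - band), min(n, center + band) + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                R[i - 1, j], R[i, j - 1], R[i - 1, j - 1], gamma)
    return R[m, n]
```

A small `band` matches the "only a few insertions/deletions" use case above, since the alignment path can never stray more than `band` cells from the diagonal; for the 2000-position sequences mentioned, that cuts the work per pair from ~4M cells to ~2000 × (2 × band + 1).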

mblondel commented on Feb 13, 2018

@ghannum You will probably be interested in PR #9.

mblondel commented on Apr 23, 2018