Refactored and optimized segmentation metrics evaluation.
- Refactoring: the key purpose is to make some of the methods more useful for extension in client libraries and to make the code more concise by reducing duplication, which should also make community support easier.
- Optimisation: some of the code duplication caused preparative steps for metric calculation to be recomputed several times. Reusing these common steps significantly improved performance: in our microbenchmark of `segment.eval()` we achieved a 1.58x speedup. This may be crucial in applications that require intensive metric calculation, such as hyperparameter optimization.
- Benchmarking details: we took symbolic music files (MIDIs) with known expert markup of musical segments. 20 data points were considered, and we ran 300 iterations of the eval method over all of the data to get a better average.
- Benchmarking results:
  - Before: total time 24.702 s, latency 0.0823 s, average per data point 0.0041 s
  - After: total time 15.615 s, latency 0.0520 s, average per data point 0.0026 s
  - Speedup: 1.582x (~37% reduction in runtime)
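The kind of duplication described above can be illustrated with a minimal sketch. The function names and the boundary-matching rule below are hypothetical, not mir_eval's actual internals; the point is only that the shared preparative step (sorting/deduplicating boundaries) is computed once and reused by every metric instead of being redone inside each one:

```python
def _prepare(boundaries):
    # Shared preparative step: sort and deduplicate boundary times once.
    return sorted(set(boundaries))

def evaluate(reference, estimated, window=0.5):
    # Prepare each boundary list once and reuse the result for both
    # metrics, instead of re-sorting inside every metric function.
    ref = _prepare(reference)
    est = _prepare(estimated)
    # Hit-rate style matching: a boundary counts as matched if some
    # boundary in the other list lies within `window` seconds of it.
    matched_est = sum(any(abs(r - e) <= window for r in ref) for e in est)
    matched_ref = sum(any(abs(r - e) <= window for e in est) for r in ref)
    precision = matched_est / len(est) if est else 0.0
    recall = matched_ref / len(ref) if ref else 0.0
    return {'Precision': precision, 'Recall': recall}
```

Before the refactoring, each metric function would effectively call `_prepare` on the same inputs independently; hoisting it into the shared `evaluate` entry point is what removes the redundant work.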
@bmcfee @craffel This PR is getting stale, would you consider reviewing and merging it? TY
Sorry I won't have time to review this for a few weeks.
Sorry for the delay, I'm looking at this now. It's not the easiest PR to review because there's a mixture of cosmetic changes (i.e., variable names) and refactoring to sift through. It also appears that tests are not passing.