Allow retracking
Hi! There was an issue about allowing retracking of predicted instances without having to run inference again: https://github.com/talmolab/sleap/issues/260
But it is no longer working with SLEAP v1.2.4.
If the Inference Pipeline Type in the GUI is set to None and the Tracker is set to Flow or Simple, it returns an error saying it cannot run inference because no model was specified.
- Versions:
  - SLEAP: 1.2.4
  - TensorFlow: 2.6.3
  - Numpy: 1.19.5
  - Python: 3.7.12
  - OS: Windows-10-10.0.19041-SP0
- SLEAP installed with Conda from package
Thanks for the report @getzze! We'll look into this and try to get a fix up ASAP.
In the meantime, you might be able to use this notebook as a workaround to do retracking without inference: https://sleap.ai/notebooks/Post_inference_tracking.html
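Conceptually, the workaround in that notebook re-runs a tracker over already-predicted instances frame by frame, instead of running inference again. The toy sketch below illustrates that idea with a plain greedy centroid matcher; this is not the SLEAP API, just a stand-in for what the Simple tracker does at a high level:

```python
import math

def centroid(points):
    """Mean (x, y) of an instance's keypoints."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def retrack(frames, max_dist=50.0):
    """Assign track IDs to predicted instances across frames by greedily
    matching each instance to the nearest unused centroid from the
    previous frame (a toy stand-in for SLEAP's "simple" tracker).

    frames: list of frames; each frame is a list of instances, and each
    instance is a list of (x, y) keypoints. Returns a parallel list of
    track-ID lists.
    """
    next_id = 0
    prev = []  # (track_id, centroid) pairs from the previous frame
    all_ids = []
    for instances in frames:
        ids, used, cur = [], set(), []
        for inst in instances:
            c = centroid(inst)
            # Nearest unused previous-frame track within max_dist.
            best, best_d = None, max_dist
            for tid, pc in prev:
                if tid in used:
                    continue
                d = math.dist(c, pc)
                if d < best_d:
                    best, best_d = tid, d
            if best is None:  # no match: spawn a new track
                best = next_id
                next_id += 1
            used.add(best)
            ids.append(best)
            cur.append((best, c))
        all_ids.append(ids)
        prev = cur
    return all_ids
```

Note that nothing here needs a trained model: the only inputs are the already-predicted keypoints, which is exactly why a tracking-only mode should not require one.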
Cheers,
Talmo
Problem Analysis
Initially, on Nov 20, 2019 ("Add cli for running tracking by itself"), we added a function in `tracking.py` called `retrack()` that is only invoked if `__name__ == '__main__'` and cannot be reached through any SLEAP command. Although the retracking feature was added on Jan 22, 2020 via "Add tracking-only ui to sleap-track", that commit was later removed, seemingly by accident, on Mar 19, 2020 via "Revert to 28a0031".
Proposal
The code base has changed considerably since "Add tracking-only ui to sleap-track", so we need a custom solution rather than a copy-paste of the old commit. We can merge the logic from `retrack()` into the `sleap-track` command.
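The proposed behavior can be sketched as CLI dispatch logic: if no model is given but a tracker is requested and the input already contains predictions, run tracking only instead of erroring out. The snippet below is a hypothetical, simplified stand-in for the real `sleap-track` argument handling (the actual CLI has many more options):

```python
import argparse

def make_parser():
    # Hypothetical subset of the sleap-track CLI, for illustration only.
    p = argparse.ArgumentParser(prog="sleap-track")
    p.add_argument("data_path", help="Video, or .slp file with predictions")
    p.add_argument("--model", "-m", action="append", default=None,
                   help="Path to a trained model directory (repeatable)")
    p.add_argument("--tracking.tracker", dest="tracker", default=None,
                   choices=["simple", "flow", "none"],
                   help="Tracker to run on predicted instances")
    return p

def main(argv=None):
    args = make_parser().parse_args(argv)
    if args.model:
        # Models given: run inference, optionally followed by tracking.
        return ("inference+tracking"
                if args.tracker not in (None, "none") else "inference")
    if args.tracker not in (None, "none"):
        # Proposed fix: no model but a tracker was requested, so run
        # tracking-only on existing predictions (mesh in retrack() logic)
        # instead of failing with "no model was specified".
        return "tracking-only"
    raise SystemExit("Must specify a model and/or a tracker.")
```

The key design point is that the "no model specified" error only fires when neither a model nor a tracker is given.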
Relevant Code
1. Make the predictor from CLI args
   https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L4287-L4288
2. Create the predictor from the model
   https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L4209-L4215
3. The `Predictor` instance gets created inside `load_model`
   https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L3901-L3906
4. The `Tracker` is added to the `Predictor` inside `load_model`
   https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L3908-L3914
5. Run inference
   https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L4294-L4295
6. Generate predictions
   https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L431-L437
   - 6a. Process batch https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L402-L403
   - 6b. Run inference on the batch inside `process_batch` (note: `self.inference_model` is created through the abstract method `Predictor._initialize_inference_model`) https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L311-L312
   - 6c. Call `super` (`tf.keras.Model`) to predict on a single batch inside `predict_on_batch` https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L916
7. Make labeled frames from the generated predictions
   https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L433-L437
   - 7a. If the predictor is either `TopDownPredictor` or `BottomUpPredictor`, call the tracker inside `Predictor._make_labeled_frames_from_generator`...
     https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L2164-L2168
     https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L2699-L2703
   - 7b. ...and do some post-processing track cleaning
     https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L2178-L2179
     https://github.com/talmolab/sleap/blob/44e466197c469684502be9dcf13a88f5f0e91bd3/sleap/nn/inference.py#L2713-L2714
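Putting the steps above together, the control flow looks roughly like the sketch below. These are hypothetical stand-in classes mirroring the structure of the walkthrough, not the actual SLEAP implementations; the point is that tracking happens after inference, during labeled-frame construction, so it only depends on predicted instances:

```python
class Tracker:
    """Stand-in tracker: would assign track IDs to instances here."""
    def track(self, instances, t):
        return instances

class Predictor:
    """Stand-in predictor mirroring the flow above."""
    def __init__(self, tracker=None):
        self.tracker = tracker  # step 4: Tracker attached in load_model

    def _predict_generator(self, frames):
        for batch in frames:  # step 6a: process each batch
            yield batch       # steps 6b/6c: model inference per batch

    def _make_labeled_frames_from_generator(self, gen):
        labeled = []
        for t, instances in enumerate(gen):
            if self.tracker is not None:
                # Step 7a: the tracker runs here, after inference.
                instances = self.tracker.track(instances, t)
            labeled.append(instances)
        return labeled  # step 7b: track cleaning would follow here

    def predict(self, frames):  # step 5: run inference end to end
        return self._make_labeled_frames_from_generator(
            self._predict_generator(frames))
```

Because `_make_labeled_frames_from_generator` only consumes already-generated instances, the same tracking step could in principle be fed from stored predictions rather than a fresh inference pass.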
This issue has been resolved in the new release: install SLEAP v1.2.7 here.