Add object keypoint similarity method
Using the flow tracker with two mice, I was sometimes getting unexpected identity switches when one of the mice was missing half or more of its keypoints.
I figured it was a problem with the instance_similarity function, so I wrote a new object_keypoint_similarity function (in fact a function factory, because it takes parameters). The instance_similarity function computes the distance between each keypoint of a reference instance and the corresponding keypoint of a query instance, takes exp(-d**2), sums over all keypoints, and divides by the number of visible keypoints in the reference instance.
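For context, here is a minimal sketch of that existing computation (assuming instances expose a points_array of shape (n_keypoints, 2) with NaN for missing keypoints, as SLEAP instances do):

```python
import numpy as np

def instance_similarity(ref_instance, query_instance) -> float:
    """Original similarity: Gaussian of raw pixel distances, normalized by
    the number of keypoints visible in the *reference* instance."""
    ref_pts = ref_instance.points_array
    query_pts = query_instance.points_array
    # Keypoints present in the reference instance.
    ref_visible = ~np.isnan(ref_pts).any(axis=1)
    # Squared pixel distances per keypoint (NaN where either point is missing).
    dists = np.sum((query_pts - ref_pts) ** 2, axis=-1)
    # exp(-d**2) per keypoint, summed over visible pairs, normalized by the
    # reference's visible keypoint count.
    return np.nansum(np.exp(-dists)) / np.sum(ref_visible)
```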
Here is a description of the three changes I made and why (a sketch combining all three follows the list):
1. Adding a scale to the distance between the reference and query keypoints. Otherwise, if the reference and query keypoints are 3 pixels apart, they contribute 0.0001 to the similarity score, versus 0.36 if they are 1 pixel apart, which is very sensitive to single-pixel fluctuations. Instead, the distance is divided by a user-defined pixel scale before applying the Gaussian function. The scale can be chosen as the error for each keypoint found during training of the model on the validation set. Ideally this could be retrieved automatically; for now it is hidden in the metrics.val.npz file of the model. This is what they use in this paper.
2. The prediction score for each keypoint can be used to weight the influence of each keypoint similarity in the total similarity. This way, uncertain keypoints will not bias the total similarity.
3. Dividing the sum of individual keypoint similarities by the number of visible keypoints in the reference instance results in higher similarity scores if the reference has few keypoints (meaning a bad reference instance). Imagine a query instance with 4 keypoints:
   - a first reference instance with 1 keypoint that exactly matches one query keypoint: similarity = exp(-0)/1 = 1
   - a second reference instance with 4 keypoints, of which 3 exactly match the query instance: similarity = (1+1+1)/4 = 0.75

   Dividing by the total number of keypoints instead gives 0.25 and 0.75 respectively, which is preferable.
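Putting the three changes together, a minimal sketch of such a factory might look like this (the shipped function is factory_object_keypoint_similarity in sleap/nn/tracker/components.py; the parameter names and the exact missing-point handling below are illustrative assumptions, not the exact implementation):

```python
import numpy as np

def factory_object_keypoint_similarity(keypoint_errors=1.0, score_weighting=False):
    """Build a similarity function with the scale and weighting baked in.

    keypoint_errors: scalar or per-keypoint pixel scale dividing the
        distances before the Gaussian (change 1).
    score_weighting: weight each keypoint by the prediction scores of the
        reference and query instances (change 2).
    """
    scales = np.asarray(keypoint_errors, dtype=float)
    if scales.ndim == 1:
        scales = scales[:, None]  # per-keypoint scale, broadcast over x and y

    def object_keypoint_similarity(ref_instance, query_instance) -> float:
        # (n_keypoints, 2) arrays with NaN for missing keypoints (assumed API).
        ref_pts = ref_instance.points_array
        query_pts = query_instance.points_array
        # Scaled squared distances; NaN wherever either keypoint is missing.
        d2 = np.sum(((ref_pts - query_pts) / scales) ** 2, axis=-1)
        sims = np.exp(-d2)
        if score_weighting:
            # Uncertain keypoints contribute less (change 2); `.scores` is
            # assumed to hold per-keypoint prediction scores.
            sims = sims * ref_instance.scores * query_instance.scores
        # nansum treats missing keypoints as similarity 0; normalize by the
        # total number of keypoints, not the reference's visible count (change 3).
        return np.nansum(sims) / len(sims)

    return object_keypoint_similarity
```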
I didn't create a CLI option to change point 3, but it can easily be added. Implementing points 1 and 3 dramatically improved the tracking.
Codecov Report
Attention: Patch coverage is 76.25000% with 19 lines in your changes missing coverage. Please review.
Project coverage is 74.33%. Comparing base (7ed1229) to head (207d749). Report is 22 commits behind head on develop.
| Files | Patch % | Lines |
|---|---|---|
| sleap/nn/tracker/components.py | 74.35% | 10 Missing :warning: |
| sleap/nn/tracking.py | 80.55% | 7 Missing :warning: |
| sleap/gui/learning/runners.py | 60.00% | 2 Missing :warning: |
Additional details and impacted files
```diff
@@             Coverage Diff             @@
##           develop    #1003      +/-   ##
===========================================
+ Coverage    73.30%   74.33%    +1.02%
===========================================
  Files          134      135        +1
  Lines        24087    24705      +618
===========================================
+ Hits         17658    18364      +706
+ Misses        6429     6341       -88
```
I added the option to change normalization_keypoints from the CLI at least.
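A hypothetical invocation could look like the following; the flag names are assumptions based on sleap-track's `--tracking.<parameter>` convention and may not match the merged option names exactly:

```
sleap-track video.mp4 -m path/to/model \
    --tracking.tracker flow \
    --tracking.similarity object_keypoint \
    --tracking.normalization_keypoints all
```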
Just a small comment, but the stack widget is very annoying. When I select "object keypoint", I don't see the options; the only way to see them is to increase the window size vertically. But because the window is already tall, I first have to deselect the model, increase the window size, fill in the options, and then select the model again.
Maybe adding a scroll widget to the window would solve the problem? Otherwise I think it's more usable to add the options below like in my original proposal. What do you think?
I think you are right - I also dislike how large the inference GUI has become. I like the organization of the stack widget, but adding a scroll widget is definitely a better idea than hardcoding a minimum size.
Hi folks, are we ready to merge this? Do we need to change the UI a bit more still?
Hi, I reverted the GUI to the original proposal (without the stacked widget), which was more convenient (or less annoying). I think this is good to go now!
But maybe there should be a revamp of the tracking window (in another PR) because it has grown quite a bit with my PRs :D Maybe the Kalman filter part could go, as it is not working great. Or the tracking options could move to another tab, like the inference options...
Hi @getzze,
Yes, you are adding too many features for the GUI to handle! kudos 😎 The hold-up to merge this has indeed been displaying all the new features. I like your proposals for re-organizing the Training/Inference Pipeline dialog. Also agreed that those should be handled in a different PR.
I am going to hold off on merging this until after our next release (I'd like both the Training dialog and this PR to be included in the same release, but the revamping won't be happening prior to the long overdue 1.3.0). Aiming to get 1.3.0 out by the end of this week, then I will be working on the much needed GUI revamping to accompany this PR.
Thanks! Liezl
Hey @roomrys, I just wanted to bump this PR, as it is very useful (at least to me), so I would like to see it in the main branch. Thanks!
Walkthrough
The recent changes boost the functionality and flexibility of the SLEAP tracking framework. Key updates include the addition of parameters for object keypoint similarity, enhancing accuracy in keypoint assessments. The codebase also saw structural improvements for better parameter handling and testing, leading to a more robust and maintainable tracking process.
Changes
| Files | Change Summary |
|---|---|
| sleap/config/pipeline_form.yaml | Added parameters for object keypoint similarity in the inference section; updated similarity options. |
| sleap/gui/learning/runners.py | Enhanced make_predict_cli_call to include new tracking parameters; improved space handling. |
| sleap/nn/tracker/components.py | Introduced factory_object_keypoint_similarity function for keypoint similarity calculations; added logging for error handling. |
| sleap/nn/tracking.py | Modified get_candidates and related methods to support max_tracking parameter; added OKS options. |
| tests/fixtures/datasets.py | Added centered_pair_predictions_sorted fixture for sorting labeled frames. |
| tests/nn/test_inference.py | Updated tests to utilize the new similarity method and sorted predictions. |
| tests/nn/test_tracker_components.py | Added a new tracking test function and modified existing tests for enhanced parameter handling. |
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant Tracker
    participant SimilarityFunction
    User->>CLI: Invoke tracking command with parameters
    CLI->>Tracker: Process parameters (including OKS options)
    Tracker->>SimilarityFunction: Calculate similarity based on keypoints
    SimilarityFunction-->>Tracker: Return calculated similarity
    Tracker-->>CLI: Provide tracking results
    CLI-->>User: Display tracking results
```
Hi @roomrys @talmo, I rebased this PR; I hope it will make it into version 1.4 (it's the oldest open PR now!!). The UI problem has been solved, so nothing should block it.
Tests in tests/nn/test_tracker_components.py were not passing because of bugs introduced by the new max_tracks feature, which was actually not tested. Now max_tracks is also tested and fixed.
Cheers
The max_tracks bugs happen when calling Tracker.make_tracker_by_name, which should rarely happen, even when scripting. They are due to mismatches between the tracker, max_tracks, and max_tracking arguments to this method. Because of how the command line and GUI process the inputs, invalid combinations were not possible there, but it's safer to correct them in Tracker as well.
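Purely to illustrate the kind of reconciliation meant here (the actual corrections live in Tracker.make_tracker_by_name; this helper and its rules are assumptions, not the shipped logic):

```python
def reconcile_tracker_args(tracker: str, max_tracking: bool, max_tracks):
    """Illustrative only: coerce inconsistent tracker/max_tracking/max_tracks
    combinations into a valid state instead of failing later."""
    if tracker.lower() == "none":
        # No tracking requested: capping options are meaningless.
        return tracker, False, None
    if max_tracks is not None and not max_tracking:
        # A track cap was given, so capped tracking must be enabled.
        max_tracking = True
    if max_tracking and max_tracks is None:
        # Capped tracking without a cap cannot work; disable the cap.
        max_tracking = False
    return tracker, max_tracking, max_tracks
```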
Thanks!