Fix duplicate skeletons during labels merge
Description
This PR fixes the duplicate skeleton issue when merging labels files. After every update to the labels file, we check whether an existing skeleton matches the new skeleton associated with an instance in the labeled frame. Only if no existing skeleton matches do we add the new skeleton to the list of skeletons in the Labels object.
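A minimal sketch of that check (hypothetical helper; the `matches()` comparison and attribute names stand in for whatever `sleap/io/dataset.py` actually uses):

```python
def add_or_reuse_skeleton(labels, new_skeleton):
    """Add new_skeleton to labels.skeletons only if no existing one matches.

    Sketch only: assumes Skeleton.matches() does a structural comparison,
    mirroring the duplicate check described above.
    """
    for existing in labels.skeletons:
        if existing.matches(new_skeleton):
            # A matching skeleton already exists; reuse it so the
            # instance does not introduce a duplicate.
            return existing
    labels.skeletons.append(new_skeleton)
    return new_skeleton
```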
Types of changes
- [x] Bugfix
- [ ] New feature
- [ ] Refactor / Code style update (no logical changes)
- [ ] Build / CI changes
- [ ] Documentation Update
- [ ] Other (explain)
Does this address any currently open issues?
- #2025
- #1090
- #713
Outside contributors checklist
- [ ] Review the guidelines for contributing to this repository
- [ ] Read and sign the CLA and add yourself to the authors list
- [ ] Make sure you are making a pull request against the develop branch (not main), and that your branch is based off develop
- [ ] Add tests that prove your fix is effective or that your feature works
- [ ] Add necessary documentation (if appropriate)
Thank you for contributing to SLEAP!
:heart:
Summary by CodeRabbit
- Refactor
- Enhanced the internal label management for more reliable merging and display.
- New Features
- Improved the label import process, resulting in more accurate grouping and consolidation.
- Streamlined the export flow by automatically applying default filenames, removing the need for manual file selection.
- Tests
- Updated tests to align with the revised label import behaviors.
- Adjusted assertions in tests to reflect changes in expected track counts.
- Removed the test_dont_unify_skeletons test, which asserted the old non-merging skeleton behavior.
Walkthrough
This update refines the label update process in the dataset module and adjusts GUI command tests. In the dataset code, the merging logic for skeletons, nodes, and tracks has been reorganized for clarity and reliability. Additionally, the expected track count for DeepLabCut imports has been lowered from 3 to 2, and the test_dont_unify_skeletons test has been removed, reflecting the new skeleton-merging behavior.
Changes
| File(s) | Change Summary |
|---|---|
| sleap/io/dataset.py | Modified the _update_from_labels method in the Labels class to update skeletons only when empty and to add a merge block when the merge flag is set. Simplified node updates by removing merge logic, and streamlined track merging (see the sketch after the table). Also includes minor code cleanup for clarity. |
| tests/gui/test_commands.py | Updated the expected track count in test_import_labels_from_dlc_folder (from 3 to 2). |
| tests/io/test_dataset.py | Removed the test_dont_unify_skeletons function, which tested the behavior of the Labels class regarding skeleton unification. |
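For the track side, here is a sketch of the kind of deduplication that explains the 3 → 2 change in the DLC import test (assuming tracks are matched by name, which may differ from the actual criterion in sleap/io/dataset.py):

```python
def merge_tracks(existing_tracks, new_tracks):
    """Combine two track lists, dropping name-based duplicates.

    Hypothetical helper: assumes Track objects expose a .name attribute
    and that two same-named tracks should be treated as one.
    """
    by_name = {track.name: track for track in existing_tracks}
    for track in new_tracks:
        # Keep the first track seen for each name; skip duplicates.
        by_name.setdefault(track.name, track)
    return list(by_name.values())
```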
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant L as Labels Instance
    participant S as Skeletons List
    participant N as Nodes List
    participant T as Tracks List
    L->>L: _update_from_labels(merge)
    alt Skeleton list is empty
        L->>S: Create new skeletons
    else merge flag is true
        L->>S: Check and merge duplicate skeletons
    end
    alt Nodes list is empty
        L->>N: Build nodes from skeletons
    end
    alt Tracks list is empty
        L->>T: Update and merge tracks
    end
```
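In code, the branching shown in the diagram roughly corresponds to the following (a hedged sketch; the helper methods marked hypothetical are illustrative, not the literal implementation in sleap/io/dataset.py):

```python
def _update_from_labels(self, merge: bool = False):
    """Sync skeletons, nodes, and tracks from the labeled frames (sketch)."""
    if not self.skeletons:
        # First population: collect skeletons directly from instances.
        self.skeletons = self._collect_skeletons()  # hypothetical helper
    elif merge:
        # Merge mode: fold structurally matching skeletons together
        # instead of appending duplicates.
        self._merge_duplicate_skeletons()  # hypothetical helper

    if not self.nodes:
        # Rebuild the node list from the (possibly merged) skeletons.
        self.nodes = [node for skel in self.skeletons for node in skel.nodes]

    if not self.tracks:
        # Gather tracks from labeled frames and merge duplicates.
        self.tracks = self._collect_and_merge_tracks()  # hypothetical helper
```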
Poem
In the code garden, I happily hop,
Updating skeletons till the bugs all stop.
Nodes and tracks align in a row,
Merging logic makes the clean code glow.
Hoppity changes from a rabbit with a techy heart 🐇💻!
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📥 Commits
Reviewing files that changed from the base of the PR and between d497566bb331208540d104684abc442deaeb87e3 and 74455e24b178081275b9a31d474a6a325980553b.
📒 Files selected for processing (2)
- sleap/io/dataset.py (2 hunks)
- tests/gui/test_commands.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/gui/test_commands.py
- sleap/io/dataset.py
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Tests (macos-14)
- GitHub Check: Tests (windows-2022)
- GitHub Check: Tests (ubuntu-22.04)
Codecov Report
:x: Patch coverage is 97.14286% with 1 line in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 76.15%. Comparing base (7991f14) to head (74455e2).
:warning: Report is 181 commits behind head on develop.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| sleap/io/dataset.py | 97.14% | 1 Missing :warning: |
Additional details and impacted files
```diff
@@             Coverage Diff             @@
##           develop    #2075      +/-   ##
===========================================
+ Coverage    75.43%   76.15%   +0.71%
===========================================
  Files          134      134
  Lines        24749    25050     +301
===========================================
+ Hits         18670    19077     +407
+ Misses        6079     5973     -106
```
:umbrella: View full report in Codecov by Sentry.