lightning-flash
Updated the learn2learn "image_classification_imagenette_mini" example
What does this PR do?
There were some problems with downloading the dataset, as well as with defining the ImageClassificationInputTransform for the ImageClassificationData, so I have remade the tutorial for easy integration of learn2learn with Flash.
Fixes #1376
Before submitting
- [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
- [x] Did you read the contributor guideline, Pull Request section?
- [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
- [x] Did you make sure to update the documentation with your changes?
- [x] Did you write any new necessary tests? [not needed for typos/docs]
- [ ] Did you verify new and existing tests pass locally with your changes?
- [ ] If you made a notable change (that affects users), did you update the CHANGELOG?
PR review
- [x] Is this pull request ready for review? (if not, please submit in draft mode)
Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
Did you have fun?
Definitely! Make sure you had fun coding 🙃
Check out this pull request on ReviewNB.
See visual diffs & provide feedback on Jupyter Notebooks.
Powered by ReviewNB
Codecov Report
Merging #1383 (5897183) into master (0e21259) will increase coverage by 2.51%. The diff coverage is n/a.
```diff
@@            Coverage Diff             @@
##           master    #1383      +/-   ##
==========================================
+ Coverage   90.38%   92.90%   +2.51%
==========================================
  Files         286      286
  Lines       12861    12861
==========================================
+ Hits        11625    11949     +324
+ Misses       1236      912     -324
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 92.90% <ø> (+2.51%) | :arrow_up: |

Flags with carried forward coverage won't be shown.
| Impacted Files | Coverage Δ | |
|---|---|---|
| flash/text/question_answering/model.py | 93.19% <0.00%> (-1.37%) | :arrow_down: |
| flash/core/classification.py | 95.15% <0.00%> (+0.60%) | :arrow_up: |
| flash/core/adapter.py | 97.14% <0.00%> (+1.42%) | :arrow_up: |
| flash/core/data/utilities/loading.py | 98.52% <0.00%> (+1.47%) | :arrow_up: |
| flash/core/data/utilities/paths.py | 92.53% <0.00%> (+1.49%) | :arrow_up: |
| flash/core/data/data_module.py | 95.27% <0.00%> (+1.71%) | :arrow_up: |
| flash/image/classification/model.py | 81.03% <0.00%> (+1.72%) | :arrow_up: |
| flash/image/data.py | 100.00% <0.00%> (+3.50%) | :arrow_up: |
| flash/core/model.py | 91.42% <0.00%> (+3.57%) | :arrow_up: |
| flash/image/segmentation/heads.py | 95.65% <0.00%> (+4.34%) | :arrow_up: |
| ... and 28 more | | |
Hi, @uakarsh - Thank you for the PR, I appreciate you taking a look at it. Apologies for the delay in review.
Would you mind converting this into a script instead? We usually prefer having Python scripts in flash_examples, and any notebooks currently go in the flash_notebooks folder. This will also help us review the script and add suggestions.
A few points:
- I guess you changed the batch sizes to 1 for testing your PR, so can you revert those parameters to the original? (unless there was another reason that I'm missing)
- Any reason you changed the transforms applied for the training stage by adding `train_per_sample_transform`? Just curious, I noticed that we didn't have those before (please correct me if I'm wrong).
- I see that you have gotten rid of the `ddp_shared` accelerator, which I think makes sense for examples. cc: @ethanwharris - I think it's a good idea to not have multi-GPU enabled by default in examples?
Also, just FYI:
We made a few changes to the transform API. If I'm not wrong, it now takes `transform=ImageClassificationInputTransform` in your `ImageClassificationData.from_tensors` call (instead of the `train_transform` and `val_transform` kwargs). Whether you need to make this change in this PR or not depends on which release this PR lands in. I think @ethanwharris can comment more on this. :)
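To illustrate the shape of that API change, here is a minimal, self-contained sketch. The class and function below are stand-ins inferred from the discussion, not the verified Flash signatures: the point is only that one transform object bundling per-stage hooks replaces the separate per-stage kwargs.

```python
# Hypothetical illustration of the API change discussed above: a single
# `transform` kwarg carrying all stage-specific transforms, instead of
# separate `train_transform` / `val_transform` kwargs. These names mimic
# Flash's style but are NOT the real Flash API.

class ImageClassificationInputTransform:
    """Stand-in for an InputTransform: bundles per-stage transforms."""

    def train_per_sample_transform(self):
        return "train-augmentations"

    def val_per_sample_transform(self):
        return "val-preprocessing"


def from_tensors_new_style(transform=None, **kwargs):
    """Mimics the new-style constructor: one transform class for all stages."""
    t = transform() if transform is not None else None
    return {
        "train": t.train_per_sample_transform() if t else None,
        "val": t.val_per_sample_transform() if t else None,
    }


# New style: pass the transform class once, rather than per-stage kwargs.
datamodule = from_tensors_new_style(transform=ImageClassificationInputTransform)
```

With the older API, the same per-stage transforms would have been passed individually; the single-object form keeps them grouped and lets the DataModule select the right one per stage.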
Sure, I can convert it into a script
For your questions:
- The reason for using batch size 1 is that I was using Amazon SageMaker Studio, and a batch size of more than 1 would lead to a CUDA out-of-memory error, so I had no option but to use a batch size of 1 to make the code run.
- It was taken from here.
- I removed it because I think it is not useful for people having only a single GPU (I rely only on Colab/Kaggle/SageMaker Studio, so for that purpose I removed the DDP strategy while coding/removing bugs).
Definitely, it depends on the Flash version installed, but when I just did pip install, it threw an error about `train_transform` and `validation_transform`, while in the GitHub repo it was just `transform`. Maybe the main branch is not deployed yet, correct me if I am wrong.
Hey @uakarsh instead of creating a script from scratch, could you just apply your changes to the script here: https://github.com/Lightning-AI/lightning-flash/blob/master/flash_examples/integrations/learn2learn/image_classification_imagenette_mini.py
Happy for you to drop the ddp_shared, looks like a typo anyway as I guess it was meant to be ddp_sharded, not too sure.
Hope that helps :smiley:
Hey, @uakarsh - Any updates on this? :) Let us know if you need any help.
Would surely do it today. Got a bit engaged in my own work on using PyTorch Lightning for speech recognition.
Update: Have updated the script (along with instructions for downloading the dataset) and removed the notebook.
Sure, I have added the entry for the update in CHANGELOG.md. The change is here. Is there any need to create a separate PR for the same update, i.e. in CHANGELOG.md?
Failing tests are unrelated to this PR. cc: @ethanwharris - can you please merge this whenever you are back?
@uakarsh - Apologies for the delay in getting this merged. The CI was failing for a couple of weeks, and we had to prioritize fixing it! As always, we appreciate your contribution. 🎉
Also, the CHANGELOG entry can just go in this PR! I've pushed the entry from your commit to this branch, hope that's fine! :)
Hi @krshrimali, thanks for merging this. This is my first contribution to any issue on Github, so thanks for giving me this chance and looking forward to helping/resolving more issues.
The first contribution is always a memory! I still remember the exact time when I got my first PR merged, wish you the best! If you ever find an issue exciting to work on, feel free to ping me, and I'll be happy to help wherever I can 🎉 ⚡