MART
Update torch requirement from ~=1.13.1 to ~=2.0.1
You can trigger a rebase of this PR by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
We should also upgrade to the latest template: https://github.com/ashleve/lightning-hydra-template/compare/v1.4.0...v2.0.2
Notable changes:
- `pytorch_lightning` -> `lightning.pytorch`
- use `lightning.fabric` for TPU (see the sketch after this list)
- upgrade to hydra 1.3
- upgrade toolkits in `pre-commit`
- config `datamodule` -> `data`
- add a `cpu` trainer config
- move `src.tasks` to `src`
- stop exporting extra log after a task exception
- add `aim` as a logger
- split `src.utils.utils` into several .py files
- update `.gitignore` for the `aim` logger
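For the `lightning.fabric` item, here is a minimal sketch of a Fabric-driven training step. The toy model, random data, and `accelerator="auto"` are illustrative assumptions, not MART code; the template wires Fabric up for TPU instead.

```python
import torch
from torch import nn
from lightning.fabric import Fabric

# Hypothetical toy setup; accelerator="auto" stands in for the template's TPU config.
fabric = Fabric(accelerator="auto", devices=1)
fabric.launch()

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model, optimizer = fabric.setup(model, optimizer)  # move model/optimizer to the device

x, y = fabric.to_device((torch.randn(8, 4), torch.randn(8, 1)))
loss = nn.functional.mse_loss(model(x), y)
fabric.backward(loss)  # replaces loss.backward() so Fabric can handle precision/strategy
optimizer.step()
optimizer.zero_grad()
```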
Changes in `pytorch-lightning ~= 1.6.5` -> `lightning ~= 2.0.2`:
- `import pytorch_lightning as pl` -> `from lightning import pytorch as pl`
- `LightningModule`: `training_epoch_end(self, outputs)` -> `on_train_epoch_end(self)` (see the sketch after this list)
- `LightningModule`: remove `training_step_end()`
- `LightningModule`: change the arguments of `configure_gradient_clipping()`
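A minimal migration sketch for the `training_epoch_end` -> `on_train_epoch_end` change; the model, loss, and metric names are placeholders, not MART code:

```python
import torch
from torch import nn
from lightning import pytorch as pl  # was: import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 1)
        # 2.0 no longer passes `outputs` to the epoch-end hook, so cache them here.
        self.training_step_outputs = []

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        self.training_step_outputs.append(loss.detach())
        return loss

    # was: def training_epoch_end(self, outputs): ...
    def on_train_epoch_end(self):
        epoch_mean = torch.stack(self.training_step_outputs).mean()
        self.log("train/loss_epoch", epoch_mean)
        self.training_step_outputs.clear()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)
```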
`lightning 2.0.2` depends on `torchmetrics >=0.7.0, <2.0`. However, we want to keep `torchmetrics == 0.6.0` because mAP is super slow in later versions. I hope torchmetrics will change the backend of mAP in an upcoming release.
Changes in `torchmetrics == 0.6.0` -> `torchmetrics == 0.11.4`:
- `Accuracy` requires `num_classes` in its arguments (see the sketch after this list).
- `torchmetrics.detection.MAP` -> `torchmetrics.detection.mean_ap.MeanAveragePrecision`
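A before/after sketch of these two changes under torchmetrics 0.11.4; the tensors and box values are made-up toy data, not MART code:

```python
import torch
from torchmetrics.classification import Accuracy
# was: from torchmetrics.detection import MAP
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# Accuracy now needs the task (and num_classes for multiclass) spelled out.
accuracy = Accuracy(task="multiclass", num_classes=3)
print(accuracy(torch.tensor([0, 2, 1]), torch.tensor([0, 1, 1])))

# MeanAveragePrecision replaces the old MAP class.
map_metric = MeanAveragePrecision()
preds = [{
    "boxes": torch.tensor([[0.0, 0.0, 10.0, 10.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([0]),
}]
target = [{
    "boxes": torch.tensor([[0.0, 0.0, 10.0, 10.0]]),
    "labels": torch.tensor([0]),
}]
map_metric.update(preds, target)
print(map_metric.compute()["map"])
```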
We pinned `torchmetrics == 0.6.0` because `MeanAveragePrecision` is super slow in newer versions. It looks like they're finally going to revert to the original implementation that uses the COCO API: https://github.com/Lightning-AI/torchmetrics/issues/1024.
Should this be closed @mzweilin?