AdaFace
AttributeError: 'MultiStepLR' object has no attribute 'get_epoch_values'
AdaFace with the following property
self.m 0.4
self.h 0.333
self.s 64.0
self.t_alpha 0.01
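(For reference, these are the margin-head hyperparameters printed when the head is constructed. Below is a purely hypothetical sketch of building a head with these values; the import path, class name, and the embedding_size/classnum arguments are assumptions for illustration, not necessarily this repo's exact API.)

```python
# Hypothetical sketch only: instantiate an AdaFace-style margin head with the
# hyperparameters printed above. Import path and argument names are assumptions.
from head import AdaFace

num_classes = 10_000          # placeholder: number of identities in the training set

adaface_head = AdaFace(
    embedding_size=512,       # assumption: backbone outputs 512-d embeddings
    classnum=num_classes,
    m=0.4,                    # margin (self.m in the printout above)
    h=0.333,                  # self.h
    s=64.0,                   # feature scale (self.s)
    t_alpha=0.01,             # EMA momentum for feature-norm statistics (self.t_alpha)
)
```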
Global seed set to 42
/home/administrator/anaconda3/envs/py388/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:441: LightningDeprecationWarning: Setting Trainer(gpus=1) is deprecated in v1.7 and will be removed in v2.0. Please use Trainer(accelerator='gpu', devices=1) instead.
rank_zero_deprecation(
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Trainer(limit_train_batches=1.0) was configured so 100% of the batches per epoch will be used..
Trainer(val_check_interval=1.0) was configured so validation will run at the end of the training epoch..
start training
making validation data memfile
[rank: 0] Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
creating train dataset
record file length 490623
creating val dataset
laoding validation data memfile
laoding validation data memfile
laoding validation data memfile
laoding validation data memfile
laoding validation data memfile
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
  | Name               | Type             | Params
--------------------------------------------------------
0 | model              | Backbone         | 43.6 M
1 | head               | AdaFace          | 5.4 M
2 | cross_entropy_loss | CrossEntropyLoss | 0
--------------------------------------------------------
49.0 M    Trainable params
0         Non-trainable params
49.0 M    Total params
97.997    Total estimated model params size (MB)
Sanity Checking DataLoader 0: 100%|██████████| 16/16 [00:01<00:00, 9.94it/s]
/home/administrator/anaconda3/envs/py388/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:233: UserWarning: You called self.log('agedb_30_num_val_samples', ...) in your validation_epoch_end but the value needs to be floating point. Converting it to torch.float32.
  warning_cache.warn(
/home/administrator/anaconda3/envs/py388/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:233: UserWarning: You called self.log('epoch', ...) in your validation_epoch_end but the value needs to be floating point. Converting it to torch.float32.
  warning_cache.warn(
/home/administrator/anaconda3/envs/py388/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:537: PossibleUserWarning: It is recommended to use self.log('agedb_30_val_acc', ..., sync_dist=True) when logging on epoch level in distributed setting to accumulate the metric across devices.
  warning_cache.warn(
/home/administrator/anaconda3/envs/py388/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:537: PossibleUserWarning: It is recommended to use self.log('agedb_30_best_threshold', ..., sync_dist=True) when logging on epoch level in distributed setting to accumulate the metric across devices.
  warning_cache.warn(
/home/administrator/anaconda3/envs/py388/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:537: PossibleUserWarning: It is recommended to use self.log('agedb_30_num_val_samples', ..., sync_dist=True) when logging on epoch level in distributed setting to accumulate the metric across devices.
  warning_cache.warn(
/home/administrator/anaconda3/envs/py388/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:537: PossibleUserWarning: It is recommended to use self.log('val_acc', ..., sync_dist=True) when logging on epoch level in distributed setting to accumulate the metric across devices.
  warning_cache.warn(
/home/administrator/anaconda3/envs/py388/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:537: PossibleUserWarning: It is recommended to use self.log('epoch', ..., sync_dist=True) when logging on epoch level in distributed setting to accumulate the metric across devices.
  warning_cache.warn(
Epoch 0:   0%|          | 0/8635 [00:00<?, ?it/s]
Traceback (most recent call last):
File "main.py", line 109, in
Is it a pytorch-lightning error?
The same error occurs for me.
Use lr = scheduler.get_last_lr()[0] instead of lr = scheduler.get_epoch_values(self.current_epoch)[0]; it worked for me :)
Yup, get_epoch_values() is deprecated.
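For anyone else hitting this: get_last_lr() is the standard replacement on current PyTorch. A minimal runnable sketch of the change; the stand-in model, optimizer, and milestone values here are illustrative, not the repo's exact training code:

```python
import torch

# Stand-in model/optimizer just to build a MultiStepLR like the training code does.
model = torch.nn.Linear(512, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[12, 20, 24], gamma=0.1)

# Old line (raises AttributeError: 'MultiStepLR' object has no attribute 'get_epoch_values'):
# lr = scheduler.get_epoch_values(self.current_epoch)[0]

# Replacement: get_last_lr() returns the most recent learning rate per param group.
lr = scheduler.get_last_lr()[0]
print("current lr:", lr)
```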