AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
Hello, I tried to run autoalbument on the shipped CIFAR-10 example:
autoalbument-search --config-dir /home/unix/own/autoalbument/examples/cifar10/
The search crashes with the AttributeError from the title. Can someone provide hints on how to fix this problem? The full output is below.
/home/unix/miniconda3/lib/python3.8/site-packages/autoalbument/cli/search.py:15: UserWarning: register_resolver() is deprecated.
See https://github.com/omry/omegaconf/issues/426 for migration instructions.
OmegaConf.register_resolver("config_dir", get_config_dir)
/home/unix/miniconda3/lib/python3.8/site-packages/hydra/core/utils.py:143: UserWarning: register_resolver() is deprecated.
See https://github.com/omry/omegaconf/issues/426 for migration instructions.
OmegaConf.register_resolver(name, f)
_version: 2
task: classification
policy_model:
  task_factor: 0.1
  gp_factor: 10
  temperature: 0.05
  num_sub_policies: 100
  num_chunks: 8
  operation_count: 4
  operations:
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.ShiftRGB
    shift_r: true
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.ShiftRGB
    shift_g: true
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.ShiftRGB
    shift_b: true
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.RandomBrightness
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.RandomContrast
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.Solarize
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.HorizontalFlip
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.VerticalFlip
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.Rotate
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.ShiftX
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.ShiftY
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.Scale
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.CutoutFixedNumberOfHoles
  - _target_: autoalbument.faster_autoaugment.models.policy_operations.CutoutFixedSize
classification_model:
  _target_: model.Cifar10ClassificationModel
  num_classes: _MISSING_
  architecture: resnet18
  pretrained: false
data:
  dataset:
    _target_: dataset.Cifar10SearchDataset
    root: ~/data/cifar10
    train: true
    download: true
  input_dtype: uint8
  preprocessing: null
  normalization:
    mean:
    - 0.4914
    - 0.4822
    - 0.4465
    std:
    - 0.247
    - 0.243
    - 0.261
  dataloader:
    _target_: torch.utils.data.DataLoader
    batch_size: 128
    shuffle: true
    num_workers: 8
    pin_memory: true
    drop_last: true
searcher:
  _target_: autoalbument.faster_autoaugment.search.FasterAutoAugmentSearcher
trainer:
  _target_: pytorch_lightning.Trainer
  gpus: 0
  benchmark: true
  max_epochs: 40
  resume_from_checkpoint: null
optim:
  main:
    _target_: torch.optim.Adam
    lr: 0.001
    betas:
    - 0
    - 0.999
  policy:
    _target_: torch.optim.Adam
    lr: 0.001
    betas:
    - 0
    - 0.999
callbacks:
- _target_: autoalbument.callbacks.MonitorAverageParameterChange
- _target_: autoalbument.callbacks.SavePolicy
- _target_: pytorch_lightning.callbacks.ModelCheckpoint
  save_last: true
  dirpath: checkpoints
logger:
  _target_: pytorch_lightning.loggers.TensorBoardLogger
  save_dir: /home/unix/own/autoalbument/examples/cifar10/outputs/2021-06-13/11-37-52/tensorboard_logs
seed: 42
Working directory: /home/unix/own/autoalbument/examples/cifar10/outputs/2021-06-13/11-37-52
/home/unix/miniconda3/lib/python3.8/site-packages/hydra/utils.py:32: UserWarning: `OmegaConf.is_none()` is deprecated, see https://github.com/omry/omegaconf/issues/547
if OmegaConf.is_none(config):
[2021-06-13 11:37:54,846][pytorch_lightning.utilities.seed][INFO] - Global seed set to 42
/home/unix/miniconda3/lib/python3.8/site-packages/hydra/utils.py:32: UserWarning: `OmegaConf.is_none()` is deprecated, see https://github.com/omry/omegaconf/issues/547
if OmegaConf.is_none(config):
[2021-06-13 11:37:56,647][autoalbument.faster_autoaugment.datamodule][INFO] - Preprocessing transform:
Compose([
  Normalize(always_apply=False, p=1.0, mean=[0.4914, 0.4822, 0.4465], std=[0.247, 0.243, 0.261], max_pixel_value=255),
  ToTensorV2(always_apply=True, p=1.0, transpose_mask=True),
], p=1.0, bbox_params=None, keypoint_params=None, additional_targets={})
/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:50: UserWarning: ModelCheckpoint(save_last=True, monitor=None) is a redundant configuration. You can save the last checkpoint with ModelCheckpoint(save_top_k=None, monitor=None).
warnings.warn(*args, **kwargs)
[2021-06-13 11:37:56,666][pytorch_lightning.utilities.distributed][INFO] - GPU available: False, used: False
[2021-06-13 11:37:56,666][pytorch_lightning.utilities.distributed][INFO] - TPU available: None, using: 0 TPU cores
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorboard/compat/__init__.py", line 42, in tf
    from tensorboard.compat import notf  # noqa: F401
ImportError: cannot import name 'notf' from 'tensorboard.compat' (/home/unix/miniconda3/lib/python3.8/site-packages/tensorboard/compat/__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py", line 204, in log_metrics
    self.experiment.add_scalar(k, v, step)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loggers/base.py", line 39, in experiment
    return get_experiment() or DummyExperiment()
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 40, in wrapped_fn
    return fn(*args, **kwargs)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loggers/base.py", line 38, in get_experiment
    return fn(self)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py", line 145, in experiment
    self._experiment = SummaryWriter(log_dir=self.log_dir, **self._kwargs)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 220, in __init__
    self._get_file_writer()
  File "/home/unix/miniconda3/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 250, in _get_file_writer
    self.file_writer = FileWriter(self.log_dir, self.max_queue,
  File "/home/unix/miniconda3/lib/python3.8/site-packages/torch/utils/tensorboard/writer.py", line 60, in __init__
    self.event_writer = EventFileWriter(
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorboard/summary/writer/event_file_writer.py", line 72, in __init__
    if not tf.io.gfile.exists(logdir):
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorboard/lazy.py", line 65, in __getattr__
    return getattr(load_once(self), attr_name)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorboard/lazy.py", line 97, in wrapper
    cache[arg] = f(arg)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorboard/lazy.py", line 50, in load_once
    module = load_fn()
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorboard/compat/__init__.py", line 45, in tf
    import tensorflow
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorflow/__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorflow/python/__init__.py", line 84, in <module>
    from tensorflow.python import keras
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/__init__.py", line 27, in <module>
    from tensorflow.python.keras import models
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 24, in <module>
    from tensorflow.python.keras import metrics as metrics_module
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/metrics.py", line 37, in <module>
    from tensorflow.python.keras.engine import base_layer
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 51, in <module>
    from tensorflow.python.keras import initializers
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/initializers/__init__.py", line 127, in <module>
    populate_deserializable_objects()
  File "/home/unix/miniconda3/lib/python3.8/site-packages/tensorflow/python/keras/initializers/__init__.py", line 85, in populate_deserializable_objects
    generic_utils.populate_dict_with_module_objects(
AttributeError: module 'tensorflow.python.keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/unix/miniconda3/lib/python3.8/site-packages/autoalbument/cli/search.py", line 55, in main
    searcher.search()
  File "/home/unix/miniconda3/lib/python3.8/site-packages/autoalbument/faster_autoaugment/search.py", line 65, in search
    self.trainer.fit(self.model, datamodule=self.datamodule)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 510, in fit
    results = self.accelerator_backend.train()
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 56, in train
    self.trainer.setup_trainer(self.trainer.model)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 442, in setup_trainer
    self.logger.log_hyperparams(ref_model.hparams_initial)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 40, in wrapped_fn
    return fn(*args, **kwargs)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py", line 183, in log_hyperparams
    self.log_metrics(metrics, 0)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 40, in wrapped_fn
    return fn(*args, **kwargs)
  File "/home/unix/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py", line 207, in log_metrics
    type(e)(e.message + m)
AttributeError: 'AttributeError' object has no attribute 'message'
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
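For context on where I think the problem lies (my assumption, not verified): as far as I can tell from the traceback, autoalbument never imports TensorFlow directly; pytorch_lightning's TensorBoardLogger goes through tensorboard, and tensorboard lazily imports the real tensorflow package whenever one is installed. The AttributeError about populate_dict_with_module_objects would then point at a broken or mismatched TensorFlow/Keras installation in my environment rather than at autoalbument itself. A minimal check, outside of autoalbument:

# Hypothetical diagnostic sketch, assuming the TensorFlow install itself is broken.
# If this bare import raises the same populate_dict_with_module_objects
# AttributeError, the environment is at fault and autoalbument is not involved.
import tensorflow
print(tensorflow.__version__)

If that is confirmed, would uninstalling TensorFlow (pip uninstall tensorflow), or force-reinstalling a consistent version, be a reasonable workaround? As far as I understand, tensorboard falls back to its bundled tensorflow_stub when no TensorFlow is installed, so the TensorBoard logger should still be able to write event files.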