NameError: name 'T' is not defined
🐛 Bug
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 datamodule = AudioClassificationData.from_folders(
2 train_folder="/workspaces/audio-ml/data/i/urban8k_images/train",
3 val_folder="/workspaces/audio-ml/data/i/urban8k_images/val",
4 transform_kwargs=dict(spectrogram_size=(64, 64)),
5 batch_size=4,
6 )
File /usr/local/lib/python3.10/site-packages/flash/audio/classification/data.py:286, in AudioClassificationData.from_folders(cls, train_folder, val_folder, test_folder, predict_folder, train_transform, val_transform, test_transform, predict_transform, target_formatter, input_cls, transform_kwargs, **data_module_kwargs)
181 """Load the :class:`~flash.audio.classification.data.AudioClassificationData` from folders containing
182 spectrogram images.
183
(...)
277 >>> shutil.rmtree("predict_folder")
278 """
280 ds_kw = dict(
281 target_formatter=target_formatter,
282 transform_kwargs=transform_kwargs,
283 input_transforms_registry=cls.input_transforms_registry,
284 )
--> 286 train_input = input_cls(RunningStage.TRAINING, train_folder, transform=train_transform, **ds_kw)
287 ds_kw["target_formatter"] = getattr(train_input, "target_formatter", None)
289 return cls(
290 train_input,
291 input_cls(RunningStage.VALIDATING, val_folder, transform=val_transform, **ds_kw),
(...)
294 **data_module_kwargs,
295 )
File /usr/local/lib/python3.10/site-packages/flash/core/data/io/input.py:134, in _wrap_init.<locals>.wrapper(self, *args, **kwargs)
132 @functools.wraps(fn)
133 def wrapper(self, *args, **kwargs):
--> 134 fn(self, *args, **kwargs)
135 _validate_input(self)
File /usr/local/lib/python3.10/site-packages/flash/core/data/io/input.py:182, in InputBase.__init__(self, running_stage, transform, transform_kwargs, input_transforms_registry, *args, **kwargs)
171 def __init__(
172 self,
173 running_stage: RunningStage,
(...)
178 **kwargs: Any,
179 ) -> None:
180 from flash.core.data.io.input_transform import create_transform
--> 182 self.transform = create_transform(
183 transform,
184 running_stage,
185 input_transforms_registry or self.input_transforms_registry,
186 transform_kwargs,
187 )
188 super().__init__(running_stage=running_stage)
190 self.data = None
File /usr/local/lib/python3.10/site-packages/flash/core/data/io/input_transform.py:1052, in create_transform(transform, running_stage, input_transforms_registry, transform_kwargs)
1049 return transform
1051 if inspect.isclass(transform) and issubclass(transform, InputTransform):
-> 1052 return transform(running_stage=running_stage, **transform_kwargs)
1054 if isinstance(transform, partial):
1055 return transform(running_stage=running_stage, **transform_kwargs)
File <string>:7, in __init__(self, running_stage, spectrogram_size, time_mask_param, freq_mask_param)
File /usr/local/lib/python3.10/site-packages/flash/core/data/io/input_transform.py:93, in InputTransform.__post_init__(self)
91 self._collate_in_worker_from_transform: Optional[bool] = None
92 self._transform = None
---> 93 self._transform = self._check_transforms(self._resolve_transforms(self.running_stage), self.running_stage)
95 # Hack
96 Properties.__init__(self, running_stage=self.running_stage)
File /usr/local/lib/python3.10/site-packages/flash/core/data/io/input_transform.py:935, in InputTransform._resolve_transforms(self, running_stage)
933 # get associated transform
934 try:
--> 935 fn = getattr(self, method_name)()
936 except AttributeError as e:
937 raise AttributeError(str(e) + ". Hint: Call super().__init__(...) after setting all attributes.")
File /usr/local/lib/python3.10/site-packages/flash/audio/classification/input_transform.py:44, in AudioClassificationInputTransform.train_input_per_sample_transform(self)
41 if self.freq_mask_param is not None:
42 transforms.append(TAudio.FrequencyMasking(freq_mask_param=self.freq_mask_param))
---> 44 transforms += [T.ToTensor(), T.Resize(self.spectrogram_size)]
45 return T.Compose(transforms)
NameError: name 'T' is not defined
```
To Reproduce
Following https://lightning-flash.readthedocs.io/en/latest/reference/audio_classification.html, Step 2 (creating the datamodule) doesn't work.
Code sample
Python notebook: https://github.com/Modius22/audio-ml/blob/main/src/flash-prototyping.ipynb
To reproduce: clone the GitHub repo, open the project in VS Code with the Dev Containers extension, and inside the devcontainer run ./start_jupyter.sh to start the Jupyter server.
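For reference, here is the failing call inlined (equivalent to the call shown in the traceback above; the folder paths are shortened placeholders for the urban8k_images spectrogram folders):

```python
from flash.audio import AudioClassificationData

# Raises NameError: name 'T' is not defined when torchvision is not installed.
datamodule = AudioClassificationData.from_folders(
    train_folder="data/urban8k_images/train",  # placeholder path
    val_folder="data/urban8k_images/val",      # placeholder path
    transform_kwargs=dict(spectrogram_size=(64, 64)),
    batch_size=4,
)
```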
Environment
python:3.10 Docker image used as a devcontainer in VS Code
General project: https://github.com/Modius22/audio-ml
Python notebook: https://github.com/Modius22/audio-ml/blob/main/src/flash-prototyping.ipynb
Hi, @Modius22 - Thanks for raising the issue. Can you check if you have torchvision installed? Try installing it with `pip install torchvision`, and then run the script again.
From the user experience perspective, we should also raise an error if the library is not installed. I'll take a note of it. Thanks!
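For completeness, a quick generic check (not Flash-specific) to confirm torchvision is importable in the same interpreter before re-running the cell:

```python
# Confirm torchvision is available in the interpreter that runs Flash.
import importlib.util

if importlib.util.find_spec("torchvision") is None:
    raise ModuleNotFoundError(
        "torchvision is missing; install it with `pip install torchvision`"
    )

import torchvision
print(torchvision.__version__)  # sanity check that the import works
```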
Hi @krshrimali,
Thx for the hint. That resolved my problem =)
Awesome, thanks for letting me know. 😄
Hi @krshrimali, ran into this problem as well and was confused until finding this issue. Am I correct in understanding that #1318 fixes this, in terms of providing a more clear error?
Hi, @jayqi - Thank you for the question. In terms of providing a more clear error, this hasn't been fixed. I'll re-open this issue, and I think a refactor will be good in Flash to add better errors if the module is not found. In case you are interested to pick this up, please feel free (also let me know so that I can assign you the issue). Else, I'll try to scope this for the next week, and create an issue and a PR around this. 🎉
Sorry that you had to face this issue!
cc: @ethanwharris - What do you think about a refactor to raise clear errors instead of running into `NameError: name 'T' is not defined`?
Hey @krshrimali Yeah, it looks like we're just missing some coverage with the requires decorator here: https://github.com/Lightning-AI/lightning-flash/blob/master/flash/audio/classification/input_transform.py
We probably also need to add torchvision to the audio requirements, since AudioClassificationData won't work without it.
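For illustration, here is a standalone sketch of the guarded-import idea (not the actual Flash source; in Flash itself the existing requires decorator would be the idiomatic guard): check whether the optional torchvision import succeeded and raise an actionable error instead of letting the bare `T` turn into a `NameError`.

```python
# Standalone illustration of the pattern, not the actual Flash code:
# guard the optional torchvision import and fail with an install hint.
try:
    from torchvision import transforms as T
    _TORCHVISION_AVAILABLE = True
except ImportError:
    _TORCHVISION_AVAILABLE = False


def train_input_per_sample_transform(spectrogram_size=(64, 64)):
    """Build the per-sample transform, failing clearly if torchvision is missing."""
    if not _TORCHVISION_AVAILABLE:
        raise ModuleNotFoundError(
            "torchvision is required for audio classification transforms. "
            "Install it with `pip install torchvision`."
        )
    return T.Compose([T.ToTensor(), T.Resize(spectrogram_size)])
```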