Create import protection for torchvision
In utils, torchvision is imported even if the recipe doesn't need it. We should protect this import in case the user hasn't installed it and doesn't need it.
What's your implementation plan @felipemello1? I'd prefer not to add a "good first issue" tag without really clear instructions.
This is the error:
```
File "/data/users/felipemello/torchtune/torchtune/data/__init__.py", line 7, in <module>
  from torchtune.data._chat_formats import (
File "/data/users/felipemello/torchtune/torchtune/data/_chat_formats.py", line 10, in <module>
  from torchtune.data._messages import Message, Role
File "/data/users/felipemello/torchtune/torchtune/data/_messages.py", line 9, in <module>
  from torchtune.modules.transforms import Transform
File "/data/users/felipemello/torchtune/torchtune/modules/transforms/__init__.py", line 17, in <module>
  from torchtune.modules.transforms.vision_utils.resize_with_pad import ( # noqa
File "/data/users/felipemello/torchtune/torchtune/modules/transforms/vision_utils/resize_with_pad.py", line 14, in <module>
  import torchvision
ModuleNotFoundError: No module named 'torchvision'
```
It happens because `transforms/__init__.py` exposes the vision transforms.
A simple solution is to stop importing the chat transforms from the same place as the vision transforms. This can be done by renaming `/vision_utils` to `/vision_transforms` and importing the vision transforms from there, and by putting the chat-related `transforms.py` under `/text_transforms` and importing the chat transforms from there.
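A sketch of what that split might look like (the directory and file names below are illustrative, not a final proposal):

```
torchtune/modules/transforms/
├── __init__.py              # exposes only text/chat transforms; no torchvision
├── text_transforms/
│   └── transforms.py        # chat-related transforms (e.g. what Message/Role need)
└── vision_transforms/
    └── resize_with_pad.py   # imports torchvision; only loaded by multimodal code
```

With this layout, `torchtune.data._messages` would import `Transform` from a path that never touches torchvision.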
cc: @RdoubleA @pbontrager
I agree we need to think through how we structure our imports here. However, another stopgap we could do is the following:
```python
import logging

log = logging.getLogger(__name__)

try:
    import torchvision
except ImportError:
    log.warning(
        "Failed to import torchvision. Feel free to ignore this if you are not "
        "trying to do multimodal training. Otherwise, please install the "
        "``torchvision`` package via conda or pip."
    )
```
This would immediately allow users to use our package without the prerequisite of another heavy PyTorch library.
I acknowledge its hackiness, but in my mind this is a bad starting experience, and it's not immediately clear ATM why someone would need torchvision when we don't (yet) support MM training.
Thoughts @felipemello1 @pbontrager ?
Wouldn't you have to add it in all files that import torchvision? If you want a quick solution, I think that moving these functions from `transforms/__init__` --> `transforms/vision_transforms/__init__` will be best.
Fair enough - I still consider that a stopgap, but I see your point that it's less intrusive than a try/except.