FeatUp
Import error adaptive_conv_cuda
Amazing work! I followed the installation steps but I am still getting this error. Am I missing something?
ImportError: cannot import name 'cuda_impl' from 'featup.adaptive_conv_cuda' (/home/bastian/.cache/torch/hub/mhamilton723_FeatUp_main/featup/adaptive_conv_cuda/__init__.py)
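(For context, the error surfaces when loading FeatUp through torch.hub, e.g. with a call like the one below; "dino16" is one of the entrypoints from the README and is used here purely as an example.)

import torch

# Loading any FeatUp upsampler via torch.hub pulls in featup.adaptive_conv_cuda
upsampler = torch.hub.load("mhamilton723/FeatUp", "dino16")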
For me, I could resolve the issue by renaming the extension in setup.py, changing it to:
from setuptools import setup, find_packages
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="featup",
    version="0.1.2",
    packages=find_packages(),
    install_requires=[
        "torch",
        "kornia",
        "omegaconf",
        "pytorch-lightning",
        "torchvision",
        "tqdm",
        "torchmetrics",
        "scikit-learn",
        "numpy",
        "matplotlib",
    ],
    author="Mark Hamilton, Stephanie Fu",
    author_email="[email protected], [email protected]",
    description='Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" ICLR 2024',
    long_description=open("README.md").read(),
    long_description_content_type="text/markdown",
    url="https://github.com/mhamilton723/FeatUp",
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires=">=3.6",
    ext_modules=[
        # Renamed extension: building the compiled module under a top-level
        # name avoids the failing import under featup.adaptive_conv_cuda
        CUDAExtension(
            "adaptive_conv_cuda_cuda_impl",
            [
                "featup/adaptive_conv_cuda/adaptive_conv_cuda.cpp",
                "featup/adaptive_conv_cuda/adaptive_conv_kernel.cu",
            ],
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)
Then I patched the import in .cache/torch/hub/mhamilton723_FeatUp_main/featup/adaptive_conv_cuda/adaptive_conv.py from
from featup.adaptive_conv_cuda import cuda_impl
to
import adaptive_conv_cuda_cuda_impl as cuda_impl
and then ran pip install -e ../FeatUp/ --force-reinstall
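For reference, the top of adaptive_conv.py after that patch looks roughly like this (a sketch; the try/except fallback to the original import path is my own addition, not part of the repo):

# Renamed extension built by the modified setup.py above
try:
    import adaptive_conv_cuda_cuda_impl as cuda_impl
except ImportError:
    # Fall back to the stock import path if the original extension was built
    from featup.adaptive_conv_cuda import cuda_impl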
Thank you very much for sharing. I'll give it a try.
Hi, I had the same issue.
It was caused by a mismatch between the CUDA version used by PyTorch and the native CUDA version on my machine. I got the following message when running setup.py:
The detected CUDA version (11.7) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.
Hence I had to change my PyTorch version to 2.0.1 to get one built with CUDA 11.7, and then run setup.py again.
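If you are not sure whether you have the same mismatch, you can compare the CUDA version PyTorch was built with against your system toolkit (torch.version.cuda is a standard attribute; nvcc must be on your PATH for the second check):

import torch

print(torch.__version__)   # e.g. 2.0.1+cu117
print(torch.version.cuda)  # CUDA version PyTorch was compiled against
# Compare with the system toolkit that compiles the extension:
#   $ nvcc --version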
@mhamilton723 Great job on the repo! Would it be possible to get this change in the torch hub model too?
Encountering this issue as well, running the Colab notebook linked in the README as-is!
Edit: it seems the pip install cell wasn't finishing correctly, and after fixing that, the issue went away! 🎉
Having the same issue, but I can't resolve it with either the matching CUDA version or the trick above.
@JennySeidenschwarz In another issue the author mentioned that it's possible to remove the CUDAExtension part and replace the custom layer with a default PyTorch implementation, which is a bit slower. I tried it and it works fine if you want to avoid the custom CUDA layers:
https://github.com/mhamilton723/FeatUp/issues/4#issuecomment-2007591205
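For anyone who wants to go that route, here is a rough sketch of what a pure-PyTorch replacement could look like. This is an illustration only, not FeatUp's actual fallback: the name adaptive_conv_torch is made up, and it assumes the op applies an independent KxK kernel at every output pixel with no padding.

import torch
import torch.nn.functional as F

def adaptive_conv_torch(images, filters):
    """Pure-PyTorch stand-in for the compiled adaptive conv op.

    images:  (B, C, H, W)
    filters: (B, H_out, W_out, K, K) per-pixel kernels, where
             H_out = H - K + 1 and W_out = W - K + 1 (no padding).
    """
    B, C, H, W = images.shape
    _, H_out, W_out, K, _ = filters.shape
    # Extract every KxK patch: (B, C*K*K, H_out*W_out)
    patches = F.unfold(images, kernel_size=K)
    patches = patches.view(B, C, K * K, H_out * W_out)
    kernels = filters.reshape(B, H_out * W_out, K * K)
    # Weight each patch by its own per-pixel kernel and sum over the window
    out = torch.einsum("bckp,bpk->bcp", patches, kernels)
    return out.view(B, C, H_out, W_out)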
Hi all, thanks for wrestling with this. I've asked the CUDA master @axelfeldmann to take a look at this.
The sanity checks I would go through are:
- Uninstall featup and re-install it via the GitHub or local installation path. The installation process should compile the ops for your architecture (a quick import check follows this list).
- My torch is 2.0.0+cu118 with CUDA 11.8. Try getting PyTorch following the appropriate commands here: https://pytorch.org/get-started/locally/
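A minimal way to confirm the reinstall actually produced the compiled op is to try the original import directly (a sketch; adjust the module name if you applied the rename above):

try:
    from featup.adaptive_conv_cuda import cuda_impl
    print("compiled adaptive_conv CUDA op found")
except ImportError as err:
    print("extension still missing:", err)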
Hey all, we took the change proposed by @HO4X and added it to the setup.py. Please let us know if this fixes the issues! Appreciate the patience.
Hoping this fixed things for you. Closing this issue for now; feel free to open a new one if needed.