FeatUp

Import error adaptive_conv_cuda

Open BasDiaz opened this issue 11 months ago • 7 comments

Amazing work! I followed the installation steps but I am still getting this error. Am I missing something?

ImportError: cannot import name 'cuda_impl' from 'featup.adaptive_conv_cuda' (/home/bastian/.cache/torch/hub/mhamilton723_FeatUp_main/featup/adaptive_conv_cuda/__init__.py)

BasDiaz · Mar 20 '24

For me, the issue was resolved by renaming the extension in setup.py to:

from setuptools import setup, find_packages
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="featup",
    version="0.1.2",
    packages=find_packages(),
    install_requires=[
        "torch",
        "kornia",
        "omegaconf",
        "pytorch-lightning",
        "torchvision",
        "tqdm",
        "torchmetrics",
        "scikit-learn",
        "numpy",
        "matplotlib",
    ],
    author="Mark Hamilton, Stephanie Fu",
    author_email="[email protected], [email protected]",
    description='Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" ICLR 2024',
    long_description=open("README.md").read(),
    long_description_content_type="text/markdown",
    url="https://github.com/mhamilton723/FeatUp",
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires=">=3.6",
    ext_modules=[
        CUDAExtension(
            "adaptive_conv_cuda_cuda_impl",
            [
                "featup/adaptive_conv_cuda/adaptive_conv_cuda.cpp",
                "featup/adaptive_conv_cuda/adaptive_conv_kernel.cu",
            ],
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)

Then I patched the import in .cache/torch/hub/mhamilton723_FeatUp_main/featup/adaptive_conv_cuda/adaptive_conv.py, changing

`from featup.adaptive_conv_cuda import cuda_impl` to `import adaptive_conv_cuda_cuda_impl as cuda_impl`

and finally ran `pip install -e ../FeatUp/ --force-reinstall`.
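To make the patched import a bit more robust, a small helper along these lines can try both module names; this is a sketch, and the module names are assumptions matching the rename above:

```python
def load_cuda_impl():
    """Try the renamed top-level extension first, then the original
    package-local name; return None if neither is importable."""
    try:
        import adaptive_conv_cuda_cuda_impl as cuda_impl  # name from the setup.py above
        return cuda_impl
    except ImportError:
        pass
    try:
        from featup.adaptive_conv_cuda import cuda_impl  # original import path
        return cuda_impl
    except ImportError:
        return None
```

Returning None instead of raising lets calling code fall back to a CPU path when the extension was never built.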

HO4X · Mar 20 '24

Thank you very much for sharing. I'll give it a try

BasDiaz · Mar 20 '24

Hi.

I had the same issue.

It was caused by a mismatch between the CUDA version used by PyTorch and the native CUDA version on my machine. I got the following message when running setup.py:

The detected CUDA version (11.7) mismatches the version that was used to compile
 PyTorch (12.1). Please make sure to use the same CUDA versions.

Hence I had to change my PyTorch version to 2.0.1 to get one built against CUDA 11.7, and then ran setup.py again.
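A quick way to check for this mismatch before building is to compare the CUDA toolkit PyTorch was compiled against with the local `nvcc`. A minimal sketch (`nvcc` must be on PATH for the second check):

```python
import subprocess
import torch

# CUDA toolkit version PyTorch was compiled against (None for CPU-only builds)
print("PyTorch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)

# Local CUDA toolkit version, as reported by nvcc
try:
    out = subprocess.run(["nvcc", "--version"], capture_output=True, text=True)
    print(out.stdout.strip().splitlines()[-1])
except FileNotFoundError:
    print("nvcc not found on PATH")
```

If the two versions differ (as in the 11.7 vs 12.1 message above), either install a matching PyTorch wheel or a matching CUDA toolkit before compiling the extension.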

tcourat · Mar 21 '24

> (quoting @HO4X's setup.py fix from above)

@mhamilton723 Great job on the repo! Would it be possible to get this change in the torch hub model too?

federico-ferlito · Mar 21 '24

Encountering this issue as well, running the Colab notebook linked in the README, as-is!

Edit: it seems the pip install cell wasn't finishing correctly, and after fixing that, the issue went away! 🎉

skoppula · Mar 22 '24

> (quoting @HO4X's setup.py fix from above)

Having the same issue, but I can't resolve it either with the adapted CUDA version or with the trick above.

JennySeidenschwarz · Mar 22 '24

@JennySeidenschwarz in another issue the author mentioned that it's possible to remove the CUDAExtension part and replace the custom layer with a default PyTorch implementation, which is a bit slower. I tried it and it works fine if you want to avoid the custom layers:

https://github.com/mhamilton723/FeatUp/issues/4#issuecomment-2007591205
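For reference, a spatially varying (per-pixel) K×K convolution of the kind the CUDA extension accelerates can be sketched in pure PyTorch with `unfold`. This is an illustration under assumed tensor shapes, not FeatUp's exact implementation:

```python
import torch

def adaptive_conv_py(inp, filters):
    """Per-pixel convolution: each output location has its own K×K filter,
    shared across channels (assumed shapes, for illustration).
    inp:     (B, C, H, W)
    filters: (B, H_out, W_out, K, K) with H_out = H - K + 1
    returns: (B, C, H_out, W_out)
    """
    B, C, H, W = inp.shape
    _, H_out, W_out, K, _ = filters.shape
    # Extract every K×K patch: (B, C*K*K, H_out*W_out), kernel positions row-major
    patches = torch.nn.functional.unfold(inp, kernel_size=K)
    patches = patches.view(B, C, K * K, H_out, W_out)
    # Align filters with the patch layout and broadcast across channels
    w = filters.permute(0, 3, 4, 1, 2).reshape(B, 1, K * K, H_out, W_out)
    return (patches * w).sum(dim=2)
```

This avoids the compiled extension entirely at the cost of materializing all K×K patches in memory; with uniform 1/K² filters it reduces to stride-1 average pooling, which makes a handy correctness check.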

federico-ferlito · Mar 22 '24

Hi all, thanks for wrestling with this. I've asked the CUDA master @axelfeldmann to take a look at this.

The sanity checks I would go through are:

  1. Uninstall featup and re-install it via the GitHub or local installation path. The installation process should compile the ops for your architecture.
  2. My torch is torch 2.0.0+cu118 with CUDA 11.8. Try getting PyTorch by following the appropriate commands here: https://pytorch.org/get-started/locally/

mhamilton723 · Mar 26 '24

Hey all, we took the change proposed by @HO4X and added it to setup.py. Please let us know if this fixes the issues! Appreciate the patience.

mhamilton723 · Mar 26 '24

Hoping this fixed things for you. Closing this issue for now; feel free to open a new one if needed.

mhamilton723 · Apr 02 '24