Google Colab not working even with Pro and high-RAM
Traceback (most recent call last):
File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py", line 319, in
color correction>>>>>>>>>>> Use adain color correction
Loading model from ./vqgan_cfw_00011.ckpt
Global Step: 18000
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.13.1 with CUDA None (you have 2.0.1+cu117)
Python 3.10.11 (you have 3.10.10)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
/usr/local/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 64, 64) = 16384 dimensions.
making attention of type 'vanilla' with 512 in_channels
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth
missing>>>>>>>>>>>>>>>>>>>
Your env is not correct. You should run the demo step by step and make sure each line finishes correctly.
Hmm.. I just ran the Colab as it is.. I didn't change anything.
The log info shows that your PyTorch version is not correct. I guess you didn't run some lines successfully.
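A quick way to confirm this inside the running Colab is a small diagnostic cell like the sketch below (nothing repo-specific is assumed; the expected versions are the ones listed in the repo's environment file):

# Diagnostic cell (sketch): print the versions the errors above depend on.
import sys, torch
print("python :", sys.version.split()[0])
print("torch  :", torch.__version__, "| CUDA:", torch.version.cuda)
try:
    import pytorch_lightning as pl
    print("pytorch_lightning:", pl.__version__)
except ImportError:
    print("pytorch_lightning not installed")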
Ok. I have installed the correct versions of dependencies. Now I get:
File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py", line 19, in
when running this cell:
VQGANTILE_STRIDE = int(VQGANTILE_SIZE * 0.9)

if Enable_Tile:
  !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --vqgantile_stride {VQGANTILE_STRIDE} --vqgantile_size {VQGANTILE_SIZE} --colorfix_type 'adain'
elif Aggregation_Sampling:
  !python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --colorfix_type 'adain'
else:
  !python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --seed {SEED} --colorfix_type 'adain'
It is still an env problem. You need to make sure this whole part works properly:
Um.. I just click the button to run this cell.. Is there any other way to make this part work properly?
I have the ldm folder in my Colab runtime, but for some reason the command
!python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt './stablesr_000117.ckpt' --vqgan_ckpt './vqgan_cfw_00011.ckpt' --init-img 'inputs/user_upload' --outdir 'outputs/user_upload' --ddpm_steps {DDPM_STEPS} --dec_w {FIDELITY_WEIGHT} --upscale {UPSCALE} --tile_overlap {TILE_OVERLAP} --seed {SEED} --vqgantile_stride {VQGANTILE_STRIDE} --vqgantile_size {VQGANTILE_SIZE} --colorfix_type 'adain'
can't find it or access it.. but it's Colab, so it can't be an administrator-access issue..
btw, I use Windows, if that matters..
That is because you did not successfully pip install -e . all the things. Pay attention to the command-line output to make sure there is no error info. You also need to make sure the cell has completely finished before you run the next one.
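A minimal sanity check for the editable install (a sketch; it assumes the repo was cloned to /content/StableSR as in the notebook):

# Re-run the editable install and confirm the ldm package is importable.
%cd /content/StableSR
!pip install -e .          # watch the output; it should finish without error messages
!python -c "import ldm; print('ldm import OK')"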
Colab runs in the browser on Google's servers, so that does not matter.
Ok.. but I just clicked to run the Colab code as provided. I did not change anything. Please double-check whether the Colab is written correctly.
I suppose it should work, since I used to run it successfully and I did not change the env settings. I don't have Colab Pro now, so I'm sorry I couldn't do further checking. Maybe someone else could help if possible :)
I found that I needed to restart the kernel. However, now I am getting a different error, despite having the latest pytorch_lightning installed, as instructed by the Colab:
Global seed set to 42
color correction>>>>>>>>>>> Use adain color correction
Loading model from ./stablesr_000117.ckpt
Global Step: 16500
Traceback (most recent call last):
File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py", line 422, in
Hi~ The pytorch_lightning version should be 1.4.2.... If you follow the demo, there should be no version errors.
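If a newer Lightning version slipped in, pinning it back is a one-liner (a sketch; 1.4.2 is simply the version stated above):

# Pin pytorch_lightning to 1.4.2, then restart the Colab runtime
# (Runtime -> Restart runtime) before re-running the sampling cell.
!pip install pytorch_lightning==1.4.2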
I just ran the Google Colab without any changes, and I get:
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
query : shape=(1, 25600, 1, 512) (torch.float16)
key : shape=(1, 25600, 1, 512) (torch.float16)
value : shape=(1, 25600, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
cutlassF is not supported because:
xFormers wasn't build with CUDA support
Operator wasn't built - see python -m xformers.info for more info
flshattF is not supported because:
xFormers wasn't build with CUDA support
max(query.shape[-1] != value.shape[-1]) > 128
Operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
xFormers wasn't build with CUDA support
max(query.shape[-1] != value.shape[-1]) > 128
requires A100 GPU
smallkF is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float16 (supported: {torch.float32})
max(query.shape[-1] != value.shape[-1]) > 32
has custom scale
Operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 512
It seems like a problem with the xformers version. Sorry, I do not know what is wrong on your side. The version info is all included in the env file, and there should be no problem.
I tried again today, opened the Colab with my Colab Pro and made sure it is set to high-RAM, but I am still getting the same error. I didn't change anything.
Replacing the xformers installation (or build) with the pre-built package xformers==0.0.16rc425 works for me:
!pip install xformers==0.0.16rc425
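After reinstalling, it is worth checking that the wheel matches the runtime before re-running the sampling cell (a minimal sketch; the exact operator list depends on the GPU Colab assigns):

# Check which memory_efficient_attention operators xformers can actually use now.
!python -m xformers.info
!python -c "import torch; print('torch', torch.__version__, '| CUDA', torch.version.cuda)"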
Are you using Windows?
Does it work on Windows? Triton and xformers cannot be installed; I am currently working on the Windows platform and encountered challenges while attempting to install Triton and xformers. Could you please provide guidance or suggestions on how to install these components on non-Linux systems? Any assistance or insights you can offer would be greatly appreciated.