Qing

Results: 394 comments of Qing

For the stable-diffusion model, you need to [accept the terms of access](https://huggingface.co/runwayml/stable-diffusion-inpainting) and get an access token from here: [huggingface access token](https://huggingface.co/docs/hub/security-tokens). Then start the server:

```bash
lama-cleaner --model=sd1.5 --device=cpu --hf_access_token=your_token
```

Install the virtual environment according to the guidelines in the blog, then you can install Lama Cleaner (`pip3 install lama-cleaner`) after `Activate the virtualenv`.
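A minimal sketch of those steps, assuming Python 3 with the built-in `venv` module (the environment name `lama-env` is just an example, not from the blog):

```bash
# create and activate a virtual environment
python3 -m venv lama-env
source lama-env/bin/activate    # on Windows: lama-env\Scripts\activate

# install Lama Cleaner inside the activated environment
pip3 install lama-cleaner
```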

Sorry for the annoying log; you can safely ignore that CUDA warning. After the first time you run `lama-cleaner --model=sd1.5 --device=cpu --hf_access_token=your_token`, you can remove `--hf_access_token` and add `--run-sd-local`...
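Put together, a sketch of the two runs described above (flag names are taken from this comment; the exact spelling may differ between lama-cleaner versions):

```bash
# first run: downloads the model, needs the Hugging Face token
lama-cleaner --model=sd1.5 --device=cpu --hf_access_token=your_token

# later runs: the token can be dropped and the locally cached model used
lama-cleaner --model=sd1.5 --device=cpu --run-sd-local
```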


You can check the [pack.sh](https://github.com/Sanster/lama-cleaner/blob/main/scripts/pack.sh) script; the key is the use of the `conda pack` command, which makes distribution really easy. But the disadvantage is also obvious: the installation package itself...
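For reference, a sketch of how `conda pack` is typically used to build and unpack such a bundle (the environment and file names here are illustrative, not taken from pack.sh):

```bash
# on the build machine: pack an existing conda environment into a relocatable archive
conda install -c conda-forge conda-pack
conda pack -n lama-cleaner -o lama-cleaner-env.tar.gz

# on the target machine: unpack and activate without needing conda installed
mkdir -p lama-cleaner-env
tar -xzf lama-cleaner-env.tar.gz -C lama-cleaner-env
source lama-cleaner-env/bin/activate
```

The flip side is that the archive bundles the entire environment, which is why it tends to be large.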

Fixed in https://github.com/Sanster/lama-cleaner/commit/e2e2f5f853bb9bc28b4da2487f57b0105afa71dc. Please try `0.24.2`.
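Assuming the package was installed with pip as in the other comments here, upgrading to that release would look like:

```bash
pip3 install --upgrade lama-cleaner==0.24.2
```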

By default, the installer will install pytorch-cu113; it looks like that didn't match your GPU and NVIDIA driver version. If you can provide the following information, I can suggest the correct...
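A quick way to collect that kind of information (standard NVIDIA and PyTorch commands, not specific to this installer):

```bash
# GPU model and NVIDIA driver version
nvidia-smi

# PyTorch version, the CUDA version it was built against, and whether CUDA is usable
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```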

There is something wrong with the `win_config.bat` script; it didn't install a CUDA version of PyTorch 😅. Please change it to the following content and re-execute it:

```
@echo...
```
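For context, installing a CUDA 11.3 build of PyTorch with pip typically looks like the command below (this is an illustration of the general pattern, not the content of the original `win_config.bat`; the same command works in a Windows command prompt):

```bash
# install a CUDA 11.3 build of PyTorch instead of the CPU-only build
pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
```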

No, dependencies are not uninstalled; that's how pip works.
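For illustration (this is standard pip behaviour, not specific to this project): uninstalling a package removes only that package, while everything it pulled in stays installed.

```bash
# removes only the lama-cleaner package itself
pip3 uninstall -y lama-cleaner

# its dependencies (torch, etc.) are still listed and must be removed individually if desired
pip3 list
```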

Sorry, there was a problem with the previous version. I rewrote the Colab code today and lama and sd1.5 are working fine; please use the latest version and try again.