Dreambooth-Stable-Diffusion
Ubuntu Running Error
I installed the joepenna Dreambooth repo on Ubuntu and ran it following this.
But there is an error:
Exception(f"Model Path Not Found : '{self.model_path}'.")
The folder name was correct and the ckpt file was in it.
Can you help me?
There is a problem with your model path. You need to replace "training_models/sd_v1-5_vae.ckpt" with the path of the model you downloaded to your local system.
training_models: put the Stable Diffusion model you want to train into this folder. You can download many models from HuggingFace or civitai. The model used in this guide is sd_v1-5_vae.ckpt.
--training_model "training_models/sd_v1-5_vae.ckpt" \
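If it helps, here is a minimal sketch (not from the repo, just standard-library calls; the file name is the example one from the guide) to confirm what the relative path actually resolves to before passing it to --training_model:

import os

# Example path from the guide above -- replace with wherever you actually saved the checkpoint.
model_path = "training_models/sd_v1-5_vae.ckpt"

# A relative path is resolved against the current working directory,
# so the same string can fail if main.py is launched from a different folder.
print("cwd:     ", os.getcwd())
print("resolved:", os.path.abspath(model_path))
print("exists:  ", os.path.exists(model_path))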
Does that mean an absolute path?
I solved this with an absolute path, but then there was a torch.cuda.OutOfMemoryError:
Sanity Checking: 0it [00:00, ?it/s]
/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:236: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 20 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:236: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 20 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
Epoch 0:   0%| | 0/2121 [00:00<?, ?it/s]
/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:98: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 1. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:231: UserWarning: You called `self.log('global_step', ...)` in your `training_step` but the value needs to be floating point. Converting it to torch.float32.
  warning_cache.warn(
Error training at step 0. CUDA out of memory. Tried to allocate 146.00 MiB (GPU 0; 23.67 GiB total capacity; 21.98 GiB already allocated; 81.31 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Error training at step 0
Traceback (most recent call last):
  File "/home/alfa_members/kimsanghyun/new/by_env/Dreambooth-Stable-Diffusion/main.py", line 226, in <module>
    trainer.fit(model, data)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
    self._call_and_handle_interrupt(
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in _run
    results = self._run_stage()
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
    return self._run_train()
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1283, in _run_train
    self.fit_loop.run()
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 271, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 203, in advance
    batch_output = self.batch_loop.run(kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 87, in advance
    outputs = self.optimizer_loop.run(optimizers, kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 201, in advance
    result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 248, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 358, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1550, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1705, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 216, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 153, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/torch/optim/optimizer.py", line 280, in wrapper
    out = func(*args, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/torch/optim/optimizer.py", line 33, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/torch/optim/adamw.py", line 171, in step
    adamw(
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/torch/optim/adamw.py", line 321, in adamw
    func(
  File "/root/miniconda3/envs/dreambooth_joepenna/lib/python3.10/site-packages/torch/optim/adamw.py", line 566, in _multi_tensor_adamw
    denom = torch._foreach_add(exp_avg_sq_sqrt, eps)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 146.00 MiB (GPU 0; 23.67 GiB total capacity; 21.98 GiB already allocated; 81.31 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
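The message itself suggests trying PYTORCH_CUDA_ALLOC_CONF. A minimal sketch of that suggestion (the 128 MiB split size is just an example value, not a repo recommendation); it has to be set before the first CUDA allocation, e.g. at the very top of the training script or exported in the shell that launches it:

import os

# Must be set before torch makes its first CUDA allocation; example value only.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the env var so the caching allocator picks it up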
Try rebooting and then start training, but don't run anything else. Kill any processes that might be running on the GPU; the command nvidia-smi
will bring up a list. If you have a lot of applications running and the GPU is also driving your display, it might be a bit tight. If available, use the onboard GPU to drive the display and leave the main GPU just for training.
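Not part of the advice above, but if you want to confirm from Python that nothing else is holding VRAM before training starts, a small sketch using standard torch calls:

import torch

# Free vs. total memory on GPU 0, in GiB, before launching training.
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"free: {free_bytes / 2**30:.2f} GiB of {total_bytes / 2**30:.2f} GiB")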
I killed all other processes before running, but it still ran out of memory.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.182.03 Driver Version: 470.182.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 0% 52C P8 32W / 350W | 19MiB / 24234MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
Here's my output for the 3090. The only thing I can see is that you are running an older driver that only supports up to CUDA 11.4 (a quick way to compare against what your PyTorch build expects is sketched after the package list below).
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:05:00.0 Off | N/A |
| 80% 69C P2 340W / 370W | 8900MiB / 24576MiB | 99% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2127 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 3003830 C python3 8892MiB |
+-----------------------------------------------------------------------------+
Below is my env listing for running this repo, using conda list:
# packages in environment at /home/user/anaconda3/envs/db-joepenna:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
absl-py 1.4.0 pypi_0 pypi
accelerate 0.18.0 pypi_0 pypi
aiohttp 3.8.4 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
albumentations 1.1.0 pypi_0 pypi
antlr4-python3-runtime 4.8 pypi_0 pypi
anyio 3.5.0 py310h06a4308_0
argon2-cffi 21.3.0 pyhd3eb1b0_0
argon2-cffi-bindings 21.2.0 py310h7f8727e_0
asttokens 2.0.5 pyhd3eb1b0_0
async-timeout 4.0.2 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
babel 2.11.0 py310h06a4308_0
backcall 0.2.0 pyhd3eb1b0_0
beautifulsoup4 4.12.2 py310h06a4308_0
blas 1.0 mkl
bleach 4.1.0 pyhd3eb1b0_0
brotlipy 0.7.0 py310h7f8727e_1002
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.01.10 h06a4308_0
cachetools 5.3.0 pypi_0 pypi
captionizer 1.0.1 pypi_0 pypi
certifi 2022.12.7 py310h06a4308_0
cffi 1.15.1 py310h5eee18b_3
charset-normalizer 2.0.4 pyhd3eb1b0_0
clip 1.0 dev_0 <develop>
comm 0.1.2 py310h06a4308_0
cryptography 39.0.1 py310h9ce1e76_0
csv-logger 1.3.0 pypi_0 pypi
cuda-cudart 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
cudatoolkit 11.8.0 h37601d7_11 conda-forge
datasets 2.11.0 pypi_0 pypi
debugpy 1.5.1 py310h295c915_0
decorator 5.1.1 pyhd3eb1b0_0
defusedxml 0.7.1 pyhd3eb1b0_0
diffusers 0.3.0 pypi_0 pypi
dill 0.3.6 pypi_0 pypi
dreambooth-stable-diffusion 1.0.0 dev_0 <develop>
einops 0.4.1 pypi_0 pypi
entrypoints 0.4 py310h06a4308_0
executing 0.8.3 pyhd3eb1b0_0
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.9.0 py310h06a4308_0
freetype 2.12.1 h4a9f257_0
frozenlist 1.3.3 pypi_0 pypi
fsspec 2023.4.0 pypi_0 pypi
ftfy 6.1.1 pypi_0 pypi
giflib 5.2.1 h5eee18b_3
gmp 6.2.1 h295c915_3
gmpy2 2.1.2 py310heeb90bb_0
gnutls 3.6.15 he1e5248_0
google-auth 2.17.3 pypi_0 pypi
google-auth-oauthlib 1.0.0 pypi_0 pypi
grpcio 1.54.0 pypi_0 pypi
huggingface-hub 0.13.4 pypi_0 pypi
icu 58.2 he6710b0_3
idna 3.4 py310h06a4308_0
imageio 2.27.0 pypi_0 pypi
importlib-metadata 6.5.1 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
ipykernel 6.19.2 py310h2f386ee_0
ipython 8.12.0 py310h06a4308_0
ipython_genutils 0.2.0 pyhd3eb1b0_1
jedi 0.18.1 py310h06a4308_1
jinja2 3.1.2 py310h06a4308_0
joblib 1.2.0 pypi_0 pypi
jpeg 9e h5eee18b_1
json5 0.9.6 pyhd3eb1b0_0
jsonschema 4.17.3 py310h06a4308_0
jupyter_client 8.1.0 py310h06a4308_0
jupyter_core 5.3.0 py310h06a4308_0
jupyter_server 1.23.4 py310h06a4308_0
jupyterlab 3.5.3 py310h06a4308_0
jupyterlab_pygments 0.1.2 py_0
jupyterlab_server 2.22.0 py310h06a4308_0
kornia 0.6.7 pypi_0 pypi
lame 3.100 h7b6447c_0
lazy-loader 0.2 pypi_0 pypi
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libcublas 11.10.3.66 0 nvidia
libcufft 10.7.2.124 h4fbf590_0 nvidia
libcufile 1.6.1.9 0 nvidia
libcurand 10.3.2.106 0 nvidia
libcusolver 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libdeflate 1.17 h5eee18b_0
libffi 3.4.2 h6a678d5_6
libgcc-ng 12.2.0 h65d4601_19 conda-forge
libgomp 12.2.0 h65d4601_19 conda-forge
libiconv 1.16 h7f8727e_2
libidn2 2.3.2 h7f8727e_0
libnpp 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libpng 1.6.39 h5eee18b_0
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 12.2.0 h46fd767_19 conda-forge
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.0 h6a678d5_2
libunistring 0.9.10 h27cfd23_0
libuuid 1.41.5 h5eee18b_0
libwebp 1.2.4 h11a3e52_1
libwebp-base 1.2.4 h5eee18b_1
libxml2 2.10.3 hcbfbd50_0
libxslt 1.1.37 h2085143_0
lxml 4.9.2 py310h5eee18b_0
lz4-c 1.9.4 h6a678d5_0
markdown 3.4.3 pypi_0 pypi
markupsafe 2.1.1 py310h7f8727e_0
matplotlib-inline 0.1.6 py310h06a4308_0
mistune 0.8.4 py310h7f8727e_1000
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py310h7f8727e_0
mkl_fft 1.3.1 py310hd6ae3a3_0
mkl_random 1.2.2 py310h00e6091_0
modelcards 0.1.6 pypi_0 pypi
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.2 hb69a4c5_1
mpmath 1.2.1 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
multiprocess 0.70.14 pypi_0 pypi
nbclassic 0.5.5 py310h06a4308_0
nbclient 0.5.13 py310h06a4308_0
nbconvert 6.5.4 py310h06a4308_0
nbformat 5.7.0 py310h06a4308_0
ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.6 py310h06a4308_0
nettle 3.7.3 hbbd107a_1
networkx 2.8.4 py310h06a4308_1
notebook 6.5.4 py310h06a4308_0
notebook-shim 0.2.2 py310h06a4308_0
numpy 1.23.1 py310h1794996_0
numpy-base 1.23.1 py310hcba007f_0
oauthlib 3.2.2 pypi_0 pypi
omegaconf 2.1.1 pypi_0 pypi
opencv-python 4.7.0.72 pypi_0 pypi
opencv-python-headless 4.7.0.72 pypi_0 pypi
openh264 2.1.1 h4ff587b_0
openssl 1.1.1t h7f8727e_0
packaging 23.1 pypi_0 pypi
pandas 2.0.0 pypi_0 pypi
pandocfilters 1.5.0 pyhd3eb1b0_0
parso 0.8.3 pyhd3eb1b0_0
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.4.0 py310h6a678d5_0
pip 22.2.2 py310h06a4308_0
platformdirs 2.5.2 py310h06a4308_0
prometheus_client 0.14.1 py310h06a4308_0
prompt-toolkit 3.0.36 py310h06a4308_0
protobuf 4.22.3 pypi_0 pypi
psutil 5.9.5 pypi_0 pypi
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 11.0.0 pypi_0 pypi
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pydeprecate 0.3.2 pypi_0 pypi
pygments 2.11.2 pyhd3eb1b0_0
pyopenssl 23.0.0 py310h06a4308_0
pyrsistent 0.18.0 py310h7f8727e_0
pysocks 1.7.1 py310h06a4308_0
python 3.10.11 h7a1cb2a_2
python-dateutil 2.8.2 pyhd3eb1b0_0
python-fastjsonschema 2.16.2 py310h06a4308_0
pytorch 2.0.0 py3.10_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h778d358_3 pytorch
pytorch-lightning 1.7.6 pypi_0 pypi
pytorch-mutex 1.0 cuda pytorch
pytz 2023.3 pypi_0 pypi
pywavelets 1.4.1 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
pyzmq 23.2.0 py310h6a678d5_0
qudida 0.0.4 pypi_0 pypi
readline 8.2 h5eee18b_0
regex 2023.3.23 pypi_0 pypi
requests 2.28.1 py310h06a4308_1
requests-oauthlib 1.3.1 pypi_0 pypi
responses 0.18.0 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-image 0.20.0 pypi_0 pypi
scikit-learn 1.2.2 pypi_0 pypi
scipy 1.10.1 pypi_0 pypi
send2trash 1.8.0 pyhd3eb1b0_1
setuptools 67.7.1 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1
sniffio 1.2.0 py310h06a4308_1
soupsieve 2.4 py310h06a4308_0
sqlite 3.41.2 h5eee18b_0
stack_data 0.2.0 pyhd3eb1b0_0
sympy 1.11.1 py310h06a4308_0
taming-transformers 0.0.1 dev_0 <develop>
tensorboard 2.12.2 pypi_0 pypi
tensorboard-data-server 0.7.0 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
terminado 0.17.1 py310h06a4308_0
threadpoolctl 3.1.0 pypi_0 pypi
tifffile 2023.4.12 pypi_0 pypi
tinycss2 1.2.1 py310h06a4308_0
tk 8.6.12 h1ccaba5_0
tokenizers 0.13.3 pypi_0 pypi
tomli 2.0.1 py310h06a4308_0
torch-fidelity 0.3.0 pypi_0 pypi
torchmetrics 0.11.1 pypi_0 pypi
torchtriton 2.0.0 py310 pytorch
torchvision 0.15.0 py310_cu117 pytorch
tornado 6.2 py310h5eee18b_0
tqdm 4.65.0 pypi_0 pypi
traitlets 5.7.1 py310h06a4308_0
transformers 4.25.1 pypi_0 pypi
typing-extensions 4.5.0 py310h06a4308_0
typing_extensions 4.5.0 py310h06a4308_0
tzdata 2023.3 pypi_0 pypi
urllib3 1.26.15 py310h06a4308_0
wcwidth 0.2.6 pypi_0 pypi
webencodings 0.5.1 py310h06a4308_1
websocket-client 0.58.0 py310h06a4308_4
werkzeug 2.2.3 pypi_0 pypi
wheel 0.38.4 py310h06a4308_0
xxhash 3.2.0 pypi_0 pypi
xz 5.2.10 h5eee18b_1
yarl 1.9.1 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zipp 3.15.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0
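Regarding the driver comment above: a quick sketch (standard torch attributes only; the versions in the comments are just what this environment listing suggests) to compare the CUDA runtime your PyTorch build expects with what the driver reports:

import torch

print("torch version:", torch.__version__)      # 2.0.0 per the listing above
print("built for CUDA:", torch.version.cuda)    # 11.7 per the listing above
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))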
There was a similar issue on CompVis:
https://github.com/CompVis/stable-diffusion/issues/485
I have it running on a 3090 and 4090 without problems. Mostly training on SD1.5 based models, 512x512. I don't use the GPU to drive the display or graphics.