Dreambooth-Stable-Diffusion
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
```
C:\Users\Urban\anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\loggers\test_tube.py:104: LightningDeprecationWarning: The TestTubeLogger is deprecated since v1.5 and will be removed in v1.7. We recommend switching to the `pytorch_lightning.loggers.TensorBoardLogger` as an alternative.
  rank_zero_deprecation(
Monitoring val/loss_simple_ema as checkpoint metric.
Merged modelckpt-cfg: {'target': 'pytorch_lightning.callbacks.ModelCheckpoint', 'params': {'dirpath': 'logs\\SUBJECT2022-10-04T06-25-48_DSU90\\checkpoints', 'filename': '{epoch:06}', 'verbose': True, 'save_last': True, 'monitor': 'val/loss_simple_ema', 'save_top_k': 1, 'every_n_train_steps': 500}}
GPU available: True, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
C:\Users\Urban\anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\trainer.py:1584: UserWarning: GPU available but not used. Set the gpus flag in your trainer `Trainer(gpus=1)` or script `--gpus=1`.
  rank_zero_warn(
```
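That last warning is the crux of this issue: with the `gpus` flag unset, Lightning keeps the LightningModule on CPU, while this codebase evidently still places some tensors on `cuda:0`, producing exactly the device mismatch in the title. A minimal sketch of the Lightning 1.x `Trainer` values involved (illustrative only, not this repo's exact wiring; in this repo the value appears to come from the `--gpus` flag parsed in `main.py`):

```python
# Illustrative sketch of the Lightning 1.x `gpus` argument.
import pytorch_lightning as pl

pl.Trainer(gpus=0)    # what the log shows: "GPU available: True, used: False"
pl.Trainer(gpus=1)    # one GPU, the fix the warning itself suggests
pl.Trainer(gpus=[0])  # GPU with index 0; the CLI form `--gpus 0,` parses to this
```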
```
Data
train, PersonalizedBase, 1500
reg, PersonalizedBase, 15000
validation, PersonalizedBase, 15
accumulate_grad_batches = 1
++++ NOT USING LR SCALING ++++
Setting learning rate to 1.00e-06
```
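The two learning-rate lines come from the scale-LR branch of the CompVis-style `main.py`. A rough, self-contained sketch of that rule; names and values are inferred from the log above, not copied verbatim from the repo:

```python
# Rough sketch of the LR rule in CompVis-style main.py; names and values are
# assumptions inferred from the log above, not verbatim from the repo.
scale_lr = False                             # "++++ NOT USING LR SCALING ++++"
accumulate_grad_batches, ngpu, bs = 1, 1, 1
base_lr = 1.0e-06                            # base_learning_rate from the config

if scale_lr:
    # effective LR grows with accumulation, GPU count, and batch size
    learning_rate = accumulate_grad_batches * ngpu * bs * base_lr
else:
    learning_rate = base_lr                  # -> "Setting learning rate to 1.00e-06"
```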
```
C:\Users\Urban\anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:275: LightningDeprecationWarning: The `on_keyboard_interrupt` callback hook was deprecated in v1.5 and will be removed in v1.7. Please use the `on_exception` callback hook instead.
  rank_zero_deprecation(
C:\Users\Urban\anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:284: LightningDeprecationWarning: Base `LightningModule.on_train_batch_start` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
  rank_zero_deprecation(
C:\Users\Urban\anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:291: LightningDeprecationWarning: Base `Callback.on_train_batch_end` hook signature has changed in v1.5. The `dataloader_idx` argument will be removed in v1.7.
  rank_zero_deprecation(
C:\Users\Urban\anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\core\datamodule.py:469: LightningDeprecationWarning: DataModule.setup has already been called, so it will not be called again. In v1.6 this behavior will change to always call DataModule.setup.
  rank_zero_deprecation(
LatentDiffusion: Also optimizing conditioner params!
```
Project config

```yaml
model:
  base_learning_rate: 1.0e-06
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    reg_weight: 1.0
    linear_start: 0.00085
    linear_end: 0.012
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: caption
    image_size: 64
    channels: 4
    cond_stage_trainable: true
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: false
    embedding_reg_weight: 0.0
    unfreeze_model: true
    model_lr: 1.0e-06
    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings:
        - '*'
        initializer_words:
        - sculpture
        per_image_tokens: false
        num_vectors_per_token: 1
        progressive_words: false
    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions:
        - 4
        - 2
        - 1
        num_res_blocks: 2
        channel_mult:
        - 1
        - 2
        - 4
        - 4
        num_heads: 8
        use_spatial_transformer: true
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: true
        legacy: false
    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 512
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity
    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder
    ckpt_path: C:\Users\Urban\Desktop\textual_inversion-main\models\ldm\sd-v1-4-full-ema.ckpt
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    num_workers: 1
    wrap: false
    train:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
        per_image_tokens: false
        repeats: 100
        placeholder_token: dog
    reg:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
        reg: true
        per_image_tokens: false
        repeats: 10
        placeholder_token: dog
    validation:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: val
        per_image_tokens: false
        repeats: 10
        placeholder_token: dog
```

Lightning config

```yaml
modelcheckpoint:
  params:
    every_n_train_steps: 500
callbacks:
  image_logger:
    target: main.ImageLogger
    params:
      batch_frequency: 200
      max_images: 8
      increase_log_steps: false
trainer:
  benchmark: true
  max_steps: 800
  gpus: 0
```
```
  | Name              | Type               | Params
----------------------------------------------------
0 | model             | DiffusionWrapper   | 859 M
1 | first_stage_model | AutoencoderKL      | 83.7 M
2 | cond_stage_model  | FrozenCLIPEmbedder | 123 M
----------------------------------------------------
982 M     Trainable params
83.7 M    Non-trainable params
1.1 B     Total params
4,264.941 Total estimated model params size (MB)
```
```
Validation sanity check: 0it [00:00, ?it/s]
C:\Users\Urban\anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\data_loading.py:132: UserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
```
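This warning is unrelated to the crash; it only asks for more dataloader workers. A stand-alone sketch of the knob it refers to (the dataset below is a placeholder; in this run the equivalent setting is `num_workers: 1` under `data.params` in the project config above):

```python
# Minimal example of the num_workers knob the warning refers to; the dataset
# is a placeholder, and 8 matches the CPU count reported in the warning.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.zeros(16, 3))
loader = DataLoader(dataset, batch_size=1, num_workers=8)
```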
```
Validation sanity check:   0%|          | 0/2 [00:00<?, ?it/s]
Summoning checkpoint.
Traceback (most recent call last):
  File "main.py", line 838, in <module>
```
I'd appreciate any perspective on getting this to make the right device calls in a Windows environment where WSL is not an option.
I have the same problem. Did you solve it?
Add `--gpus=1`; it works.
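To spell out where it goes: on the `main.py` training command. In the sketch below, everything except `--gpus` stands in for whatever arguments you already pass. Note the trailing comma: Lightning parses `--gpus 0,` as the GPU-index list `[0]`, while a bare `--gpus 0` means zero GPUs, which matches the broken `gpus: 0` visible in the Lightning config above.

```
python main.py --gpus 0, <rest of your usual training arguments>
python main.py --gpus=1 <rest of your usual training arguments>   # equivalent: "use one GPU"
```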
same problem
same problem
There are a couple of known fixes depending on your specific environment. I can compile some links later, but try the search function too.
@xzdong-2019, may I ask how you solved it? I mean, where should we add `gpus=1`?
`python main.py --gpus 0, --prompt ...` works for me.