ValueError: `Dataloader` returned 0 length. Please make sure that it returns at least 1 batch
Can you help me?
The number of images in each of my dataset splits: trainset: 968, valset: 323, visual_test: 322.
The training parameter settings:

```yaml
location:
  data_root_dir: /home/jddx/wxp/lama/my_dataset/
  out_root_dir: /home/jddx/wxp/lama/experiments/
  tb_dir: /home/jddx/wxp/lama/tb_logs/
data:
  batch_size: 2
  val_batch_size: 2
  num_workers: 2
  train:
    indir: my_dataset/train
    img_suffix: .png
    out_size: 1280
    mask_gen_kwargs:
      irregular_proba: 1
      irregular_kwargs:
        max_angle: 4
        max_len: 200
        max_width: 100
        max_times: 5
        min_times: 1
      box_proba: 1
      box_kwargs:
        margin: 10
        bbox_min_size: 30
        bbox_max_size: 150
        max_times: 4
        min_times: 1
      segm_proba: 0
    transform_variant: distortions
    dataloader_kwargs:
      batch_size: 2
      shuffle: true
      num_workers: 2
  val:
    indir: my_dataset/val/random_medium_512
    img_suffix: .png
    dataloader_kwargs:
      batch_size: 2
      shuffle: false
      num_workers: 2
  visual_test:
    indir: my_dataset/visual_test/random_medium_512
    img_suffix: .png
    pad_out_to_modulo: 32
    dataloader_kwargs:
      batch_size: 1
      shuffle: false
      num_workers: 2
generator:
  kind: ffc_resnet
  input_nc: 4
  output_nc: 3
  ngf: 64
  n_downsampling: 3
  n_blocks: 9
  add_out_act: sigmoid
  init_conv_kwargs:
    ratio_gin: 0
    ratio_gout: 0
    enable_lfu: false
  downsample_conv_kwargs:
    ratio_gin: 0
    ratio_gout: 0
    enable_lfu: false
  resnet_conv_kwargs:
    ratio_gin: 0.75
    ratio_gout: 0.75
    enable_lfu: false
discriminator:
  kind: pix2pixhd_nlayer
  input_nc: 3
  ndf: 64
  n_layers: 4
optimizers:
  generator:
    kind: adam
    lr: 0.001
  discriminator:
    kind: adam
    lr: 0.0001
visualizer:
  kind: directory
  outdir: /home/jddx/wxp/lama/experiments/jddx_2022-10-15_09-54-05_train_lama-fourier_/samples
  key_order:
    - image
    - predicted_image
    - discr_output_fake
    - discr_output_real
    - inpainted
  rescale_keys:
    - discr_output_fake
    - discr_output_real
evaluator:
  kind: default
  inpainted_key: inpainted
  integral_kind: ssim_fid100_f1
trainer:
  kwargs:
    gpus: 0
    accelerator: ddp
    max_epochs: 40
    gradient_clip_val: 1
    log_gpu_memory: None
    limit_train_batches: 500
    val_check_interval: 50
    log_every_n_steps: 250
    precision: 32
    terminate_on_nan: false
    check_val_every_n_epoch: 1
    num_sanity_val_steps: 8
    limit_val_batches: 170
    replace_sampler_ddp: false
  checkpoint_kwargs:
    verbose: true
    save_top_k: 5
    save_last: true
    period: 1
    monitor: val_ssim_fid100_f1_total_mean
    mode: max
run_title: ''
training_model:
  kind: default
  visualize_each_iters: 100
  concat_mask: true
  store_discr_outputs_for_vis: true
losses:
  l1:
    weight_missing: 0
    weight_known: 10
  perceptual:
    weight: 0
  adversarial:
    kind: r1
    weight: 10
    gp_coef: 0.001
    mask_as_fake_target: true
    allow_scale_mask: true
  feature_matching:
    weight: 100
  resnet_pl:
    weight: 30
    weights_path: /home/jddx/wxp/lama
```
Hi, I ran into the same problem. Have you solved it? Thanks!
> indir: my_dataset/val/random_medium_512

I solved this problem by changing the path after `indir` to an absolute path, such as `/home/jddx/wxp/lama/my_dataset/val/random_medium_512`.
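A quick way to verify that the loader will actually find images is to count files the way a recursive glob would. This is just a sketch: the paths and the `.png` suffix below are assumptions taken from the config posted in this thread, and `count_images` is a hypothetical helper, not part of lama itself.

```python
import glob
import os

def count_images(indir: str, suffix: str = ".png") -> int:
    """Count files under indir ending in suffix (recursively).
    Zero matches here is exactly what produces a 0-length Dataloader."""
    pattern = os.path.join(indir, "**", "*" + suffix)
    return len(glob.glob(pattern, recursive=True))

# Hypothetical absolute paths from this thread; replace with your own.
root = "/home/jddx/wxp/lama/my_dataset"
for split in ("train", "val/random_medium_512", "visual_test/random_medium_512"):
    print(split, "->", count_images(os.path.join(root, split)), "images")
```

If any split prints 0, the loader cannot see your data: check the path, the suffix, and whether the images sit in a nested subfolder.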
Could someone help? I changed the `indir` for val, train, and visual_test in abl-04-256-mh-dist.yaml to absolute paths, and the printed config shows e.g. indir: /DATA/jupyter/personal/project/cover_diff/lama/my_datatset/train, but I still get this error. Where did I go wrong? Thanks!
The problem is probably that your data is not being read at all. Check these carefully:

- whether your training set is in jpg format;
- all the paths;
- training data count < limit_train_batches × train_batch_size × number of GPUs;
- val_check_interval < limit_train_batches
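The numeric conditions above (training-data count vs. `limit_train_batches` × batch size × GPU count, and `val_check_interval` vs. the batches actually produced) can be sanity-checked with a few lines of arithmetic. The numbers below are assumptions copied from the config posted in this thread; substitute your own values.

```python
# Assumed values from the config earlier in this thread.
n_train_images = 968
batch_size = 2
n_gpus = 1
limit_train_batches = 500
val_check_interval = 50

# Batches the train Dataloader can actually produce per epoch.
batches_per_epoch = min(limit_train_batches,
                        n_train_images // (batch_size * n_gpus))
print("train batches per epoch:", batches_per_epoch)

# Lightning triggers validation every val_check_interval train batches,
# so the interval must fit inside one epoch's worth of batches.
assert val_check_interval <= batches_per_epoch, \
    "lower val_check_interval or provide more training data"
```

With 968 images and batch size 2 on one GPU, the epoch yields 484 batches, so `val_check_interval: 50` is fine; the check fails only when the dataset is too small for the configured interval.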
Thank you for the explanation!
Hello, have you managed to reproduce this code? Roughly how long does training take? Looking forward to your reply. Could we exchange contact info? QQ: 355964095
This problem occurs when training both my own dataset and the CelebA dataset. Could someone help me solve it?

```
(lama) w@w:~/lama$ python3 bin/train.py -cn lama-fourier location=my_dataset data.batch_size=10
Detectron v2 is not installed
/home/w/miniconda/envs/lama/lib/python3.6/site-packages/hydra/core/default_element.py:127: UserWarning: In 'hydra/overrides': Usage of deprecated keyword in package header '# @package group'.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/changes_to_package_header for more information
```
Hello, I haven't solved it yet because the lab computer is currently unavailable. I'll try again when the semester starts and let you know if it works.
I couldn't add you; my QQ is 371034565.
Is the error fixed?
They didn't say they fixed it. I hit the same problem too. I tried 60×60 pictures with my own masks, which should all be in the same format. Now I'm trying 64×64 pictures, possibly resizing them to 256×256.
The problem is most probably related to filenames and their extensions. Could you please share the structure of your dataset folder?
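One quick way to share (and check) that structure is to tally the file extensions under the dataset folder, which makes a suffix mismatch (e.g. `.jpg` files while the config expects `img_suffix: .png`) immediately visible. The path below is hypothetical; `extension_summary` is just an illustrative helper, not part of the lama codebase.

```python
import os
from collections import Counter

def extension_summary(root: str) -> Counter:
    """Tally file extensions under root, recursively, so a suffix
    mismatch between disk and config stands out at a glance."""
    exts = Counter()
    for _dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            exts[os.path.splitext(name)[1].lower()] += 1
    return exts

# Hypothetical path; point this at your own dataset folder.
print(extension_summary("/home/jddx/wxp/lama/my_dataset/train"))
```

If the dominant extension differs from the configured `img_suffix`, either rename/convert the files or change the config to match.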
I was training my own dataset, replacing celeba.yaml with my_dataset.yaml.
Thanks, it has been solved.
> val_check_interval

Could you share how many files (and in what proportion) are in each dataset folder? I still get the error after adjusting this. Thanks!
Has anyone solved this problem?
Did you solve this problem?
> The problem is most probably related to filenames and their extensions. Could you please share the structure of your dataset folder?

Hello, I also encountered the same problem. I set up the environment and ran prediction following the process you provided, but training on the CelebA dataset fails with this error:

```
Detectron v2 is not installed
/root/miniconda3/envs/lama/lib/python3.6/site-packages/hydra/core/default_element.py:127: UserWarning: In 'hydra/overrides': Usage of deprecated keyword in package header '# @package group'.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/changes_to_package_header for more information
```

I prepared the dataset exactly as documented:

```
bash fetch_data/celebahq_dataset_prepare.sh
bash fetch_data/celebahq_gen_masks.sh
```

Neither command reported an error. What could be the reason?