Inference using the weights
Hello, is there a way to perform inference on a single image or a directory of images using the provided weights?
Yes, I was also searching for this.
Have you solved this issue? I also want to know why the images in the gen folder generated by evaluate.py during the sampling stage are colorful, noisy images.
No, I haven't looked into it any further since raising the issue.
Can you use this command?
python3 -m torch.distributed.launch --nproc_per_node=8 \
    --node_rank 0 \
    --master_addr=${MASTER_ADDR:-127.0.0.1} \
    --master_port=${MASTER_PORT:-46123} \
    evaluate.py --target_expansion 0.25 0.25 0.25 0.25 --eval_dir ./eval_dir/scenery/1x/ --size 128 \
    --config flickr192_large
In the previous issue, we found that this checkpoint might not load successfully during evaluation, possibly because of structural errors introduced when it was uploaded from local storage to the server. If possible, please retrain it from scratch (8 V100 GPUs or 4 A100 GPUs should be sufficient to complete the task).
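Before retraining from scratch, a quick local check can tell whether the downloaded weights are at least structurally intact. The sketch below is only an illustration: the checkpoint path and the idea of inspecting top-level keys are assumptions, not the repository's actual loading code.

import torch

# Hypothetical path to the downloaded weights; replace it with the file you actually use.
ckpt_path = "pretrained/flickr192_large.ckpt"

try:
    state = torch.load(ckpt_path, map_location="cpu")
except Exception as e:
    # A truncated or corrupted upload typically already fails here.
    print(f"Checkpoint could not be deserialized: {e}")
else:
    if isinstance(state, dict):
        # Training scripts often wrap the weights in a dict; inspecting the
        # top-level keys shows whether the expected structure survived the upload.
        print("Top-level keys:", list(state.keys())[:10])
    # If deserialization succeeds, passing the weights to
    # model.load_state_dict(..., strict=True) will surface missing or unexpected keys.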
When I run accelerate launch --multi_gpu --num_processes 8 --mixed_precision fp16 train_ldm.py --config=configs/flickr192_large.py, I get the error packaging.version.InvalidVersion: Invalid version: '0.10.1,<0.11'. I tried to solve it with pip install packaging==21.3 and pip install 'torchmetrics<0.8', but the error still occurs.
You can follow the installation instructions of U-ViT: https://github.com/baofff/U-ViT.
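If the error persists after reinstalling, it may help to find out which installed distribution actually carries the malformed version string. The following is only a diagnostic sketch, not part of this repository; it assumes the failure comes from packaging rejecting the version reported by some installed package.

from importlib import metadata
from packaging.version import Version, InvalidVersion

# Scan every installed distribution and report those whose version string
# packaging refuses to parse (for example '0.10.1,<0.11').
for dist in metadata.distributions():
    try:
        Version(dist.version)
    except InvalidVersion:
        print(f"{dist.metadata['Name']}: invalid version string {dist.version!r}")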
Thank you for your prompt reply. With the environment set up, I run accelerate launch --multi_gpu --num_processes 8 --mixed_precision fp16 train_ldm.py --config=configs/flickr192_large.py, and it keeps printing distributed_c10d.py:450] Waiting in store based barrier to initialize process group for rank: 4, key: store_based_barrier_key:1 (world_size=8, worker_count=19, timeout=0:30:00). How can I solve this?
Hello, one more question: how exactly should target_expansion be set? For example, if I want to outpaint an input image to twice its size, what should target_expansion be? And is outpainting in only one direction supported?