DDPM-Pytorch
what changes would we need to do if we used our own dataset?
Thanks for the awesome explanation. Could you tell me which changes we need before training the model on our data?
Hello,
Thanks for the appreciation. I apologize, that should have been part of the README; I have updated it now. Can you take a look at https://github.com/explainingai-code/DDPM-Pytorch/blob/main/README.md#training-on-your-own-images and let me know in case you face any issues.
Thanks for the modification in the repo for training the model on a custom dataset. However, I am facing this issue when I tried to train the model on my own data. My dataset, named ultrasound256CH1, contains train and test images. All the images are 256x256 with a single channel.
Can you tell me the im_path value you used in the config? And also the directory structure of your dataset. Is it $REPO_ROOT/ultrasound256CH1/train/*.png ?
The error basically means that the code wasn't able to find any png files in the location it was searching.
Yes, the dataset is in the repo root, but I am still getting this error. How can I solve it?
Got it. Create an 'images' subfolder inside the train directory and put all the training png files in there, so $REPO_ROOT/ultrasound256CH1/train/images/*.png
Leave the config as it is, pointing to "ultrasound256CH1/train". Can you try that and let me know if it works? The expected layout is sketched below.
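For reference, the layout being described would look like this (file names are just placeholders):

```
$REPO_ROOT/
└── ultrasound256CH1/
    ├── train/
    │   └── images/
    │       ├── 0.png
    │       ├── 1.png
    │       └── ...
    └── test/
```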
Yes, I tried that; unfortunately it does not work.
Can you print the directory and path the code is searching at https://github.com/explainingai-code/DDPM-Pytorch/blob/main/dataset/mnist_dataset.py#L40 and share that:
print(d_name, os.path.join(im_path, d_name, '*.{}'.format(self.im_ext)))
Also comment out the line at https://github.com/explainingai-code/DDPM-Pytorch/blob/main/dataset/mnist_dataset.py#L42 (a sketch of the surrounding loop is below).
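Roughly, the loop around those two lines looks like this (a paraphrased sketch of the repo's load_images, not the exact code), with the debug print added:

```python
import glob
import os


def load_images_sketch(im_path, im_ext='png'):
    """Paraphrased sketch of the dataset's load_images with the debug print added."""
    ims = []
    for d_name in os.listdir(im_path):
        pattern = os.path.join(im_path, d_name, '*.{}'.format(im_ext))
        # Debug print suggested above: shows which directory and glob
        # pattern the code is searching
        print(d_name, pattern)
        ims.extend(glob.glob(pattern))  # mnist_dataset.py#L40: the glob
        # mnist_dataset.py#L42 parses the directory name as an integer label;
        # comment it out when your class folders are not numeric:
        # labels.append(int(d_name))
    return ims
```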
That error has been resolved. It was occurring because of the arrangement of the dataset: I created 5 class folders for the different images in the train data, used "data/train/" as the path, and it worked.
Now I am encountering this error:
You are training on CPU as of now, right? Also, can you confirm that your conda environment has Python 3.8 and the requirements installed, as mentioned in https://github.com/explainingai-code/DDPM-Pytorch/tree/main?tab=readme-ov-file#quickstart
Hi sir, I kept the batch size at 10 and only want to run for 40 epochs, and the total number of images is only 828. Could you please tell me why the model requires so much computational power (memory), and how can I handle this issue?
RuntimeError: CUDA out of memory. Tried to allocate 640.00 GiB (GPU 0; 14.75 GiB total capacity; 2.16 GiB already allocated; 11.63 GiB free; 2.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
It's because the images are 256x256 and by default the model config downsamples only twice. A few things you can try to ensure you are able to train (a sketch of the config changes follows the list):
- Resize the images to 64x64 in the loader and train the diffusion model on these 64x64 images
- Have all three down blocks downsample by setting down_sample : [True, True, True] in the config
- Try num_mid_layers : 1 in the config
- Reduce the number of mid blocks by changing mid_channels : [256, 128] in the config
I think that should reduce the model size considerably and allow you to train.
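A sketch of where those keys sit in the yaml config. Only mid_channels, down_sample, and num_mid_layers come from the suggestions above; the other keys and values are illustrative placeholders, so keep your repo defaults for anything not mentioned:

```yaml
model_params:
  im_channels: 1                     # single-channel ultrasound images
  im_size: 64                        # if you also resize to 64x64 in the loader
  down_channels: [32, 64, 128, 256]  # illustrative; keep the repo default
  mid_channels: [256, 128]           # fewer mid blocks
  down_sample: [True, True, True]    # downsample in all three down blocks
  num_mid_layers: 1
```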
Thanks, the model works very well; I trained it on my dataset. I would request that you add a few more things to the repo for better results and for comparative analysis between the original and generated images: classifier-free guidance, exponential moving average, and IS and FID scores. Looking forward to your outstanding and easy-to-follow implementation!
Yes, I wanted this repo to be an intro to diffusion, which is why I didn't want to add those and instead left this as a bare-minimum diffusion repo. I do plan to create a stable diffusion repo which should have some of these incorporated. Once that is done, I will try to bring the parts you mentioned here as well (if I can do that without adding too much complexity to the current implementation).
Hello,
I made all the relevant changes mentioned in the README and in this thread, but after my images are loaded I get an AttributeError.
Hello @thatdev6, this code expects the path to contain png files, but it seems that's not the case for the path you have provided. Is it an npy file? Because in that case you would have to change this line.
No, my path has png files.
Are you using the same code or have you made some modifications? Your list at the end of dataset initialization is a list of numpy.ndarray objects (according to the error), which cannot be the case, because the dataset class during initialization just fetches the filenames. Also, only 19 training images?
Yes, I modified the loader function to load and downsample my images; they are rectangular and in jpg format. I figured out my mistake and it has been corrected.
This is how I modified the loader function.
I also changed im_channels to 3. Now I get a runtime error while training.
The shapes of two images that your dataset returns are different (3x3264x2448 and 3x2448x3264).
Before converting to a tensor, did you by any chance forget to convert the numpy arrays from HxWx3 to 3xHxW?
How would I fix that?
I also modified the sample function for rectangular images.
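For reference, the HxWx3 to 3xHxW conversion asked about above is a one-line permute (a minimal sketch; the array here is just a stand-in for a loaded image):

```python
import numpy as np
import torch

im = np.zeros((2448, 3264, 3), dtype=np.uint8)     # stand-in for an H x W x 3 image
im_tensor = torch.from_numpy(im).permute(2, 0, 1)  # -> 3 x H x W
print(im_tensor.shape)                             # torch.Size([3, 2448, 3264])
```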
I don't think the 3xHxW conversion is the issue, because the error says your image shapes are already 3xHxW, so that's fine. But I think your path does not contain images that are all the same size: some are 3264x2448 and some are 2448x3264. Can you check this?
Yes, I think you're right. So the solution would be to downsample all of them to 64x64?
Yes, do a center square crop (2448x2448) and then resize to 64x64. How many images are there in your dataset?
Around 600 images
These are the changes I made to the loader and getitem functions. I assume there is no problem here, but for some reason the training gets interrupted (^C).
A couple of things. Move the image reading to the data loader's get_item method, just like the code in the repo; the load_images method should simply collect the filenames and nothing else. You can do the cropping and resizing in the get_item method too (a sketch follows below). Secondly, can you check why it's printing "Found 19 images" when it should actually be 600?
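A minimal sketch of that split, assuming PIL/torchvision and the [-1, 1] scaling the repo's dataset uses (the class name and glob pattern here are illustrative, not the exact repo code):

```python
import glob
import os

import torchvision.transforms as T
from PIL import Image
from torch.utils.data import Dataset


class CustomImageDataset(Dataset):
    """Illustrative dataset: filenames collected up front, pixels read lazily."""

    def __init__(self, im_path, im_ext='jpg'):
        # load_images equivalent: only collect filenames, read no pixels yet
        self.images = glob.glob(
            os.path.join(im_path, '**', '*.{}'.format(im_ext)), recursive=True)
        print('Found {} images'.format(len(self.images)))
        # Center square crop (2448x2448 for these images), then resize to 64x64
        self.transform = T.Compose([
            T.CenterCrop(2448),
            T.Resize(64),
            T.ToTensor(),  # HxWx3 uint8 -> 3xHxW float in [0, 1]
        ])

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        im = Image.open(self.images[index]).convert('RGB')
        im_tensor = self.transform(im)
        return (2 * im_tensor) - 1  # scale to [-1, 1], as the repo's dataset does
```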
Okay, so first of all I should leave the loader function as it is, just modifying it for jpg images, and secondly I should do the image formatting in the get item function. It says "Found 19 images" because at the moment I have only uploaded a subset of the dataset; it was quite annoying to wait for all the images to load only to encounter an error in training.
How do you suggest I fix this?
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.75 GiB. GPU 0 has a total capacity of 14.75 GiB of which 57.06 MiB is free. Process 15150 has 14.69 GiB memory in use. Of the allocated memory 11.21 GiB is allocated by PyTorch, and 3.35 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
These are the modifications I made.
Reduce the batch size to 16. That should work, I think.
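If you are using the yaml config, that would be the batch_size key under train_params (section and key names assumed from the repo's config layout):

```yaml
train_params:
  batch_size: 16   # reduced to fit a ~15 GiB GPU
```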