improved-diffusion
1D images
Has anyone gotten the code to work for 1D images? I changed in_channels and out_channels to 1 when creating the model in script_util.py and removed the conversion to RGB in image_datasets.py. I have since been training a model for 80,000 iterations on 14,000 black-and-white medical images with a batch size of 8 and a minibatch of 1, but the samples are pure noise.
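For reference, the single-channel change described above might look like the sketch below. The function name load_grayscale is my own; the [-1, 1] scaling mirrors how the repo normalizes images, but treat the exact lines as an assumption about image_datasets.py rather than a verbatim patch:

```python
from PIL import Image
import numpy as np

def load_grayscale(path_or_img):
    # The original loader converts to RGB (3 channels) via pil_image.convert("RGB").
    # For grayscale data, convert to mode "L" instead, giving a single channel.
    pil_image = path_or_img if isinstance(path_or_img, Image.Image) else Image.open(path_or_img)
    arr = np.array(pil_image.convert("L"), dtype=np.float32)
    arr = arr / 127.5 - 1.0   # scale pixel values from [0, 255] to [-1, 1]
    return arr[None, ...]     # add the channel axis: [1, H, W]

img = Image.new("L", (64, 64), color=128)
x = load_grayscale(img)
print(x.shape)  # (1, 64, 64)
```

With this layout the model must also be built with in_channels=1 and out_channels=1 so the tensor shapes line up.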
Hi! Have you solved this problem? Would you mind sharing the parameters for training on 1D images?
Please share the hyperparameters! That would be helpful.
So I managed to get the 1D U-Net running:
import torch
from improved_diffusion.unet import UNetModel

model = UNetModel(
    in_channels=198,  # should equal num_features (input features per position)
    dims=1,  # use a 1D U-Net
    model_channels=256,  # inner model width
    out_channels=198,  # should equal num_features
    num_res_blocks=10,  # residual blocks per resolution (arbitrary choice, untuned)
    attention_resolutions=(16,),  # downsample rates at which attention is applied (ints, not strings)
)
# Conv1d expects channels in dim 1, so the layout is [batch_size, num_features, num_atoms]
x_0 = torch.rand(23, 198, 64)
# num_atoms should be a power of 2 so the downsampling path divides evenly
t = torch.randint(0, 1000, (23,))  # integer timesteps, shape [batch_size]
model(x=x_0, timesteps=t).shape  # torch.Size([23, 198, 64]), which matches x_0.shape
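To connect the model to an actual training step, here is a minimal sketch of the standard DDPM forward-noising process the U-Net is trained against. This is plain torch, not the repo's GaussianDiffusion class; q_sample and the linear beta schedule follow the DDPM formulation, and the shapes assume the [batch, channels, length] layout from above:

```python
import torch

def q_sample(x0, t, alphas_cumprod):
    # Forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps
    a_bar = alphas_cumprod[t].view(-1, 1, 1)  # broadcast over [B, C, L]
    eps = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps, eps

# Linear beta schedule, as in the original DDPM paper
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1 - betas, dim=0)

x0 = torch.rand(23, 198, 64)       # [batch, channels, length]
t = torch.randint(0, T, (23,))     # integer timestep indices, not uniform floats
x_t, eps = q_sample(x0, t, alphas_cumprod)
# A training step would then minimize F.mse_loss(model(x_t, timesteps=t), eps)
print(x_t.shape)  # torch.Size([23, 198, 64])
```

If samples come out as pure noise, one thing worth checking is that the data fed in during training is scaled to [-1, 1], since sampling starts from standard-normal noise and denoises toward that range.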
Hi leffff, could you share more of your code? I also started training on black-and-white images, but I got a write_line error with the default hyperparameters. Also, how does the diffusion model's performance compare to StyleGAN?