Strange Results From W2A8 Model
Hi,
I ran the W2A8 ImageNet fine-tuning script by directly setting n_bits_w = 2 and n_bits_a = 8, but it produces unexpected results in the W2A8 setting. Could you please advise whether any specific hyperparameters or configurations in the default code need adjustment to address this problem?
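For clarity, the bit-width pair is the only thing I changed relative to the default script. A minimal sketch of what I mean (the dict wrapper and comments are mine for illustration only, not EfficientDM's actual config format):

```python
# Illustrative only: the two quantization settings I changed for the W2A8 run.
# The dict wrapper is just for readability, not the repo's config format.
quant_bits = {
    "n_bits_w": 2,  # weight bit-width
    "n_bits_a": 8,  # activation bit-width
}
```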
Here are my results.
I also have the same problem.
@ThisisBillhe Sorry to bother you, but I still cannot reproduce the results for the W2A8 setting. Is there any way to fix this?
Hi, I will look into this when I am available; I am working on another project right now. BTW, have you successfully trained a W4A8 or W4A4 model with EfficientDM? Just want to make sure you have set up the project correctly.
Thanks for replying so soon! Yes, I successfully trained the W4A4 model and the results look good. But for W2A8, I really don't know why the results are so strange.
What about using more steps and epochs during training, e.g., 250 ddim_steps and more epochs?
The best result I got was by directly training with 20 ddim_steps for 800 epochs, which gave FID 21 and sFID 12, still far from the paper's results. When I double the training epochs, training crashes at epoch 1200. I must be doing something wrong; any idea how to reproduce the paper's results?
Hi! I am reproducing the W4A4 results using the settings 'n_samples_per_class = 2, ddim_steps = 20, ddim_eta = 0, scale = 3, Epoch = 160', but I cannot get the right images. I wonder what settings you used.
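For reference, here are the exact values I used, collected in one place (the dict form is just for readability; it is not how the repo stores its configuration):

```python
# My W4A4 reproduction settings, same values as listed above.
# The dict is only for readability, not EfficientDM's config format.
settings = {
    "n_samples_per_class": 2,
    "ddim_steps": 20,
    "ddim_eta": 0,
    "scale": 3,       # guidance scale
    "epochs": 160,
}
```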