Answers to some questions about training and sampling
Many of the issues here have gone unanswered, and the author does not seem to maintain the repo much, so I will answer them based on the results of my own experiments.

- Why are the sampled images completely black? Something went wrong in training; most likely the training has not run for enough steps. Train longer.
- How many epochs of training are appropriate? My own experiments show that results start to improve at around 60,000 epochs.
- The code sets no convergence criterion, so how do I stop training? You have to stop it manually; you can use the number from the previous point as a reference.
- The code saves three models. Which one is better for sampling? Sampling with emasavemodel.pt gives better results (see the sketch below).
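A minimal, self-contained sketch of that last point, using a stand-in model and a placeholder filename (neither is the repo's exact API; the sampling script normally takes the checkpoint path as a command-line argument, so in practice this usually just means pointing it at the EMA file):

```python
import torch
import torch.nn as nn

# Stand-in for the diffusion UNet; swap in the model you built for training
# (e.g. via the repo's create_model_and_diffusion).
model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
torch.save(model.state_dict(), "emasavemodel.pt")  # stands in for the training-side save

# Sampling side: load the EMA weights, not the raw "savedmodel" ones.
state_dict = torch.load("emasavemodel.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()  # run sampling in eval mode
```

The EMA checkpoint is an exponential moving average of the training weights, which typically samples more cleanly than the latest raw weights.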
Greatly appreciate your contribution.
@MilkTeaAddicted Thank you for clarifying. Can you mention which dataset you worked on? I trained on BRATS2020 for 85K steps and still got black samples.
Hello, my dataset is ISIC2016, trained for 60,000 epochs. The ensembled result over multiple images is as follows. The images generated on each run are different; the one on the far right is the ensembled image.
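For context, the "ensembled" image fuses several independently sampled masks into one. A minimal sketch of one simple way to do this, pixel-wise averaging plus a threshold (the MedSegDiff paper describes fusing samples with STAPLE, so treat this as an illustration rather than the repo's exact method):

```python
import torch

def ensemble_masks(masks, thresh=0.5):
    """Fuse N sampled masks ([1, H, W] tensors in [0, 1]) by majority vote."""
    stacked = torch.stack(masks, dim=0)   # [N, 1, H, W]
    mean = stacked.float().mean(dim=0)    # pixel-wise agreement across samples
    return (mean > thresh).float()        # keep pixels that most runs agree on

# Example: five sampling runs of the same input give five slightly different masks.
samples = [torch.rand(1, 64, 64).round() for _ in range(5)]
fused = ensemble_masks(samples)
```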
Hello, could you help me with this problem? I would really appreciate it.

```
Traceback (most recent call last):
  File "E:\deep_learning\Segmentation\MedSegDiff-master\scripts\segmentation_sample.py", line 214, in <module>
    main()
  File "E:\deep_learning\Segmentation\MedSegDiff-master\scripts\segmentation_sample.py", line 123, in main
    sample, x_noisy, org, cal, cal_out = sample_fn(
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 565, in p_sample_loop_known
    for sample in self.p_sample_loop_progressive(
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 650, in p_sample_loop_progressive
    out = self.p_sample(
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 444, in p_sample
    out = self.p_mean_variance(
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\respace.py", line 90, in p_mean_variance
    return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 324, in p_mean_variance
    self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)
  File "E:\deep_learning\Segmentation\MedSegDiff-master\guided_diffusion\gaussian_diffusion.py", line 348, in _predict_xstart_from_eps
    assert x_t.shape == eps.shape
AssertionError
```
I don't quite understand your error. I never hit it on ISIC, and I checked your call stack; it is the same code path I ran.
I ran sampling on the DRIVE dataset and tried printing the shapes: x_t is torch.Size([1, 1, 64, 64]) and eps is torch.Size([1, 2, 64, 64]). I then printed the data in eps and found that its two channels hold identical data, which confuses me.
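For anyone wanting to reproduce that check, a self-contained sketch (a random tensor stands in for the real model output here, so this prints False; on the real eps it would print True if the channels really are duplicates):

```python
import torch

eps = torch.randn(1, 2, 64, 64)  # stand-in for the model output reported above

# Compare the two channels element-wise.
print(torch.allclose(eps[:, 0:1], eps[:, 1:2]))
```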
Hmm, I haven't worked on vessel segmentation, but since your eps shape is torch.Size([1, 2, 64, 64]), how about splitting it along the channel dimension (see the sketch below)? Adjusting the input shouldn't be a big problem.
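A minimal sketch of that suggestion, assuming the extra channel can simply be dropped (whether that is semantically right for DRIVE is an open question):

```python
import torch

x_t = torch.randn(1, 1, 64, 64)  # noisy input, shape as reported
eps = torch.randn(1, 2, 64, 64)  # model output, shape as reported

# Split eps along the channel dimension and keep the first half so the
# shape check in _predict_xstart_from_eps passes.
eps_mask, _eps_extra = torch.split(eps, 1, dim=1)
assert x_t.shape == eps_mask.shape  # [1, 1, 64, 64] now matches
```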
> I ran sampling on the DRIVE dataset and tried printing the shapes: x_t is torch.Size([1, 1, 64, 64]) and eps is torch.Size([1, 2, 64, 64]). I then printed the data in eps and found that its two channels hold identical data, which I find suspicious.

Did you get segmentation working on DRIVE? Could we discuss it?
Hello, I would like to know how you solved the problem of the sampled images being all black after training. Did you use model v1 or v2? Thank you very much for answering; this problem has troubled me for a long time.