Supzekun

Search results: 9 issues by Supzekun

When running `python test.py --conf_path confs/face_example.yml` I got this error: Traceback (most recent call last): File "test.py", line 29, in from guided_diffusion import dist_util File "/data/123/szk/RePaint-main/RePaint-main/guided_diffusion/dist_util.py", line 23, in import blobfile as bf...
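The truncated traceback stops at `import blobfile as bf`, which most often means the `blobfile` package is simply not installed in that environment. A minimal sketch of how to probe for it (the `check_dependency` helper is hypothetical; the suggested `pip install` command is an assumption based on the usual cause of this error):

```python
import importlib.util

def check_dependency(name):
    """Return True if `name` is importable; otherwise print a hint and return False."""
    if importlib.util.find_spec(name) is None:
        print(f"{name} is missing: run `pip install {name}`")
        return False
    return True

# Probe for the package the traceback stops at.
check_dependency("blobfile")
```

If the check fails, installing the package into the same interpreter that runs `test.py` should resolve the import chain.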

I trained the classifier on my own dataset, which has a total of 100 images divided into 5 classes with 20 images in each class. Each image is in the...

I have an idea: the classifier-guidance method in this project uses only a single classifier. Could a second classifier be introduced to control the generated images in a more fine-grained way, so as to satisfy more detailed requirements?
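In principle this amounts to guiding with the product p(x)·p(y1|x)^s1·p(y2|x)^s2, so the mean shift becomes a weighted sum of the two classifiers' log-probability gradients. A minimal NumPy sketch of that combination step (the function name, `grad_a`/`grad_b`, and the scales are hypothetical placeholders, not names from the repo):

```python
import numpy as np

def dual_classifier_shift(grad_a, grad_b, scale_a=1.0, scale_b=1.0):
    """Weighted sum of two classifier log-prob gradients.

    Guiding with two classifiers corresponds to sampling from
    p(x) * p(y1|x)^s1 * p(y2|x)^s2, so the guidance term added to the
    denoising mean is s1 * grad_a + s2 * grad_b.
    """
    return scale_a * np.asarray(grad_a) + scale_b * np.asarray(grad_b)
```

The two scales let you trade off how strongly each classifier's criterion shapes the sample.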

![image](https://github.com/yandex-research/ddpm-segmentation/assets/53253522/dff32410-e361-4b10-b2bf-c49a6e88e2db) What's the reason for this?

I am writing a paper and need to compare against U2Fusion. If I apply the model in the model folder directly to my own self-built dataset, can the output be used as the U2Fusion fusion result in my paper?

Hello, in Section 3.2 you quantitatively compare the registration results of CGRP against FlowNet and VoxelMorph. The code for this part is in the /Evaluation/metrics.py file you provide. That file contains two paths: root_in = '/home/zongzong/WD/Fusion/JointRegFusion/results_Road/Reg/220507_Deformable_2*Fe_10*Grad/ir_reg/' and root_gt = '../dataset/raw/ctest/Road/ir_121/'. What do these two paths refer to? Is root_in the registered (aligned) infrared images? Is root_gt the visible images, the pre-alignment infrared images, or something else? I want to run metrics.py and reproduce the evaluation results in your paper, i.e. MSE of 0.004, NCC of 0.926, and MI of 1.648.
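For reference, the three metrics named above can be sketched in plain NumPy under their common definitions (zero-normalized cross-correlation and histogram-based mutual information). This is not the author's metrics.py, only an illustrative reimplementation, so exact values may differ from the paper depending on normalization and bin choices:

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two images scaled to [0, 1]
    return np.mean((a - b) ** 2)

def ncc(a, b):
    # Normalized cross-correlation: mean product of the zero-mean,
    # unit-variance versions of the two images
    az = (a - a.mean()) / (a.std() + 1e-8)
    bz = (b - b.mean()) / (b.std() + 1e-8)
    return np.mean(az * bz)

def mutual_information(a, b, bins=32):
    # Histogram-based mutual information estimate
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
```

With `root_in` interpreted as the registered infrared images and `root_gt` as the reference images, each metric would be averaged over the image pairs in the two folders.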

Hello, I have trained an unconditional DDPM model on my own dataset using the P2-weighting codebase. The model produces clear images; however, when using it for ILVR, the resulting images...

### **Curve 1:** Figure III left (Signal-to-noise ratio (SNR) of linear and cosine noise schedules for reference.) The relationship is given in your paper: ![image](https://github.com/jychoi118/P2-weighting/assets/53253522/87619335-0bc3-4bdf-a43f-ef9cad378a22) However, when I use linear...
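Both SNR curves in that figure can be reproduced from the cumulative product of alphas via SNR(t) = ᾱ_t / (1 − ᾱ_t). A sketch, assuming the common defaults (linear betas from 1e-4 to 0.02 as in Ho et al., and the cosine schedule of Nichol & Dhariwal with s = 0.008); these endpoint values are an assumption, not taken from this thread:

```python
import numpy as np

T = 1000

# Linear beta schedule (assumed endpoints 1e-4 .. 0.02)
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)
snr_linear = alphas_cumprod / (1.0 - alphas_cumprod)  # SNR(t) = abar_t / (1 - abar_t)

# Cosine schedule: abar(t) = cos^2(((t/T + s) / (1 + s)) * pi/2), normalized at t=0
s = 0.008
t = np.arange(T + 1) / T
abar_cos = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
abar_cos = abar_cos / abar_cos[0]
snr_cosine = abar_cos[1:] / (1.0 - abar_cos[1:])  # skip t=0 where SNR diverges
```

Plotting `snr_linear` and `snr_cosine` on a log scale should reproduce the shape of the figure: both decay monotonically, with the cosine schedule dropping more gradually early on.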

Thank you for your excellent work! I want to train the model on my own dataset. How should I modify the configuration file, and how should I run training?
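For repos that follow the openai/guided-diffusion training interface (P2-weighting does), training on a custom dataset is typically configured through command-line flags rather than a config file. A hedged sketch; the flag values below are illustrative defaults from that interface, not settings confirmed for this repo, and the data directory is a placeholder:

```shell
# Point --data_dir at a folder of training images; flag values are illustrative.
MODEL_FLAGS="--image_size 64 --num_channels 128 --num_res_blocks 3"
DIFFUSION_FLAGS="--diffusion_steps 1000 --noise_schedule linear"
TRAIN_FLAGS="--lr 1e-4 --batch_size 128"
python scripts/image_train.py --data_dir path/to/your/images $MODEL_FLAGS $DIFFUSION_FLAGS $TRAIN_FLAGS
```

Image size and channel counts usually have to match whichever pretrained configuration you intend to fine-tune or sample from.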