7W7W7W

Results: 10 comments by 7W7W7W

> When I try to run the training code with --input_nc 1 and --output_nc 1 (grayscale img to img conversion), the script fails with: > > `RuntimeError: Given groups=1, weight...
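The quoted error is a channel-count mismatch: the data is now grayscale (1 channel) while the network's first convolution is still built for 3-channel RGB input. A minimal repro sketch, assuming a pix2pix/CUT-style generator whose first layer was constructed with 3 input channels (this is an assumption, not the repo's actual code):

```python
# Minimal sketch: a conv built for RGB fed a 1-channel grayscale tensor fails
# with the same kind of "Given groups=1, weight ..." channel-mismatch error.
import torch
import torch.nn as nn

first_conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=4)  # still expects RGB
gray_batch = torch.randn(1, 1, 256, 256)                               # grayscale batch: 1 channel

try:
    first_conv(gray_batch)
except RuntimeError as err:
    print(err)  # channel-mismatch message as quoted above
```

Whether `--input_nc 1 --output_nc 1` alone is enough depends on the repo; any component still hard-coded to 3 channels (data loading, discriminator, a perceptual loss) will raise the same error.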

> What is your command line ? Hello, it's like this: `python train.py --workers 8 --device 0 --batch 4 --data data/VisDrone.yaml --img 640 --cfg models/detect/yolov9-c.yaml --weights '' --name yolov9-c --hyp hyp.scratch-high.yaml`...

> Use train_dual.py instead of train.py for yolov9 models. Check #1 for more information. Thank you for your answer. It seems like I still get the same error: D:\MiniConda\envs\yolov5\python.exe F:\CNN\deep-learning-for-image-processing-master\pytorch_object_detection\yolov9\train_dual.py...

> @7W7W7W In your prompt, you didn't use weights? Try using the weights given in the ReadMe (--cfg models/detect/yolov9-c.yaml --weights path/to/yolov9-e.pt) Are the weights necessary? I still get the same error...

> It seems that you have changed the yaml file of the model. I think that you should not modify the size of the model. Yes, that's the problem. I...

> > > It seems that you have changed the yaml file of the model. I think that you should not modify the size of the model. > > >...

@Kiumb1223 Yes, that is very important on Windows! I lost a lot of time until I saw this issue. Thanks!

> Trick: modify the value of a pixel to force the RGB settings. For example, replace the value of the first pixel with [0,255,0]. Hello, where exactly should this be modified? Thank you.
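A hedged sketch of that trick with Pillow and NumPy (the file name is a placeholder; the assumed placement is over the dataset images before training, so every file stays unambiguously 3-channel):

```python
# Sketch of the "first pixel" trick: open, force RGB, plant one colored pixel, re-save.
import numpy as np
from PIL import Image

img = Image.open("example.png").convert("RGB")  # force 3 channels in memory
arr = np.array(img)
arr[0, 0] = [0, 255, 0]                         # overwrite the first pixel with pure green
Image.fromarray(arr).save("example.png")        # a truly colored pixel keeps the file RGB on disk
```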

Can CUT not run on an unaligned dataset?

> `You can try keeping only the single box with the highest confidence; in utils_bbox.py, change it to: `@torch.no_grad() def forward(self, outputs, target_sizes, confidence): out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes'] assert len(out_logits) == len(target_sizes) assert target_sizes.shape[1] == 2 prob = F.softmax(out_logits, -1) scores, labels = prob[..., :-1].max(-1)...
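A hedged sketch of that "keep only the top box" idea (not the exact utils_bbox.py patch, which is truncated above): after the DETR-style softmax over the class logits, select the single highest-scoring query per image instead of thresholding on confidence.

```python
# Sketch (assumption: DETR-style outputs with 'pred_logits' of shape [B, Q, C+1]
# and 'pred_boxes' of shape [B, Q, 4]; the last class channel is "no object").
import torch
import torch.nn.functional as F

@torch.no_grad()
def keep_top1(outputs):
    out_logits, out_bbox = outputs['pred_logits'], outputs['pred_boxes']
    prob = F.softmax(out_logits, -1)
    scores, labels = prob[..., :-1].max(-1)   # best real class per query: [B, Q]
    best = scores.argmax(dim=1)               # highest-scoring query per image: [B]
    batch = torch.arange(out_logits.shape[0])
    # one (score, label, box) triple per image
    return scores[batch, best], labels[batch, best], out_bbox[batch, best]
```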