Chilicyy
@QiqLiang Hi, in fuse_ab mode, the cls_preds_ab and reg_preds_ab branches are discarded when you export the ONNX model for later deployment. When you run inference with the PyTorch checkpoint model,...
Hi, you can refer to [this line](https://github.com/meituan/YOLOv6/blob/e9656c307ae62032f40b39c7a7a5ccc31c2f0242/yolov6/models/heads/effidehead_fuseab.py#L174).
@pragadeeshraju Hi, we train the models on the COCO dataset, which you can download from [COCO](http://cocodataset.org/), along with the [YOLO format coco labels](https://github.com/meituan/YOLOv6/releases/download/0.1.0/coco2017labels.zip).
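For reference, a minimal sketch of what one line of those YOLO-format label files looks like and how to read it (assumption: one object per line, coordinates normalized to [0, 1] by image width/height; the helper name is illustrative, not from the repo):

```python
def parse_yolo_label_line(line):
    """Parse one line of a YOLO-format .txt label file.

    Expected format: "<class_id> <x_center> <y_center> <width> <height>",
    with all box values normalized by image width/height.
    """
    parts = line.split()
    class_id = int(parts[0])
    x_c, y_c, w, h = (float(v) for v in parts[1:5])
    return class_id, (x_c, y_c, w, h)

# Example: class 0, box centered in the image, half the image size.
print(parse_yolo_label_line("0 0.5 0.5 0.5 0.5"))
# -> (0, (0.5, 0.5, 0.5, 0.5))
```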
@BouchikhiYousra Hi, did you run inference with the PyTorch model checkpoint or the TensorRT model?
@Claraeunice Hi, the output log shows `WARNING: No labels found in /root/autodl-tmp/visdrone/VisDrone2019-DET-val/images.`, which means no label files corresponding to the images were found. This is most likely a problem with how the image and label folders are organized; please reorganize the dataset directory structure following the instructions in the README.
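As background, YOLO-style data loaders typically derive each label path from its image path by swapping the `images` directory component for `labels` and the image extension for `.txt`; if your folders don't follow that layout, the loader finds no labels. A hedged sketch of that convention (the helper name is illustrative):

```python
import os

def img2label_path(img_path):
    """Map an image path to its expected YOLO label path by replacing
    the last '/images/' component with '/labels/' and the extension
    with '.txt' (assumption: the dataset follows this layout)."""
    sa = os.sep + "images" + os.sep
    sb = os.sep + "labels" + os.sep
    return sb.join(img_path.rsplit(sa, 1)).rsplit(".", 1)[0] + ".txt"

print(img2label_path("/data/visdrone/images/val/0001.jpg"))
```

So for an image under `.../images/val/`, the loader expects a matching `.txt` file under `.../labels/val/` with the same base name.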
Hi @angela804 @cayuso-skylark, are you using the latest code from the official YOLOv6 project? You can refer to this line: https://github.com/meituan/YOLOv6/blob/d512ce7c4f103e8887960198505518bed404abdc/yolov6/core/engine.py#L189C51-L189C51
@hjg12345 It means that the images are resized to (640-X) * (640-X) and then padded to 640*640, which helps performance.
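The resize-then-pad idea above is the usual "letterbox" preprocessing: scale the image to fit inside the target square while keeping its aspect ratio, then pad the remainder. A minimal sketch that computes only the shapes (the function name and exact rounding are assumptions, not the repo's implementation):

```python
def letterbox_shape(shape, new_size=640):
    """Given an image shape (h, w), return the resize ratio, the
    resized shape, and the per-side (top/bottom, left/right) padding
    needed to center it on a new_size x new_size canvas."""
    h, w = shape
    r = min(new_size / h, new_size / w)        # keep aspect ratio
    new_h, new_w = round(h * r), round(w * r)  # resized content
    pad_h = (new_size - new_h) / 2             # padding per side (vertical)
    pad_w = (new_size - new_w) / 2             # padding per side (horizontal)
    return r, (new_h, new_w), (pad_h, pad_w)

print(letterbox_shape((480, 640)))
# -> (1.0, (480, 640), (80.0, 0.0))
```

Here a 480x640 image keeps its size (ratio 1.0) and gets 80 pixels of padding on the top and bottom to fill the 640x640 canvas.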
Hi, online data augmentation is applied to the data before it is fed to the network during training; you can refer to the code at https://github.com/meituan/YOLOv6/blob/main/yolov6/data/data_augment.py
@wudizuixiaosa Hi, please convert your label files to the format required by the README tutorial and try again.
Hi, if you are running the project on a Windows system, try replacing '/' in the dataset paths with '\' or '\\'.
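A more portable option than hand-editing separators is to build paths with `pathlib` (or `os.path.join`), which uses the correct separator for the current OS automatically; a small sketch with an assumed dataset layout:

```python
from pathlib import Path

# Build the path from components; Path inserts '\' on Windows and '/' elsewhere.
data_root = Path("datasets") / "coco" / "images" / "train"

# as_posix() gives the forward-slash form regardless of OS, which most
# YAML dataset configs accept on both Windows and Linux.
print(data_root.as_posix())
# -> datasets/coco/images/train
```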