YOLOv3v4-ModelCompression-MultidatasetTraining-Multibackbone
YOLO ModelCompression MultidatasetTraining
I have trained it using tiny-yolov3. Why does the precision drop so fast? Also, when I run python test.py or detect.py, I get a load_state_dict error. Is this caused by the Torch version? Mine is 1.4.0. I tried the fixes suggested online, but it still fails. Looking forward to your reply, thanks.
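One common cause of a `load_state_dict` error when loading weights saved under a different PyTorch version is a key mismatch between the checkpoint and the model. A minimal sketch of surfacing the mismatched keys with `strict=False` (the tiny model here is a stand-in for illustration, not the repo's Darknet network):

```python
import torch
import torch.nn as nn

# Stand-in model for illustration only; substitute the repo's Darknet model.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

# A checkpoint written by a different PyTorch version or model variant may
# carry extra or missing keys; strict=False reports them instead of raising.
state = dict(model.state_dict())
state["extra.weight"] = torch.zeros(1)      # simulate an unexpected key
result = model.load_state_dict(state, strict=False)
print(result.missing_keys, result.unexpected_keys)
```

If the reported keys differ only in prefixes (e.g. `module.` from `DataParallel`), renaming them before loading usually resolves the error.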
Hi. I'm trying to do pruning on a yolov4 model. I've done all the steps. However, I keep running out of memory. I'm using a T4 card with 15 GB...
In the v3 and v4 cfg files, the [yolo] layers appear in exactly the opposite order.
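That ordering difference can be checked directly from the cfg text: in yolov3.cfg the first [yolo] head uses the largest-anchor mask (6,7,8), while in yolov4.cfg the first head uses the smallest (0,1,2). A hedged sketch (a simplified parser, not the repo's own cfg loader) that extracts the mask of each [yolo] block in order:

```python
# Simplified sketch: list the anchor masks of [yolo] blocks in cfg order,
# so code that assumes a fixed head order can verify it instead.
def yolo_masks(cfg_text):
    masks, in_yolo = [], False
    for line in cfg_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            in_yolo = line == "[yolo]"
        elif in_yolo and line.startswith("mask"):
            masks.append([int(m) for m in line.split("=")[1].split(",")])
    return masks

v3_snippet = "[yolo]\nmask = 6,7,8\n[convolutional]\n[yolo]\nmask = 3,4,5\n"
print(yolo_masks(v3_snippet))  # → [[6, 7, 8], [3, 4, 5]]
```

Comparing the mask lists of the two cfg files makes the reversed order explicit before writing any order-dependent post-processing.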
Hi, thanks for this project. I use the command below for training with quantization but run into a problem: python train.py -pt --data cfg/obj.data --batch-size 4 --weights weights/yolov3-tiny.weights --cfg cfg/yolov3-tiny.cfg -sr --s .001...
How can I make feature_s and feature_t the same length?
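In feature distillation, the student feature (feature_s) and teacher feature (feature_t) often differ in spatial size or channel count. One common remedy, sketched here under assumed shapes (not the repo's implementation), is to project and resize the student map to the teacher's shape before computing the loss:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes for illustration: teacher is deeper and coarser.
feature_t = torch.randn(1, 256, 13, 13)   # assumed teacher feature map
feature_s = torch.randn(1, 128, 26, 26)   # assumed student feature map

# Match channels with a 1x1 projection, then spatial size by interpolation.
proj = torch.nn.Conv2d(128, 256, kernel_size=1)
aligned = F.interpolate(proj(feature_s), size=feature_t.shape[2:])
print(aligned.shape)  # → torch.Size([1, 256, 13, 13])

loss = F.mse_loss(aligned, feature_t)      # now shapes agree
```

The 1x1 projection adds trainable parameters to the student, so it should be included in the optimizer during distillation.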
The training command is: python train.py --data data/dior.data --batch-size 4 --weights weights/yolov3-dior-best.weights -pt --cfg cfg/yolov3/yolov3-onDIOR.cfg --epochs 10 --img-size 608. The error is: Traceback (most recent call last): File "train.py", line 956, in train(hyp) # train...
See lutzroeder/netron#544.
Is this problem caused by the environment/package versions?
Hello, could you tell me which quantization methods the branches below correspond to, and whether there are specific papers for them? Thanks! if quantized == 1: modules.add_module('Conv2d', BNFold_QuantizedConv2d_For_FPGA()) elif quantized == 2: modules.add_module('Conv2d', TPSQ_BNFold_QuantizedConv2d_For_FPGA()) elif quantized == 3: modules.add_module('Conv2d', BNFold_COSPTQuantizedConv2d_For_FPGA()) Your README says quantized 2 corresponds to DoReFa, but the code never calls quantized_dorefa.py; is that mapping correct? Which quantization method does quantized=3 use? How is the FPGA parameter used, and what role does it play? Many thanks!
TypeError: forward() missing 1 required positional argument: 'x'
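This error typically means `forward` was invoked without its input tensor, for example by calling `forward` on the class (or calling the module with no argument) instead of calling an instance with the input. A minimal sketch reproducing and fixing it, using a toy module (an assumption for illustration, not the repo's code):

```python
import torch
import torch.nn as nn

class Net(nn.Module):          # toy module for illustration only
    def forward(self, x):
        return x * 2

# Wrong: calling forward on the class binds the tensor to `self`, so `x`
# is missing -> TypeError: forward() missing 1 required positional
# argument: 'x'
# Net.forward(torch.ones(1))

# Right: instantiate the module, then call the instance with the input.
net = Net()
print(net(torch.ones(1)))      # → tensor([2.])
```

The same error appears if a layer list is built with classes instead of instances (e.g. `nn.ReLU` rather than `nn.ReLU()`), so checking module construction sites is usually the fastest diagnosis.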