FairMOTVehicle
No param id.0.weight. If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Hi! I followed the steps in your README to train, and got the following error:

```
/home/ckq/anaconda3/envs/FairMOT/bin/python /home/ckq/git/FairMOTVehicle/src/train.py
Using tensorboardX
Fix size testing.
training chunk_sizes: [1, 2, 1]
The output will be saved to /home/ckq/git/FairMOTVehicle/src/lib/../../exp/mot/default
Setting up data...
Dataset root: /home/ckq/data/even/dataset
dataset summary
OrderedDict([('detrac', 5952.0)])
total # identities: 5952
start index
OrderedDict([('detrac', 0)])
heads: {'hm': 1, 'wh': 2, 'id': 128, 'reg': 2}
opt: Namespace(K=128, arch='dla_34', batch_size=4, cat_spec_wh=False, chunk_sizes=[1, 2, 1], conf_thres=0.4, data_cfg='../src/lib/cfg/detrac.json', data_dir='/home/ckq/data/even/dataset', dataset='jde', debug_dir='/home/ckq/git/FairMOTVehicle/src/lib/../../exp/mot/default/debug', dense_wh=False, det_thres=0.3, down_ratio=4, exp_dir='/home/ckq/git/FairMOTVehicle/src/lib/../../exp/mot', exp_id='default', fix_res=True, gpus=[0, 5, 6], gpus_str='0, 5, 6', head_conv=256, heads={'hm': 1, 'wh': 2, 'id': 128, 'reg': 2}, hide_data_time=False, hm_weight=1, id_loss='ce', id_weight=1, img_size=(1088, 608), input_h=1088, input_res=1088, input_video='../videos/test3.mp4', input_w=608, is_debug=True, keep_res=False, load_model='../models/ctdet_coco_dla_2x.pth', lr=0.0001, lr_step=[20, 27], master_batch_size=1, mean=None, metric='loss', min_box_area=200, mse_loss=False, nID=5952, nms_thres=0.4, norm_wh=False, not_cuda_benchmark=False, not_prefetch_test=False, not_reg_offset=False, num_classes=1, num_epochs=30, num_iters=-1, num_stacks=1, num_workers=8, off_weight=1, output_format='video', output_h=272, output_res=272, output_root='../results', output_w=152, pad=31, print_iter=0, reg_loss='l1', reg_offset=True, reid_dim=128, resume=False, root_dir='/home/ckq/git/FairMOTVehicle/src/lib/../..', save_all=False, save_dir='/home/ckq/git/FairMOTVehicle/src/lib/../../exp/mot/default', seed=317, std=None, task='mot', test=False, test_mot15=False, test_mot16=False, test_mot17=False, test_mot20=False, track_buffer=30, trainval=False, val_intervals=10, val_mot15=False, val_mot16=False, val_mot17=False, val_mot20=False, vis_thresh=0.5, wh_weight=0.1)
Creating model...
loaded ../models/ctdet_coco_dla_2x.pth, epoch 230
Skip loading parameter hm.2.weight, required shape torch.Size([1, 256, 1, 1]), loaded shape torch.Size([80, 256, 1, 1]). If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Skip loading parameter hm.2.bias, required shape torch.Size([1]), loaded shape torch.Size([80]). If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
No param id.0.weight. If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
No param id.0.bias. If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
No param id.2.weight. If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
No param id.2.bias. If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Starting training...
```

Could you tell me where I went wrong?
@Ckq-Sugar This is not an error, just a warning; you can safely ignore it. Does your training produce log output like this, i.e. the real-time training log information?
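For context, the "Skip loading parameter ..." and "No param ..." messages come from loading a COCO-pretrained detector (80-class `hm` head, no re-ID `id` head) into a 1-class vehicle model: mismatched or missing parameters are skipped and kept at their fresh initialization, while everything else is copied. Below is a minimal, framework-agnostic sketch of that partial-loading logic; the function name `load_partial` is hypothetical and parameters are represented only by their shape tuples, not real tensors:

```python
def load_partial(model_state, checkpoint_state):
    """Copy checkpoint params into the model, skipping any parameter
    that is missing from the checkpoint or has a mismatched shape.
    Here a 'parameter' is just its shape tuple, for illustration."""
    loaded = {}
    for name, shape in model_state.items():
        if name not in checkpoint_state:
            # e.g. the re-ID head: brand new, keeps random init
            print(f"No param {name}. Keeping model initialization.")
            loaded[name] = shape
        elif checkpoint_state[name] != shape:
            # e.g. hm head: 80 COCO classes vs. 1 vehicle class
            print(f"Skip loading parameter {name}, required shape {shape}, "
                  f"loaded shape {checkpoint_state[name]}.")
            loaded[name] = shape
        else:
            loaded[name] = checkpoint_state[name]
    return loaded

# COCO checkpoint: 80-class heatmap head, no re-ID head.
checkpoint = {"hm.2.weight": (80, 256, 1, 1), "hm.2.bias": (80,),
              "base.conv.weight": (16, 3, 3, 3)}
# Vehicle model: 1 class, plus a new id head.
model = {"hm.2.weight": (1, 256, 1, 1), "hm.2.bias": (1,),
         "base.conv.weight": (16, 3, 3, 3),
         "id.0.weight": (256, 64, 1, 1)}
result = load_partial(model, checkpoint)
```

Only the backbone weights (`base.conv.weight` here) are actually transferred; the heads that differ are trained from scratch, which is exactly why the messages are warnings rather than errors.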
@CaptainEven Hi, this is the only output I see during training.
Is the log information you mentioned only generated after training finishes?
@Ckq-Sugar When training runs successfully, it prints information like this in real time:

```
mot/default |########################## | train: [1][15176/18030]|Tot: 5:50:16 |ETA: 1:05:55 |loss 0.7844 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15177/18030]|Tot: 5:50:18 |ETA: 1:05:50 |loss 0.7844 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15178/18030]|Tot: 5:50:19 |ETA: 1:05:48 |loss 0.7844 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15179/18030]|Tot: 5:50:21 |ETA: 1:05:38 |loss 0.7843 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15180/18030]|Tot: 5:50:22 |ETA: 1:05:44 |loss 0.7843 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15181/18030]|Tot: 5:50:23 |ETA: 1:05:53 |loss 0.7843 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15182/18030]|Tot: 5:50:25 |ETA: 1:05:52 |loss 0.7843 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15183/18030]|Tot: 5:50:26 |ETA: 1:05:49 |loss 0.7843 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15184/18030]|Tot: 5:50:27 |ETA: 1:05:50 |loss 0.7842 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15185/18030]|Tot: 5:50:29 |ETA: 1:05:45 |loss 0.7842 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15186/18030]|Tot: 5:50:30 |ETA: 1:05:42 |loss 0.7842 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15187/18030]|Tot: 5:50:32 |ETA: 1:05:37 |loss 0.7842 |hm_loss 0.4438 |wh_loss 1.3070 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15188/18030]|Tot: 5:50:33 |ETA: 1:05:33 |loss 0.7841 |hm_loss 0.4438 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15189/18030]|Tot: 5:50:34 |ETA: 1:05:30 |loss 0.7841 |hm_loss 0.4438 |wh_loss 1.3070 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15190/18030]|Tot: 5:50:36 |ETA: 1:05:14 |loss 0.7841 |hm_loss 0.4437 |wh_loss 1.3069 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15191/18030]|Tot: 5:50:37 |ETA: 1:05:03 |loss 0.7840 |hm_loss 0.4437 |wh_loss 1.3070 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15192/18030]|Tot: 5:50:38 |ETA: 1:05:01 |loss 0.7840 |hm_loss 0.4437 |wh_loss 1.3070 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15193/18030]|Tot: 5:50:40 |ETA: 1:04:59 |loss 0.7840 |hm_loss 0.4437 |wh_loss 1.3070 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15194/18030]|Tot: 5:50:41 |ETA: 1:04:58 |loss 0.7840 |hm_loss 0.4437 |wh_loss 1.3070 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15195/18030]|Tot: 5:50:43 |ETA: 1:04:53 |loss 0.7839 |hm_loss 0.4437 |wh_loss 1.3070 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15196/18030]|Tot: 5:50:44 |ETA: 1:04:53 |loss 0.7839 |hm_loss 0.4437 |wh_loss 1.3071 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15197/18030]|Tot: 5:50:45 |ETA: 1:04:53 |loss 0.7839 |hm_loss 0.4437 |wh_loss 1.3071 |off_loss 0.2037 |Data 0.002s
mot/default |########################## | train: [1][15198/18030]|Tot: 5:50:48 |ETA: 1:04:50 |loss 0.7839 |hm_loss 0.4437 |wh_loss 1.3071 |off_loss 0.2037 |Data 1.062s
mot/default |########################## | train: [1][15199/18030]|Tot: 5:50:49 |ETA: 1:10:02 |loss 0.7838 |hm_loss 0.4437 |wh_loss 1.3071 |off_loss 0
```
The formatting above is a bit mangled, but in short it prints the epoch, batch index, losses, and similar information.
@CaptainEven OK, thank you very much. I'll take another look and see where something went wrong.
@Ckq-Sugar Did you manage to solve it? I'm running into the same problem.
@Jintopfy I've solved it. I suggest you go through the requirements in this repo's README and FairMOT's README again; one of the steps was probably wrong. It's been a while, so I've forgotten exactly what I changed to fix it.
@Ckq-Sugar OK, thanks, I'll take another look.