Eason Leo

Results 6 comments of Eason Leo

This may be due to hardware reasons. On some hardware, the quantized model is not compatible with fp16. You can try setting fp16=False; it works for me.
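As a minimal sketch of what the fp16 toggle amounts to (the function and flag names below are hypothetical illustrations, not from the original thread): on some hardware, quantized weights overflow or dequantize incorrectly under half-precision compute, so disabling fp16 falls back to the safer float32 path.

```python
def select_compute_dtype(fp16: bool) -> str:
    """Pick the compute dtype for inference (hypothetical helper).

    Quantized (e.g. int4/int8) weights can overflow or lose accuracy
    when dequantized and computed in half precision on some hardware,
    so fp16=False falls back to full float32.
    """
    return "float16" if fp16 else "float32"

# Disabling fp16 selects the full-precision compute path.
dtype = select_compute_dtype(fp16=False)
```

In libraries that expose an fp16 flag (for example in training or loading arguments), setting it to False has the same effect: computation stays in float32 even if the stored weights are quantized.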

This may be due to hardware reasons. On some hardware, the quantized model is not compatible with fp16. You can try setting fp16=False; it works for me.

This may be due to hardware reasons. On some hardware, the quantized model is not compatible with fp16. You can try setting fp16=False.

This may be due to hardware reasons. On some hardware, the quantized model is not compatible with fp16. You can try setting fp16=False.

You may need to resize the embedding layer so that its size matches the vocabulary.
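As a toy illustration of what resizing means (pure Python, function name hypothetical): the embedding matrix must have exactly one row per token id, so after the vocabulary changes you truncate extra rows or pad new ones. In practice, frameworks expose this directly, e.g. Hugging Face Transformers' model.resize_token_embeddings(len(tokenizer)).

```python
def resize_embedding(weights, new_vocab_size, init_value=0.0):
    """Truncate or pad an embedding matrix (list of rows) so its
    row count matches the new vocabulary size (hypothetical sketch)."""
    dim = len(weights[0])
    # Keep at most new_vocab_size existing rows (copied, not aliased).
    resized = [row[:] for row in weights[:new_vocab_size]]
    # Pad with constant-initialized rows for any newly added tokens.
    while len(resized) < new_vocab_size:
        resized.append([init_value] * dim)
    return resized

grown = resize_embedding([[1.0, 2.0], [3.0, 4.0]], 3)   # adds one row
shrunk = resize_embedding([[1.0, 2.0], [3.0, 4.0]], 1)  # drops one row
```

Real implementations initialize the new rows randomly rather than with a constant, but the shape requirement is the same: the first dimension of the embedding weight must equal the vocabulary size, or indexing a new token id fails.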

The error traceback:

Traceback (most recent call last):
  File "demo_opt.py", line 470
    eval_results = _evaluate_predictions_on_crowdhuman(gt_path, fpath)
  File "demo_opt.py", line 448, in _evaluate_predictions_on_crowdhuman
    database = Database(gt_path, dt_path, target_key, None, mode)
  ...