edpzou

10 comments by edpzou

Hi @mancomao, the prototxt of each model specifies its mean values at the beginning; just subtract that mean from the input. The label file is **ILSVRC2017_val.txt**.
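The mean-subtraction step can be sketched as follows. This is a minimal sketch: the per-channel values below are placeholders, since the real means are read from each model's prototxt.

```python
# Hypothetical per-channel BGR means; the actual values come from the
# beginning of each model's prototxt file.
MEAN_BGR = [104.0, 117.0, 123.0]

def subtract_mean(pixel):
    """Subtract the per-channel mean from one BGR pixel."""
    return [p - m for p, m in zip(pixel, MEAN_BGR)]

print(subtract_mean([128, 128, 128]))  # [24.0, 11.0, 5.0]
```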

This result should be interpreted together with the structure of the CASIA-B dataset; here is an explanatory link: https://blog.csdn.net/qq_42191914/article/details/105473925

I also encountered this problem; I am using torch 1.4.0 (GPU).

After applying https://github.com/princeton-vl/CornerNet/pull/65, I compiled successfully.

> > After applying #65, I compiled successfully
>
> Thank you very much! Could you give me your e-mail address? I think we can discuss this algorithm...

Hi, see https://github.com/princeton-vl/CornerNet-Lite/blob/6a54505d830a9d6afe26e99f0864b5d06d0bbbaf/core/test/cornernet_saccade.py#L320 As the code suggests, the attention map is used by default only the first time an image is run through inference; after that, it is no longer used. https://github.com/princeton-vl/CornerNet-Lite/blob/6a54505d830a9d6afe26e99f0864b5d06d0bbbaf/core/models/py_utils/modules.py#L272

> Hello, have you solved this problem? I have the same question.

@aksenventwo No, I gave up.

> > > Doesn't 88.9 round to 89? Testing the weights released by the author, I got an f-score of 88.69 on Total-Text.
> >
> > Er, it's 88.69, and that should round to 88.7.

He was saying that your 88.9 rounds to 89, haha. And he also said that the author's pretrained weights score even lower than yours.

The simplest approach is to move demo.py into the same directory as siamfc.
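If you would rather not move the file, an alternative is to put the repo root on sys.path before importing. This is a sketch under the assumption that demo.py sits one level below the repo root that contains the siamfc package:

```python
import os
import sys

# Assumed layout: <repo_root>/siamfc/... with demo.py in a subdirectory.
repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, repo_root)

# After this, `from siamfc import ...` resolves even though demo.py
# does not live next to the siamfc package.
```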

An fp16 model cannot be exported on CPU. You can set `device` to `auto` and then change the device of the inputs in the script; this supports exporting ONNX across multiple GPUs and avoids the case where the model does not fit on a single card:

```python
import torch
from transformers.modeling_utils import get_parameter_device

# Create the dummy input on the same device as the model's parameters.
device = get_parameter_device(lm_head_model)
input_data = torch.randn(input_shape, dtype=dtype).to(device)
```

Several other places in the script need the same change; use the snippet above as a reference. Also, merging too many decoder layers will cause OOM; try reducing the number of merged decoder...
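The idea behind `get_parameter_device` is simply to ask where the model's parameters live, so that dummy inputs can be created on the same device. A pure-Python sketch of that behavior, using hypothetical stand-in classes instead of real torch modules:

```python
class FakeParam:
    """Stand-in for a torch parameter that records its device."""
    def __init__(self, device):
        self.device = device

class FakeModel:
    """Stand-in for a model whose parameters may sit on any device."""
    def __init__(self, devices):
        self._params = [FakeParam(d) for d in devices]

    def parameters(self):
        return iter(self._params)

def parameter_device(model):
    # Mirrors the behavior of transformers' get_parameter_device:
    # report the device of the model's first parameter.
    return next(model.parameters()).device

model = FakeModel(["cuda:1", "cuda:1"])
print(parameter_device(model))  # cuda:1
```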