insightface
Why is fp16 not used for these two lines?
https://github.com/deepinsight/insightface/blob/786c4a8327398aecb4cad0cb83ebcefc12b9d3cb/recognition/arcface_torch/backbones/iresnet.py#L160
Probably because this is the most precision-critical part of the network: it maps the convolutional output to the class probabilities, so it is kept in fp32 to avoid fp16 rounding error.
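A small sketch of why the final layer benefits from fp32 (using NumPy rather than PyTorch, purely to illustrate the precision argument): fp16 has only 10 mantissa bits, so at moderate magnitudes it cannot represent small differences between nearby values, which is exactly the kind of difference that matters in the last layer before the loss.

```python
import numpy as np

# At magnitude ~2048, fp16's spacing between representable values is 2.0,
# so a difference of 1.0 between two activations is silently lost.
a16 = np.float16(2048.0)
b16 = np.float16(2049.0)
print(a16 == b16)  # True: fp16 cannot tell these apart

# fp32 (23 mantissa bits) easily preserves the same difference.
a32 = np.float32(2048.0)
b32 = np.float32(2049.0)
print(a32 == b32)  # False: fp32 keeps them distinct
```

In the linked code the convolutional trunk runs under autocast with fp16, and the activations are cast back to fp32 right before the final fc/features layers, matching this reasoning.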