[MFR21] ICCV21 masked face recognition challenge discussion
How to participate: https://github.com/deepinsight/insightface/tree/master/challenges/iccv21-mfr
Submission server: http://iccv21-mfr.com/
Workshop homepage: https://ibug.doc.ic.ac.uk/resources/masked-face-recognition-challenge-workshop-iccv-21/
Any issues can be raised in this thread.
Has anyone tested the accuracy of the baseline model on IJB-C? I keep getting errors when I run the test.
Hello and thank you for organizing the challenge this year! In your rules, you prohibit the use of pre-trained models and any external data, but there is no mention of how you'd enforce this rule. Will you demand training pipelines and cooperation to reproduce the results from top teams?
@vuvko Yes, top-ranked participants must send their solutions to the organizers after submissions close.
Thank you for your quick response! Will you check the reproducibility of the training pipeline? If so, what will you do if the reproduced model gives different results (which can be hardware-dependent)?
@vuvko We will strictly check the reproducibility. There should not be any hardware-dependent issue.
Regarding the IJB-C baseline question above: please join the QQ group or WeChat group and let's discuss the details there.
Hello, uploading the model on the submission page is very slow. I have checked my internet connection and it is fine, with an upload speed greater than 25 MB/s, but the upload to the submission portal is happening at less than 50 KB/s. Can someone please help me with this?
Same problem here, I cannot upload the baseline iresnet50 model due to very slow upload speed.
Looks like the speed is even lower now; less than 10 KB/s is being uploaded from my system. @nttstar Can you please look into this? Thanks in advance 🙂
@manideep2510 @vuvko The submission server is located in China and it seems to have bandwidth problems for other countries. We will try to find a solution soon.
Is there any preprocessing, such as normalization, before the input is fed to the model? The iccv21-mfr page only mentions that the input shape should be 3x112x112 (RGB order).
@nutsam The normalization setting is auto-detected in our onnx_helper.py, please check it.
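For reference, a minimal sketch of feeding a 3x112x112 RGB image to an ONNX model with onnxruntime. The (x - 127.5) / 128 normalization shown here is only the common insightface convention and is an assumption; per the reply above, onnx_helper.py auto-detects the actual setting, so treat this purely as an illustration. The file names are placeholders.

```python
import cv2
import numpy as np
import onnxruntime

# "model.onnx" and "face.jpg" are placeholders for your own files.
session = onnxruntime.InferenceSession("model.onnx", None)
input_name = session.get_inputs()[0].name

img = cv2.imread("face.jpg")                    # BGR, HxWx3, uint8
img = cv2.resize(img, (112, 112))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)      # challenge expects RGB order
# Assumption: the common insightface convention (x - 127.5) / 128.0.
# The evaluation's onnx_helper.py reportedly auto-detects the real setting,
# so this line is illustrative only.
blob = (img.astype(np.float32) - 127.5) / 128.0
blob = np.transpose(blob, (2, 0, 1))[None]      # 1x3x112x112, NCHW, float32
embedding = session.run(None, {input_name: blob})[0]
print(embedding.shape)
```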
@nttstar is there any update on the uploading problem?
@vuvko We're trying!
Hello @nttstar, I just submitted a model and I got:

2021-06-24 21:39:44 | ms1m | ∅ | ∅ | ∅ | FAILED | ∅ | -

The error message just says "load_onnx_failed", but when I check my model using your onnx_helper.py on my PC there is no error message; the output is shown below.
So I would like to ask: what error message appears if my model's inference time is larger than 10 ms?
Thank you,
file_: ms1m_groupface_resnet101.onnx
use onnx-model: onnx/ms1m_groupface_resnet101/ms1m_groupface_resnet101.onnx
input-shape: ['batch_size', 3, 112, 112]
0 Shape_0
1 Constant_1
2 Gather_2
3 Conv_3
4 PRelu_4
5 MaxPool_5
6 BatchNormalization_6
7 Conv_7
max time cost exceed, given 77.5860
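On the time-budget question: the "max time cost exceed" line above suggests the helper measures average inference time. A rough local sanity check with onnxruntime might look like the sketch below; the 10 ms figure comes from the question above, and the server's hardware and exact measurement method may differ, so this is only indicative. The file name is taken from the post above.

```python
import time

import numpy as np
import onnxruntime

# Hypothetical local check; file name taken from the post above.
session = onnxruntime.InferenceSession("ms1m_groupface_resnet101.onnx", None)
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 112, 112).astype(np.float32)

# Warm-up so lazy initialization does not skew the measurement.
for _ in range(10):
    session.run(None, {input_name: dummy})

runs = 100
start = time.time()
for _ in range(runs):
    session.run(None, {input_name: dummy})
avg_ms = (time.time() - start) * 1000.0 / runs
print(f"average inference time: {avg_ms:.2f} ms")
```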
@nqkhai1706 Have you checked with onnxruntime==1.6?
No, my onnxruntime version is 1.8.0. Might it behave differently on 1.6? I will check it. Thank you,
@nttstar I have checked again with onnxruntime==1.6 and the result is the same as with onnxruntime==1.8.0. Do you have any other idea why my model got an error on your side? Thank you,
This is our error:
>>> import onnxruntime
>>> session = onnxruntime.InferenceSession("ms1m_groupface_resnet101.onnx", None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/face/miniconda3/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 206, in __init__
self._create_inference_session(providers, provider_options)
File "/home/face/miniconda3/lib/python3.7/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 226, in _create_inference_session
sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /mnt/data/models/1624534624858999668/ms1m_groupface_resnet101.onnx failed:Protobuf parsing failed.
The environment of our system is as follows:
Driver Version: 450.80.02
Cuda compilation tools, release 10.2, V10.2.89
onnx==1.8.0
onnx-simplifier==0.3.5
onnxoptimizer==0.2.5
onnxruntime-gpu==1.6.0
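For what it's worth, INVALID_PROTOBUF means the .onnx file itself could not be parsed, which usually points to a corrupted or truncated file (for example, an interrupted upload) rather than a runtime version mismatch. A quick local integrity check is sketched below; the file name is taken from the post above, and comparing the size and checksum against the uploaded copy is an extra suggestion, not part of the official workflow.

```python
import hashlib
import os

import onnx

# File name taken from the post above; adjust for your own model.
path = "ms1m_groupface_resnet101.onnx"

# Parse and validate the protobuf; this raises if the file is malformed.
model = onnx.load(path)
onnx.checker.check_model(model)

# Compare size and checksum against the copy you uploaded to rule out a
# truncated transfer.
size_mb = os.path.getsize(path) / (1024 * 1024)
md5 = hashlib.md5(open(path, "rb").read()).hexdigest()
print(f"opset {model.opset_import[0].version}, {size_mb:.1f} MB, md5 {md5}")
```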
During evaluation, can we know whether the current image is masked?
@nttstar Even though I have not submitted any model today or yesterday, the portal is still showing:
Today, you have submitted 3 times, 0 remaining.
Can you please look into this? Thanks in advance.
@manideep2510 What is your username?
@zhanglaplace Not at the moment; whether an image is masked is not provided during evaluation.
My username is sherlock.
The submission counter is probably affected by a time-zone bug; we'll fix it shortly.
Hi, to help us debug the submission-count issue, please provide some information about your browser, such as which browser you use and its version.
@manideep2510 Hi, @JesseEisen is helping us build the evaluation system, please let him know your detailed issue.
I am using Mozilla Firefox Version 89.02. The portal is now showing:
Today, you have submitted 8 times, -5 remaining.
But I am able to upload and make submissions.
@manideep2510 Can you try Chrome?
Yeah, the exact same thing is happening on Chrome as well. Now I am not able to upload or submit again.