Shang Wang
> Maybe it causes [NVIDIA/DeepLearningExamples#1091](https://github.com/NVIDIA/DeepLearningExamples/issues/1091) ? That's right.
Yeah, it's already supported. Below is an example for resnet: depending on which optimizer is used, different rule files are checked against: https://github.com/mlcommons/logging/blob/master/mlperf_logging/compliance_checker/1.0.0/closed_resnet.yaml#L11 https://github.com/mlcommons/logging/blob/master/mlperf_logging/compliance_checker/1.0.0/closed_resnet_lars.yaml https://github.com/mlcommons/logging/blob/master/mlperf_logging/compliance_checker/1.0.0/closed_resnet_sgd.yaml You could also add a "BEGIN"...
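To make the idea concrete, here is a minimal Python sketch (hypothetical function and dict names, not the actual compliance_checker code) of selecting the optimizer-specific rules file for resnet, on the assumption that the common rules live in closed_resnet.yaml and the per-optimizer files add the optimizer-specific checks:

```python
# Hypothetical sketch: map the optimizer named in the submission logs to
# the rules file the checker should validate against. Not the real
# compliance_checker logic; file names are taken from the links above.
def pick_resnet_rules(optimizer: str) -> str:
    rules = {
        "lars": "closed_resnet_lars.yaml",
        "sgd": "closed_resnet_sgd.yaml",
    }
    if optimizer not in rules:
        raise ValueError(f"unknown optimizer: {optimizer}")
    return rules[optimizer]

print(pick_resnet_rules("lars"))  # closed_resnet_lars.yaml
```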
I'll be addressing this issue; it is scheduled for v2.0.
Looks like the training WG no longer needs this. Closing this issue now.
Sorry, clarification - this issue should be deferred after v1.1, not discarded. I'm removing the `v1.1` label.
I like (1). We can stitch the encryption/decryption engine, package_checker, rcp_checker and result_summarizer into a single client. When the user clicks "submit", the client will run the checkers and result_summarizer...
Hmmm... If one submission has two implementations, how do we know which implementation is the "real" one? And why would the "unused" implementation be submitted in the first place?
Maybe the title should be changed to "[Seed Checker] Unable to handle multiple implementations." "Bug on seed checker" is not very informative.
Do you know if there is a rule that says both the results dir and the implementation dir must end with a "-{framework}" suffix? If so, maybe I can perform some...
https://github.com/mlcommons/training_results_v0.7/tree/master/Fujitsu/benchmarks/resnet/implementations/implementation_closed is an example that I'm worried about. There's no way to figure out what implementation it is by looking at the path alone.
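Assuming such a "-{framework}" suffix rule did hold, the check could be sketched as below (hypothetical function name and framework list, not the actual seed-checker code); the Fujitsu path above is exactly the case where it returns nothing:

```python
# Hypothetical sketch: recover the framework from a directory name that is
# assumed to end in "-{framework}". The framework list is illustrative.
KNOWN_FRAMEWORKS = ("pytorch", "tensorflow", "mxnet")

def framework_of(dir_name: str):
    """Return the framework suffix of a dir like 'resnet-pytorch', or None."""
    for fw in KNOWN_FRAMEWORKS:
        if dir_name.endswith("-" + fw):
            return fw
    return None

print(framework_of("resnet-pytorch"))        # pytorch
print(framework_of("implementation_closed")) # None -- the worrying case
```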