neural-compressor
What kind of accuracy logic is used in neural_compressor.experimental.Benchmark?
The Reference Kit here
https://github.com/oneapi-src/visual-quality-inspection/blob/main/src/intel_neural_compressor/neural_compressor_inference.py
performs the following import:
from neural_compressor.experimental import Benchmark
and then runs the following code:
evaluator = Benchmark(config_path)   # config_path points to the INC YAML config
evaluator.model = int8_model         # quantized (INT8) model under test
# create benchmark dataloader like examples/tensorflow/qat/benchmark.py
evaluator.b_dataloader = test_loader # dataloader used for evaluation
evaluator('accuracy')                # run the benchmark in accuracy mode

print("*" * 50)
print("Evaluating the FP32 Model")
print("*" * 50)

evaluator = Benchmark(config_path)
evaluator.model = model              # original FP32 model
# create benchmark dataloader like examples/tensorflow/qat/benchmark.py
evaluator.b_dataloader = test_loader
evaluator('accuracy')                # same accuracy-mode benchmark on the FP32 model
What kind of accuracy logic is being run behind the scenes?
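My current mental model (please correct me if this is wrong) is that in accuracy mode the evaluator loops over b_dataloader, runs evaluator.model on each batch, and feeds the predictions into whatever metric is declared in the YAML passed as config_path. A rough conceptual sketch of what I imagine happens, assuming a top-1 classification metric (this is NOT the actual neural_compressor code, just an illustration; model_fn and dataloader are hypothetical stand-ins for the model forward pass and test_loader):

import numpy as np

def conceptual_accuracy(model_fn, dataloader):
    # dataloader yields (inputs, labels) batches, like test_loader does
    correct, total = 0, 0
    for inputs, labels in dataloader:
        outputs = model_fn(inputs)           # forward pass on the batch
        preds = np.argmax(outputs, axis=-1)  # assuming classification-style outputs
        labels = np.asarray(labels)
        correct += int((preds == labels).sum())
        total += labels.shape[0]
    return correct / total                   # e.g. top-1 accuracy

Is it the metric defined in the evaluation/accuracy section of the YAML config that determines this logic, or something else?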