mlos_bench: if a result.___ value is -1, register the trial as FAILED
Sometimes benchbase will report -1 for latency, throughput, etc., but we still register the trial as SUCCEEDED.
This is more an error in the client-side benchbase output processing scripts than in MLOS itself. There are some related improvements around reporting the success/failure of a trial (see below), but I think the client side should be investigated first.
See Also:
- #523
- #671
- #464
To put it another way, there's no great way for MLOS to know the semantics of the "Latency (microseconds) 99th Percentile" metric that benchbase reports.
I suppose one thing we could do is extend the `objective_targets` descriptor to include some "bounds" info (e.g., `"min": 1, "max": null`) and reject the trial if those bounds aren't satisfied, though that would mostly just help in surfacing the error in the benchbase scripts.
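For illustration, here's a minimal sketch of what such a bounds check might look like. The `bounds` key, the descriptor shape, and the `trial_failed` helper are all hypothetical, not part of the current mlos_bench schema:

```python
# Hypothetical extension of objective_targets entries with optional bounds.
# Neither the "bounds" key nor this exact descriptor shape exists in
# mlos_bench today; this is only a sketch of the idea.
OBJECTIVE_TARGETS = {
    "Latency (microseconds) 99th Percentile": {
        "minimize": True,
        # Reject values outside [min, max]; None means unbounded.
        "bounds": {"min": 1, "max": None},
    },
    "Throughput (requests/second)": {
        "minimize": False,
        "bounds": {"min": 0, "max": None},
    },
}


def trial_failed(results: dict[str, float]) -> bool:
    """Return True if any objective value is missing or falls outside its
    declared bounds (e.g., benchbase reporting -1 for latency), so the
    trial can be registered as FAILED instead of SUCCEEDED."""
    for metric, descriptor in OBJECTIVE_TARGETS.items():
        value = results.get(metric)
        if value is None:
            return True  # a missing objective value also counts as failure
        bounds = descriptor.get("bounds", {})
        lo, hi = bounds.get("min"), bounds.get("max")
        if lo is not None and value < lo:
            return True
        if hi is not None and value > hi:
            return True
    return False


# Example: benchbase emitting -1 for latency would now be caught.
print(trial_failed({
    "Latency (microseconds) 99th Percentile": -1,
    "Throughput (requests/second)": 1234.5,
}))  # True
```

As noted above, though, this would mainly surface the client-side scripting error rather than fix it at the source.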