NimbusML
Improve inference performance with loaded TransformerChain ML.NET model
Fix #370
PR #230 introduced the ability to load and score ML.NET models trained in the new ML.NET `TransformerChain` serialization format. This is done by checking whether "TransformerChain" exists in the archive members. Currently, this check runs every time the `test`, `predict`, `predict_proba`, and `decision_function` methods call `_predict`. This PR improves performance by checking for "TransformerChain" only once, when the model is loaded.
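A minimal sketch of the idea (class and attribute names here are hypothetical, not NimbusML's actual internals): the ML.NET model file is a zip archive, so the membership check can be performed once in the loader and cached as a flag, rather than re-reading the archive on every prediction call.

```python
import zipfile


class LoadedModel:
    """Sketch: cache the TransformerChain check at load time
    instead of repeating it on every predict-style call."""

    def __init__(self, model_path):
        self.model_path = model_path
        # Inspect the archive members a single time, when the model loads.
        with zipfile.ZipFile(model_path) as archive:
            self._is_transformer_chain = any(
                "TransformerChain" in name for name in archive.namelist()
            )

    def predict(self, data):
        # Reuse the cached flag; no per-call archive inspection needed.
        if self._is_transformer_chain:
            pass  # score via the TransformerChain code path
        else:
            pass  # score via the legacy model code path
```

Every subsequent `predict`, `predict_proba`, or `decision_function` call then reads the cached boolean instead of reopening the zip archive.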
@ganik, should I add a small ML.NET model file to the project and write a test for inference with a loaded ML.NET model?