
Improve inference performance with loaded TransformerChain ML.NET model

Open najeeb-kazmi opened this issue 5 years ago • 2 comments

Fix #370

PR #230 introduced the ability to load and score ML.NET models saved in the new ML.NET TransformerChain serialization format. This is done by checking whether "TransformerChain" exists in the archive members. Currently, that check runs every time the test, predict, predict_proba, and decision_function methods call _predict. This PR improves performance by checking for "TransformerChain" only once, when the model is loaded.
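For illustration, here is a minimal sketch of the idea, assuming the saved model is a zip archive inspected with Python's zipfile module; the helper and attribute names below are hypothetical and not necessarily NimbusML's actual internals:

```python
import zipfile

# Hypothetical helper: names like _contains_transformer_chain and
# _is_transformer_chain are illustrative only.
def _contains_transformer_chain(model_path):
    """Return True if the saved model archive uses the ML.NET TransformerChain format."""
    with zipfile.ZipFile(model_path) as archive:
        return any("TransformerChain" in name for name in archive.namelist())


class Pipeline:
    def load_model(self, model_path):
        self.model_file = model_path
        # Inspect the archive once, at load time, and cache the result.
        self._is_transformer_chain = _contains_transformer_chain(model_path)

    def _predict(self, X):
        # test, predict, predict_proba, and decision_function all route through
        # here; they reuse the cached flag instead of re-opening the archive.
        if self._is_transformer_chain:
            ...  # score through the TransformerChain loading path
        else:
            ...  # score through the legacy NimbusML model path
```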

najeeb-kazmi · Nov 20 '19 23:11

@ganik should I add a small ML.NET model file to the project and write a test for inferencing with a loaded ML.NET model?
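A minimal sketch of what such a test might look like, assuming a small TransformerChain model file is checked into the repository; the file path and column names are placeholders, not the actual test that was added:

```python
import os
import pandas as pd
from nimbusml import Pipeline

def test_predict_with_loaded_mlnet_model():
    # Path and schema are assumptions for illustration only.
    model_path = os.path.join("data", "mlnet_transformer_chain_model.zip")

    pipeline = Pipeline()
    pipeline.load_model(model_path)

    # Column names must match the inputs the ML.NET model was trained with.
    data = pd.DataFrame({"Feature1": [0.1, 0.5], "Feature2": [1.0, 2.0]})
    predictions = pipeline.predict(data)
    assert len(predictions) == len(data)
```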

najeeb-kazmi · Nov 20 '19 23:11

yes, that would be great


In reply to: 556551197

ganik · Nov 22 '19 21:11