Xavier Dupré
This is a tricky case. Names must be different, and they almost always are. We did not have any unit test covering that specific case, which is why it...
You should write `convert_xgboost(clf, initial_types=initial_type, target_opset=13)`.
Are you using MultiOutputClassifier? In that case, you should specify the option zipmap=False.
Sorry for the late answer. FeatureBagging is outside of scikit-learn, so it is out of the library's scope. It could be possible to add it to the documentation as an example....
There has been limited work on coremltools, and the package is still tested against version 3.1. It still fails with more recent versions.
A couple of functions use a static counter to give unique names to variables. The structure is the same but some names are different. There is no easy way...
The converter needs to give a unique name to every intermediate result. If a pipeline contains two scalers, the second one must have a different result name. The static count...
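A minimal sketch of the idea (not the actual converter code, names are made up for illustration): a module-level counter hands out a fresh suffix on every request, so two identical operators in one pipeline end up with distinct result names.

```python
import itertools

# Static counter shared by all calls: each request for a name
# consumes the next integer, so names never repeat within a run.
_counter = itertools.count()

def unique_name(base):
    """Return a result name that is unique across the whole conversion."""
    return f"{base}_{next(_counter)}"

# Two scalers in the same pipeline get different result names,
# even though they ask with the same base name.
first = unique_name("scaler_output")
second = unique_name("scaler_output")
print(first, second)
```

Because the counter is static, converting the same pipeline twice in one process produces structurally identical graphs whose names differ only by the counter value, which is exactly why the outputs mentioned above match in structure but not in names.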
In our case, we assume the pipeline remains alive until the conversion ends, so the use of id() should not produce any conflicts. But I agree, id(.) should be removed.
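A small illustration of the assumption above (not converter code): Python guarantees that id() is unique only among objects that are alive at the same time, so naming by id() is safe exactly as long as every pipeline object is kept alive for the whole conversion; once objects are garbage-collected, their ids may be reused.

```python
# While all objects are simultaneously alive, their ids cannot collide.
objs = [object() for _ in range(100)]
ids = {id(o) for o in objs}
print(len(ids))  # 100: no collisions among live objects
```

After `objs` is released, CPython may hand the same id to a brand-new object, which is the fragility that makes removing id() worthwhile.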
No, the bug is still there. When the model is unpickled, xgboost does not restore exactly the same object, and the converter cannot find an attribute it relies on.