Alex Rogozhnikov

187 comments by Alex Rogozhnikov

It should be possible; just try and see. hep_ml has a more general loss format, see here: https://github.com/arogozhnikov/hep_ml/blob/master/hep_ml/losses.py#L88-L138 You need `init`, `fit`, and `prepare_tree_params` within xgboost. The difference from other methods is...
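For illustration, here is a minimal sketch of what such a loss object might look like. The method names mirror the interface in hep_ml's `losses.py` (`fit`, `prepare_tree_params`), but this is a standalone toy for a plain log-loss, not the actual hep_ml base class, and details there may differ:

```python
import math

class LogLossSketch:
    """Toy gradient-boosting loss following the fit / prepare_tree_params
    shape referenced above. Hypothetical illustration only."""

    def fit(self, X, y, sample_weight):
        # remember the labels (0/1) and per-sample weights
        # so gradients can be computed against later predictions
        self.y = y
        self.sample_weight = sample_weight
        return self

    def prepare_tree_params(self, y_pred):
        # negative gradient of log-loss w.r.t. current raw predictions:
        # residual = y - sigmoid(y_pred); the next tree is fit to these
        residuals = [yi - 1.0 / (1.0 + math.exp(-pi))
                     for yi, pi in zip(self.y, y_pred)]
        return residuals, self.sample_weight
```

With zero initial predictions, the residuals are simply `y - 0.5`, which is the usual first boosting step for log-loss.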

Hi Tommy, maybe I misunderstand what you do, but it looks like you are looking at the individual predictions of uBoostBDTs, each for a specific efficiency. If that's true, I'm a bit...

Yeah, the thinking is that you use the full model (that is, uBoost, which is an ensemble of ensembles). It is notoriously slow, but that's how it was designed. > all the...

I see, the reason is this squashing function: https://github.com/arogozhnikov/hep_ml/blob/master/hep_ml/uboost.py#L540 It's not necessary, and you can remove it if you don't like the range (just return `score / self.efficiency_steps`). An important...

> As they could be particularly important, I'm wondering why they are not included. Is there any particular reason? Not really, they can be exposed. Just at that time...

See the deprecation message from the sklearn documentation:

```
Deprecated since version 1.0: Criterion "mse" was deprecated in v1.0 and will be removed in version 1.2. Use criterion="squared_error" which is equivalent....
```

Can't think of any other issue. Make sure to restart the kernel, and verify the sklearn version with:

```python
print(sklearn.__version__)
```

Hi @jalvear2dxc, I'm not completely following which classifiers you compare, but the large difference you report is possible. Naturally, reweighting would remove discrepancies that are picked up by models with a tree configuration...

@jalvear2dxc yes, that seems to match what I suggested.

Poor picklability of keras is a long-known issue (you can google keras together with the same error message). You may happen to have some variables being passed e.g. in...
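The failure mode is not specific to keras: stdlib `pickle` cannot serialize certain objects (lambdas, local functions, open handles), and a model object that holds a reference to one anywhere in its attribute graph fails with the same class of error. A small stdlib-only sketch of the pattern (hypothetical helper name, no keras involved):

```python
import pickle

def try_pickle(obj):
    """Return True if obj round-trips through pickle, else False."""
    try:
        pickle.loads(pickle.dumps(obj))
        return True
    except Exception:
        return False

# plain data structures pickle fine
assert try_pickle({"weights": [0.1, 0.2]})

# a lambda captured somewhere in the object graph breaks pickling,
# analogous to a model holding an unpicklable callback or session
assert not try_pickle(lambda x: x + 1)
```

This is why the usual advice is to use the library's own save/load mechanism for models rather than generic pickling.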