Anand Avati
You would want to make at least two changes to your code: 1. The base learner needs to be a Python constructor, so that each boosting stage gets its own...
Here's one way. Instead of: ``` learner = xgb.XGBRegressor(...) ``` do: ``` learner = lambda: xgb.XGBRegressor(...) ``` so that calling `learner()` returns a fresh instance.
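To illustrate why a constructor is needed rather than a single instance, here is a minimal toy sketch (it does not use NGBoost or XGBoost internals; `ToyLearner` is a made-up stand-in for the base learner):

```python
# Sketch: a base learner passed as an instance is shared and mutated by
# every boosting stage, while a factory (lambda) yields a fresh learner
# per stage.
class ToyLearner:
    def __init__(self):
        self.fitted_on = None

    def fit(self, residuals):
        self.fitted_on = residuals  # overwrites any earlier fit
        return self

# Shared instance: every "stage" refits the same object.
shared = ToyLearner()
stages_shared = [shared.fit(r) for r in ([1], [2], [3])]
print(all(s is shared for s in stages_shared))  # True: one object reused

# Factory: each stage constructs its own learner, as the advice suggests.
make_learner = lambda: ToyLearner()
stages_fresh = [make_learner().fit(r) for r in ([1], [2], [3])]
print(len({id(s) for s in stages_fresh}))  # 3 distinct learners
```

With the shared instance, only the last stage's fit survives; with the factory, each stage keeps its own fitted model.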
Same suggestion as my previous comment: wrap the learner in a `lambda` as shown, whether it is XGB or GBR.
@yutayamazaki could you describe what error you got with the two classes?
A good starting point might be https://github.com/stanfordmlgroup/ngboost/blob/master/scripts/run_empirical_survival.sh, which currently works for right-censoring.
Both your questions revolve around using the `\mu`, `\sigma^2` parametrization for the distribution (for gradients and Fisher information). In NGBoost the parametrization is `\mu`, `\log \sigma^2`. If you work out the...
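For concreteness, the standard computation under the `\mu`, `\log \sigma^2` parametrization works out as follows (a sketch for the Normal case, which is what the `\mu`, `\sigma^2` question implies):

```latex
With $\eta = \log \sigma^2$, the log-likelihood of $y \sim \mathcal{N}(\mu, \sigma^2)$ is
\[
\log p(y \mid \mu, \eta) = -\tfrac{1}{2}\eta - \frac{(y-\mu)^2}{2e^{\eta}} + \text{const},
\]
with gradients
\[
\frac{\partial \log p}{\partial \mu} = \frac{y-\mu}{e^{\eta}}, \qquad
\frac{\partial \log p}{\partial \eta} = -\tfrac{1}{2} + \frac{(y-\mu)^2}{2e^{\eta}},
\]
and Fisher information
\[
\mathcal{I}(\mu, \eta) = \begin{pmatrix} e^{-\eta} & 0 \\ 0 & \tfrac{1}{2} \end{pmatrix}.
\]
```

Note the Fisher information in $\eta$ is a constant $\tfrac{1}{2}$, unlike the $\tfrac{1}{2\sigma^4}$ obtained under the `\sigma^2` parametrization.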