Raphael Sonabend-Friend
Possibly; I'm unsure. It isn't yet clear whether we would need to deliberately duplicate measures, since each may require a particular prediction type.
I actually don't think it's a problem. If you look at how measures are named, e.g. `surv.graf` and `classif.mmce`, then it seems sensible to have `surv.logloss`, `density.logloss`, and `regr.logloss`.
(one can always inherit from the other - efficient although maybe slightly messy)
There's no reason someone couldn't use `regr.logloss` on a `PredictionDensity` as long as both have the same predict type, but that would just be confusing to the user.
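To make the inheritance point concrete, here is a hedged sketch (function and class names are illustrative, not the actual mlr3 API): both a `regr.logloss` and a `density.logloss` measure could delegate to one shared log-loss core, so the duplication is only in naming, not implementation.

```r
# Illustrative only: a shared log-loss core that hypothetical
# regr.logloss / density.logloss measures could both wrap.
logloss_core <- function(pdf_at_truth, eps = 1e-15) {
  # clamp to eps to avoid log(0) for zero-density predictions
  -mean(log(pmax(pdf_at_truth, eps)))
}

# a perfect prediction (density 1 at every truth) gives loss 0
logloss_core(c(1, 1, 1))
```

The two public measures would then differ only in how they extract `pdf_at_truth` from their respective prediction objects.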
> Just a general comment, is the further inheritance step, i.e. `PipeOpTransformer`, necessary?

Strictly? No. Again, we could just collect these using `@family`, but the current implementation is similar to `PipeOpEnsemble`...
> "simulated predictions"? What do you mean with that, i.e., what is being simulated?
> Don't you simply just want to plot the actual predictions (the distributions) in a distr...
Hi, yes: every learner with the `distr` predict type returns distributions as [distr6](github.com/alan-turing-institute/distr6) objects, which have methods for plotting; you could also use `matplot` to do this manually (we'll...
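The manual `matplot` route might look like the sketch below. This is illustrative only: the exponential curves stand in for predicted survival distributions, which in practice you would evaluate from the distr6 objects in a `distr` prediction (e.g. via `1 - cdf(t)`).

```r
# Illustrative only: manually plotting a few survival curves with matplot.
# The synthetic exponential curves stand in for predicted distributions.
times <- seq(0, 50, by = 1)

# three survival curves S(t) = exp(-rate * t), one column per "prediction"
surv <- sapply(c(0.05, 0.1, 0.2), function(rate) exp(-rate * times))

matplot(times, surv, type = "l", lty = 1,
        xlab = "t", ylab = "S(t)",
        main = "Predicted survival curves")
```

Each column of `surv` is one individual's curve, which is what `matplot` expects.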
No...but I agree this is a very good idea
Now I'm second-guessing myself. Is that actually useful? What does the mean (or median) of predicted survival with confidence intervals actually tell you? ML in general is about individual...
> That makes sense!

Which option makes sense?

> How can we add interaction effects?

For what?

> Suppose we perform a benchmark experiment, then how should we use the...