Anthony Blaom, PhD

Very sorry @juliohm, but I just don't have the bandwidth right now to work on LossFunctions.jl. My post includes an MWE, which should be easy enough to check.

Decision: What do we do about measures like `mape` for which we want to drop terms whose computation is unstable? That is, what do we do if we are...
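
One possible reading of "drop some terms", as a hedged sketch (the `tol` threshold and the function name are assumptions for illustration, not an existing API):

```julia
# Hypothetical sketch only: a MAPE that drops terms where the true value is
# too close to zero for the ratio to be numerically meaningful. The `tol`
# keyword is an assumption, not part of any existing API.
function mape_dropping_unstable(ŷ, y; tol=eps())
    s, n = 0.0, 0
    for (ŷi, yi) in zip(ŷ, y)
        abs(yi) > tol || continue   # skip the unstable term
        s += abs((ŷi - yi) / yi)
        n += 1
    end
    return s / n
end

mape_dropping_unstable([1.1, 2.0, 3.0], [1.0, 0.0, 3.0])  # the y == 0 term is dropped
```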

Some other improvements on my wish list: 1. Export all measure *types* (such as `RMS`, `CrossEntropy`, and so forth) and always use the explicit instantiations (such as `RMS()`, `CrossEntropy(eps=1e-7)`, `BrierScore(distribution=Normal)`,...
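
To illustrate the pattern meant here (the struct and alias names below are made up for the sketch, not the package's actual types):

```julia
# Purely illustrative: a measure as a callable struct, with the lowercase
# alias being nothing more than a pre-built instance of the exported type.
struct RMSSketch end                           # hypothetical stand-in for `RMS`
(::RMSSketch)(ŷ, y) = sqrt(sum((ŷ .- y).^2) / length(y))

const rms_sketch = RMSSketch()                 # alias-style usage

rms_sketch([1.0, 2.0], [1.0, 3.0])             # ≈ 0.707
```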

> A few features that would be nice:
>
> 1. similar to `models(matching(X,y))` I would like `measures()` to give me all measures available for regression (continuous y).

Good idea:...

> There is also an issue w/ asymmetry for percent errors: `mape(ŷ, y) != mape(y, ŷ)`

Right. The API specifies that `yhat` goes first. We could give `MAPE` a field `compare_with_prediction`...
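
To make the asymmetry concrete, a quick hand-rolled sketch (the package's own `mape` may differ in details):

```julia
# Hand-rolled MAPE, for illustrating the asymmetry only: the denominator is
# always the second argument, so swapping arguments changes the result.
mape_sketch(ŷ, y) = sum(abs.((ŷ .- y) ./ y)) / length(y)

ŷ, y = [2.0, 4.0], [1.0, 5.0]
mape_sketch(ŷ, y)   # (1/1 + 1/5)/2 = 0.600
mape_sketch(y, ŷ)   # (1/2 + 1/4)/2 = 0.375
```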

Regarding having a hierarchy of types: we don't really need this. You can do queries based on the traits. I think there is a tendency towards traits, because other packages...
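
For example, a trait-based selection could look something like this (a hedged sketch: the trait names follow the `info` output quoted in the next comment, but the exact behaviour of `measures()` here is an assumption):

```julia
# Select measures by their traits rather than by where they sit in an
# abstract type hierarchy. Assumes measures() returns objects exposing the
# traits shown by info(rms) below.
weighted_regression_measures = filter(measures()) do m
    m.supports_weights && AbstractVector{Continuous} <: m.target_scitype
end
```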

You can see all the traits in the current API with this example:

```julia
julia> info(rms)
root mean squared; aliases: `rms`.
(name = "rms",
 target_scitype = Union{AbstractArray{Continuous,1}, AbstractArray{Count,1}},
 supports_weights = ...
```

Sure. Measures for a `Finite` univariate target (a.k.a. "classification"):

```julia
julia> measures(m -> AbstractVector{Finite} AbstractVector{Continuous}
```
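
The query above is cut off; a hedged reconstruction, filtering on the `target_scitype` trait (the exact predicate is an assumption), which also covers the regression case requested earlier:

```julia
# Classification: measures whose target_scitype covers a vector of Finite values
measures(m -> AbstractVector{Finite} <: m.target_scitype)

# Regression: the analogous query for a Continuous target
measures(m -> AbstractVector{Continuous} <: m.target_scitype)
```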

There is also EvalMetrics.jl to look at; see https://github.com/alan-turing-institute/MLJBase.jl/issues/316