Multivariate Loss

Evizero opened this issue 7 years ago · 8 comments

One bridge we have to cross sooner or later is multiclass problems. There are multiclass extensions or formulations for a couple of the losses that we have.

A particularly interesting example is the hinge loss. In a multinomial setting the targets could be class indices, such as `[1, 2, 3, 1]`, which would violate our current idea of a `targetdomain` being `[-1, 1]`.

One solution could be to think of the multivariate version as separate from the binary one. For example, we could lift it like this: `Multivariate{L1HingeLoss}`. This wrapper could then have its own `targetdomain` and other properties. It would also avoid potential ambiguities when it comes to dispatching on the types of the targets and outputs; I am not sure we could reliably distinguish a multivariate from a binary case based on the parameter types alone. A minimal sketch of that lifting is below (illustrative only; `Multivariate`, `targetdomain`, and the stub types are not existing API):
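
# Illustrative stubs standing in for the real loss types.
abstract type SupervisedLoss end
struct L1HingeLoss <: SupervisedLoss end

# The proposed wrapper: lifts a binary loss into a multiclass setting.
struct Multivariate{L<:SupervisedLoss} <: SupervisedLoss end

# The binary loss keeps its margin-based target domain...
targetdomain(::L1HingeLoss) = (-1, 1)
# ...while the lifted version declares class indices 1:k as its targets,
# which also gives dispatch an unambiguous type to hook onto.
targetdomain(::Multivariate{L1HingeLoss}) = :class_indices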

Evizero avatar Aug 20 '16 22:08 Evizero

Would that make softmax `Multivariate{LogitMarginLoss}`? Implementing that seems like a reasonable place to start. If I'm not mistaken, it would look a bit like this?

# Sketch: `Softmax` is a placeholder type; `target` is the class index,
# `output` the vector of raw per-class scores.
struct Softmax end
function value(::Softmax, target::Int, output::AbstractVector{T}) where {T<:Number}
    return logsumexp(output) - output[target]
end

We should have MLDataUtils (or somewhere) define log-sum-exp with the standard max-subtraction trick:

"""
    logsumexp(x)

Computes `log(sum(exp(x)))` of a vector `x` in a numerically stable manner
"""
function logsumexp{T<:Number}(x::AbstractVector{T})
    m = maximum(x) # subtracting m prevents overflow
    sumexp = zero(T)
    for i in eachindex(x)
        sumexp += exp(x[i]-m)
    end
    return log(sumexp) + m
end
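
Putting the two sketches together, a quick sanity check (values rounded):

s = [0.5, 2.0, -1.0]
value(Softmax(), 2, s)        # ≈ 0.2413, i.e. -log(softmax(s)[2])

logsumexp([1000.0, 1000.0])   # ≈ 1000.6931; naive log(sum(exp(x))) overflows to Inf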

All of this seems a bit different from (incompatible with?) what you had in mind for the multivariate hinge loss; could you expand on your thoughts there?

ahwillia avatar Aug 24 '16 01:08 ahwillia

Actually, that sounds like a good idea. As far as I know, the multinomial loss doesn't require the outputs to be produced with `sigmoid(..)`, so it should work with a linear or affine prediction function. In other words, raw per-class scores can feed the loss directly, roughly like this (a sketch; `W`, `b`, and `x` are hypothetical model parameters, and `logsumexp` is the function above):
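
W, b, x = randn(3, 4), randn(3), randn(4)
scores = W * x .+ b                        # one raw score per class, no link function
target = 2
loss = logsumexp(scores) - scores[target]  # multinomial/softmax loss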

We should have MLDataUtils or somewhere define log-sum-exp with the standard trick

If it is not in StatsBase yet, then we should define it in LearnBase, in my opinion.

Evizero avatar Aug 24 '16 04:08 Evizero

If it is not in StatsBase yet, then we should define it in LearnBase in my opinion

Found it!

https://github.com/JuliaStats/StatsFuns.jl/blob/f63bc3dc55e1ffbe3edeaeb910c4034c529a003a/src/basicfuns.jl#L123
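
So the hand-rolled version above could simply be replaced with the StatsFuns export:

using StatsFuns: logsumexp

logsumexp([1.0, 2.0, 3.0])  # ≈ 3.4076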

ahwillia avatar Aug 24 '16 04:08 ahwillia

We could also consider a shorter `Mv{L1HingeLoss}`, but maybe two letters is cutting it a bit short.

Evizero avatar Aug 24 '16 05:08 Evizero

Partially done. For distance-based losses this can now be achieved with the average modes; a hedged sketch of that usage follows (signatures from memory, so treat the exact API as approximate). No support for multinomial classification losses yet, though.
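
using LossFunctions

targets = rand(3, 5)   # 3 response dimensions, 5 observations in columns
outputs = rand(3, 5)

# Aggregate the elementwise distance loss, treating columns as observations.
value(L2DistLoss(), targets, outputs, AvgMode.Sum(), ObsDim.Last())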

Evizero avatar Jan 05 '17 13:01 Evizero

https://github.com/madeleineudell/LowRankModels.jl has implementations of several multivariate loss functions for ordinal and categorical variables (OvALoss, BvSLoss, etc.). Those implementations should probably be moved over here, since they are useful well beyond that one package. For context, the one-vs-all reduction behind something like OvALoss looks roughly like this (an illustrative sketch, not LowRankModels' actual code):
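
# Build a k-class loss from a binary margin loss: the true class is
# treated as +1 and every other class as -1.
hinge(y, s) = max(0, 1 - y * s)

ova(binary, target::Int, output::AbstractVector) =
    sum(binary(i == target ? 1 : -1, output[i]) for i in eachindex(output))

ova(hinge, 2, [0.1, 1.2, -0.7])  # sums three binary hinge losses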

mihirparadkar avatar Jun 24 '17 04:06 mihirparadkar

I'd like to start implementing the multi-category loss functions like MultinomialLoss, multiclass SVM, and one-vs-all, but I'm not sure what the convention should be or how they play with LabelEncodings (i.e., the output is some kind of vector, but is the target something encoded in a one-hot scheme, or an index?).

LowRankModels uses something akin to the Indices encoding scheme for targets, but I think this would be a productive discussion to have.
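
To make the two candidate conventions concrete (a pure sketch reusing the `logsumexp` defined earlier in the thread; none of this is existing API):

target_index  = 2           # Indices-style target for a 3-class problem
target_onehot = [0, 1, 0]   # OneOfK / one-hot-style target

scores = [0.5, 2.0, -1.0]

# An index-based multinomial loss reads one score directly...
nll_index(s, t::Int) = logsumexp(s) - s[t]
# ...while a one-hot-based one selects it via a dot product.
nll_onehot(s, t::AbstractVector) = logsumexp(s) - sum(t .* s)

nll_index(scores, target_index) ≈ nll_onehot(scores, target_onehot)  # true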

cc: @madeleineudell

mihirparadkar avatar Jul 18 '17 16:07 mihirparadkar

@mihirparadkar @kvmanohar22 any progress regarding the multiclass losses? I currently need them for a research paper, and I can start working on this right away if you agree.

juliohm avatar Mar 20 '20 14:03 juliohm