
Allowing the use of alternative loss functions

Open by256 opened this issue 3 years ago • 5 comments

Hi,

Thank you for this super useful library.

I've noticed that the ensemble modules are all restricted to either cross-entropy loss (in the case of classification) or mean squared error loss (in the case of regression). Is there a particular reason for this? It would be great if we could pass any objective function of our choosing to the ensemble modules, as this would provide much greater flexibility.

If there are no theoretical restrictions as to why we can't use alternative losses, I could potentially have a go at implementing this.

Batuhan

by256 avatar Jun 25 '21 14:06 by256

Hi Batuhan,

The use of cross-entropy loss for classifiers and mean squared error loss for regressors is rooted in the early API design of torchensemble (which follows the design of the Scikit-Learn ensemble module). I agree with you that there should be no limitation on the choice of objective function.


If there are no theoretical restrictions as to why we can't use alternative losses, I could potentially have a go at implementing this.

Sure, your contributions are very welcome. Here are some initial ideas:

  • Create an abstract class BaseCustom in _base.py and implement the evaluate method for it, which simply returns the customized loss averaged over all batches in the dataloader (see the sketch after this list)
  • Create another class inherited from BaseCustom for each ensemble, for example, CustomVoting for voting.py and CustomSnapshotEnsemble for snapshot_ensemble.py
  • Implement set_criterion method for this new class, which sets the self.criterion_ attribute for this class
  • Modify the format of training logs (for example, print the training loss only)
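
For the first two points, here is a very rough sketch of what BaseCustom could look like (everything below is illustrative, not a final API):

import abc

import torch
import torch.nn as nn


class BaseCustom(nn.Module, abc.ABC):
    # Illustrative base class whose training objective is supplied by the
    # user instead of being hard-coded.

    def set_criterion(self, criterion):
        # Accept any callable loss, e.g. nn.L1Loss() or a custom function.
        self.criterion_ = criterion

    @torch.no_grad()
    def evaluate(self, dataloader):
        # Return the customized loss averaged over all batches.
        self.eval()
        total, n_batches = 0.0, 0
        for data, target in dataloader:
            total += self.criterion_(self(data), target).item()
            n_batches += 1
        return total / n_batches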

Gradient boosting requires additional consideration, so we could skip that ensemble for now.

The ideas above are still very rough; feel free to comment below if you have a better solution. If you agree with this design, how about we start with Voting?

xuyxu avatar Jun 25 '21 14:06 xuyxu

That makes sense.

Thanks for the suggestions. Starting with Voting seems like a reasonable idea. I'll take an in-depth look at the current API on Monday and see if I have any ideas. I'll keep you posted here if I run into any trouble.

by256 avatar Jun 25 '21 15:06 by256

Great!

xuyxu avatar Jun 25 '21 15:06 xuyxu

Would it be simpler to just set the loss function as an instance variable via a set_criterion method, and then access it through self.criterion in each ensemble module? For example:

import torch.nn as nn
from torchensemble import VotingRegressor

# MLP is a user-defined nn.Module subclass serving as the base estimator.
model = VotingRegressor(
    estimator=MLP,
    n_estimators=10,
    cuda=True,
)

# Register any loss in place of the hard-coded mean squared error.
criterion = nn.L1Loss()
model.set_criterion(criterion)

I've tried this and it seems to work well for all of the ensemble modules except GradientBoosting. This would also circumvent the need to create additional custom classes for each ensemble module. Let me know what you think of this idea.
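
Concretely, the change to the shared base class could be as small as the following (just a sketch with illustrative names, not the actual torchensemble internals):

import torch.nn as nn


class BaseModule(nn.Module):
    # Hypothetical stand-in for the shared base class in _base.py.

    def set_criterion(self, criterion):
        # Store any callable loss for later use in the training loop.
        self.criterion = criterion

    def _compute_loss(self, output, target):
        # Fall back to the historical default when set_criterion was
        # never called, so existing user code keeps working unchanged.
        if not hasattr(self, "criterion"):
            self.criterion = nn.MSELoss()
        return self.criterion(output, target)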

The additional considerations for GradientBoosting could potentially be addressed by calculating pseudo-residuals with torch.autograd.grad, although I may have missed something in the code that would rule this out. What do you think?
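
Something along these lines, for instance (a rough sketch with a hypothetical helper name, assuming the loss is differentiable with respect to the ensemble output):

import torch


def pseudo_residuals(criterion, ensemble_output, target):
    # Hypothetical helper: negative gradient of an arbitrary differentiable
    # loss with respect to the current ensemble output. With nn.MSELoss()
    # this is proportional to the ordinary residual (target - output).
    out = ensemble_output.detach().requires_grad_(True)
    loss = criterion(out, target)
    (grad,) = torch.autograd.grad(loss, out)
    return -grad


# e.g. with L1 loss the next base estimator would be fit on
# sign(target - output) / n_samples:
residuals = pseudo_residuals(torch.nn.L1Loss(), torch.zeros(8, 1), torch.ones(8, 1))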

by256 avatar Jun 28 '21 16:06 by256

Sorry for the late response @by256.

Would it be simpler to just set the loss function as an instance variable with a set_criterion method, then access the method via a call to self.criterion in each ensemble module?

Sure, this looks nice!

Meanwhile, I will also take a look at how to automatically calculate the first-order gradients with torch.autograd.grad ;-)

xuyxu avatar Jul 01 '21 14:07 xuyxu