robust_loss_pytorch

A pytorch port of google-research/google-research/robust_loss/

Results: 14 robust_loss_pytorch issues

Jon, can you provide a tutorial (or code snippet) on how to use this tool with boosting algorithms like LightGBM?

I want to set alpha = 0 and scale = 999 in lossfun(), but they should be tensors, so I just coded torch.FloatTensor(0) and torch.FloatTensor(999) for alpha and...
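A likely pitfall in the snippet above: `torch.FloatTensor(0)` calls the *size* constructor, producing an empty tensor rather than the scalar 0 (and `torch.FloatTensor(999)` produces 999 uninitialized values). A minimal sketch of building scalar-valued tensors instead; the commented `lossfun` call is an assumption about the library's API, not verified here:

```python
import torch

# Pitfall: torch.FloatTensor(n) interprets n as a *size*, not a value.
empty = torch.FloatTensor(0)   # empty tensor of shape (0,), not the scalar 0
assert empty.numel() == 0

# Use torch.tensor(...) to build scalar-valued tensors instead.
alpha = torch.tensor(0.0)      # alpha = 0
scale = torch.tensor(999.0)    # scale = 999
assert alpha.item() == 0.0 and scale.item() == 999.0

# Hypothetical usage (sketch, assuming the signature in robust_loss_pytorch.general):
# from robust_loss_pytorch import general
# x = predictions - targets                       # residuals
# loss = general.lossfun(x, alpha=alpha, scale=scale).mean()
```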

Hi, thanks for your amazing loss function. After reading your papers, I have the following question: ![image](https://user-images.githubusercontent.com/18031767/166656580-f7d3383a-f16c-425f-b369-bd6dc86afec2.png) ![image](https://user-images.githubusercontent.com/18031767/166657032-af12649b-12ff-4237-a51f-91cdc17d06ce.png) How is this equation derived? It looks like softmax...

Hi, I am trying to reimplement "Unsupervised Learning of Depth and Ego-Motion from Video" with the adaptive loss function. Using the PyTorch code for SfmLearner from https://github.com/ClementPinard/SfmLearner-Pytorch, if you take a...

Hi Jon, thank you for this amazing loss function! Any hope for a MATLAB implementation of the adaptive version? If you could get me started with the essential pseudocode, I...

Hi, I have a question about the implementation. In the Distribution().nllfun method, to regularize the scale to decrease, why do you use the log function? I think the l2 or l1...

Hello, thank you very much for your elegant code. I want to use robust loss to train the model in the first stage and load the pre-trained model in the...

Hi, thanks for your wonderful work. As I use your AdaptiveLossFunction, I found that alpha did not decrease; it keeps the highest value throughout the training process. So, I used...

Hello, from the documentation of the library it is unclear whether the image variants of the loss function (e.g., `AdaptiveImageLossFunction`) expect images in NCHW or NHWC order. Furthermore, should the...

Hi, I encounter a weird NaN error in general.py during training after multiple epochs. Any idea why this error occurs or how to fix it? ![Nan_](https://user-images.githubusercontent.com/34400551/105861036-91542c80-5fee-11eb-87b4-edc945968232.png) Error message from `torch.autograd.detect_anomaly()`....
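For readers hitting similar NaN errors: the anomaly detection mentioned in the issue can be enabled globally so the backward pass reports which forward op produced the NaN. A minimal sketch of the standard PyTorch API (the toy computation is illustrative only, not the issue's actual model):

```python
import torch

# Enable anomaly detection so autograd traces NaN/Inf gradients back to the
# forward op that produced them (slow; enable only while debugging).
torch.autograd.set_detect_anomaly(True)

x = torch.tensor([1.0], requires_grad=True)
y = torch.sqrt(x)   # fine for positive inputs; x <= 0 would make the gradient NaN/Inf
y.backward()

# d(sqrt(x))/dx = 1 / (2 * sqrt(x)) = 0.5 at x = 1
assert x.grad is not None

torch.autograd.set_detect_anomaly(False)
```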