Nale Raphael

Results: 46 comments by Nale Raphael

Hi @sparkskapil, I was also dealing with the same problem, and it's currently hard to figure out its real cause. The popup context menu actually works...

OK, I found that there is a much easier solution for this. Just put your desired components inside a `child` (defined by `imgui.begin_child()` + `imgui.end_child()`) or a `group` (defined by...

Hi @afogarty85. It seems that `XLMRobertaForSequenceClassification.forward()` [returns a tuple object (in v3.0.2)][transformer_roberta_seqcls_forward], which makes it fail to compute the loss in [`LRFinder._train_batch()`][lrfinder_train_batch_loss]. Currently, it takes only a [single item][lrfinder_train_batch_model] returned from...

Yeah, you are right. I missed calling `super()` to initialize the parent class. > In terms of (1), the central problem I have now is calculating the loss...

Thanks for that example. And no need to be sorry; your feedback helps us improve this package. > If I change the class to outputs[0], then I believe outputs[0] is...

That's totally fine. Take your time.

Hi @ma-batita ! In that colab notebook, I was using huggingface transformers v3.0.2. You can find that the returned value of `XLMRobertaForSequenceClassification.forward()` would be: 1. if `labels` is not given,...
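The v3.0.2-style branching can be sketched in plain Python. `dummy_forward` and `unpack_outputs` below are hypothetical stand-ins (not transformers APIs), mimicking a `forward()` that returns `(loss, logits, ...)` when `labels` is given and `(logits, ...)` otherwise:

```python
# Sketch of handling a tuple-returning forward(), as in transformers v3.0.2.
# dummy_forward and unpack_outputs are hypothetical stand-ins, not library APIs.

def dummy_forward(inputs, labels=None):
    """Mimic a v3.0.2-style forward(): (loss, logits) with labels, (logits,) without."""
    logits = [0.1, 0.9]            # placeholder scores
    if labels is None:
        return (logits,)
    loss = 0.42                    # placeholder for the computed loss
    return (loss, logits)

def unpack_outputs(outputs, labels_given):
    """Split the tuple into (loss, logits); loss is None when labels were absent."""
    if labels_given:
        return outputs[0], outputs[1]
    return None, outputs[0]

loss, logits = unpack_outputs(dummy_forward("x", labels=[1]), labels_given=True)
```

This is why code that assumes `forward()` returns a single tensor (or always a loss) breaks when the tuple shape changes with `labels`.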

Hi @ma-batita > ...why not adapt LRFinder to this major change? one can make the criterion as an option since it is already wrapped into the model (for the classification...
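One way to make the criterion optional could look like the following sketch. `PassthroughCriterion` is a hypothetical name: since the model already computes its own loss (e.g. a HF model called with `labels=`), the "criterion" simply forwards that value:

```python
class PassthroughCriterion:
    """Hypothetical no-op criterion: the model's forward() is assumed to
    already return the scalar loss, so just pass it through unchanged."""

    def __call__(self, model_output, target):
        # target is ignored; the loss was computed inside the model
        return model_output

criterion = PassthroughCriterion()
loss = criterion(1.25, None)  # returns 1.25 untouched
```

A wrapper like this lets loss-returning models plug into code that expects a separate `criterion(output, target)` callable, without changing that code.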

Hi @ma-batita Sure, no worries. :) Regarding the issue you ran into, it's recommended to run the model on CPU first. Some error messages might not be shown explicitly while...

Hi @YashRunwal! In this scenario, maybe you can try using lr-finder with gradient accumulation to simulate a larger training batch. This functionality has been integrated into this library...
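The idea behind gradient accumulation can be sketched framework-free: sum (here, average) the gradients of several small mini-batches before taking a single optimizer step, which approximates one step on a batch that many times larger. The functions below are hypothetical illustrations, not the library's API:

```python
# Minimal sketch of gradient accumulation (plain Python, no framework).
# grad() and accumulate_and_step() are illustrative, not torch-lr-finder APIs.

def grad(w, batch):
    """Gradient of mean squared error 0.5*(w*x - y)^2 over a batch of (x, y)."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def accumulate_and_step(w, batches, lr=0.1):
    accum = 0.0
    for batch in batches:          # each entry is a small mini-batch
        accum += grad(w, batch)    # accumulate instead of stepping immediately
    accum /= len(batches)          # average over the accumulation window
    return w - lr * accum          # a single optimizer step at the end

# One step over 2 mini-batches of size 2 matches one step on a batch of size 4
# (exactly here, because all mini-batches are the same size).
small = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (1.0, 2.0)]]
w_accum = accumulate_and_step(1.0, small)
w_big = 1.0 - 0.1 * grad(1.0, [p for b in small for p in b])
```

This is useful when memory limits force a small per-step batch but the learning-rate search behaves better with a larger effective batch.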