razvancaramalau
Hey, there is no pre-trained model; you should obtain similar results by training from scratch. Try increasing the number of epochs or varying the batch size. If you are still not able to reproduce...
Hi, the addendum should increase the budget size, and the performance should definitely change. You may also want to increase the number of epochs; maybe it doesn't fully converge.
You can definitely add nn.DataParallel() wrappers to the learner definition in main (and to the GCN in selection_methods). Make sure to add your available devices and push...
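A minimal sketch of what that wrapping could look like. The two modules here are stand-ins, not the repo's actual learner or GCN classes, which you'd wrap in the same way inside main and selection_methods:

```python
import torch
import torch.nn as nn

# Stand-in modules; substitute the repo's task learner and GCN sampler here.
backbone = nn.Linear(10, 4)
gcn = nn.Linear(4, 1)

# Wrap each module in nn.DataParallel. With multiple GPUs, pass your
# available devices explicitly, e.g. nn.DataParallel(backbone, device_ids=[0, 1]).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
backbone = nn.DataParallel(backbone).to(device)
gcn = nn.DataParallel(gcn).to(device)

x = torch.randn(8, 10, device=device)
out = gcn(backbone(x))
print(out.shape)  # torch.Size([8, 1])
```

On a single-device (or CPU-only) machine, nn.DataParallel simply forwards through the wrapped module, so the same code runs unchanged.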
Hi there, the input to the transformer bottleneck comes only from the confidences for all categories (C), shaped as [B, (8732 x C)], where B is the batch size. So, the...
That's correct [B, (8732 x C)].
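That flattening can be sketched as below. The category count C = 21 is an assumption for illustration (VOC-style, 20 classes plus background); the 8732 comes from the SSD prior boxes mentioned above:

```python
import torch

B, N, C = 4, 8732, 21          # batch size, SSD prior boxes, categories (C=21 assumed)
conf = torch.randn(B, N, C)    # per-box class confidences from the detector head

# Flatten the per-box confidences into one vector per sample: [B, 8732 * C].
flat = conf.view(B, N * C)
print(flat.shape)  # torch.Size([4, 183372])
```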
Yes, that looks correct. I experienced the same in the beginning. Adjusting the learning rate and doing a few runs stabilizes the gradients after the first 10 iterations. I was...
Hi, these numbers might vary depending on the randomization of the specific processor/OS you run the experiments on. I've observed such changes when swapping between machines.
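If you want runs to be comparable on one machine, a common way to pin down that randomization is to fix every RNG the pipeline touches. `set_seed` below is a hypothetical helper, not part of the repo (and note that even with this, results can still differ across hardware and CUDA versions):

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # Seed Python, NumPy, and PyTorch (CPU + all GPUs).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Prefer deterministic cuDNN kernels; trades some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
a = torch.randn(3)
set_seed(42)
b = torch.randn(3)
print(torch.equal(a, b))  # True on the same machine
```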