6 comments by Shaden Naif

It is hard to tell the difference without seeing your implementation, but could you run my [notebook](https://github.com/ShadeAlsha/LTR-weight-balancing/blob/master/demo1_first-stage-training.ipynb) to see if it reproduces the results? One notable difference is that my...

Yes, we used 200 epochs only for the first stage of training to produce the results in the paper.

In the demo, MaxNorm is applied once per epoch, as you said. This corresponds to lines 140-142 in trainval.py: https://github.com/ShadeAlsha/LTR-weight-balancing/blob/0e9494cad5b4805642f05097a336096f780300ee/utils/trainval.py#L140-L142 You can move this code up to apply...

Yes, PGD is supposed to be applied in each iteration, as described on page 4 of the paper and as chenbinghui1 mentioned earlier :) I haven't updated the code yet, but...
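Applying the MaxNorm constraint once per iteration amounts to projecting each weight vector back onto an L2 ball immediately after every optimizer update. A minimal PyTorch sketch of that projection step (the function name and the `max_norm` radius are illustrative, not taken from the repo):

```python
import torch

def max_norm_project(weight: torch.Tensor, max_norm: float = 1.0) -> None:
    """Project each row of `weight` (one vector per class/filter) onto the
    L2 ball of radius `max_norm`, in place. Rows already inside the ball
    are left unchanged."""
    with torch.no_grad():
        norms = weight.norm(p=2, dim=1, keepdim=True)
        # Scale factor is < 1 only for rows whose norm exceeds max_norm.
        scale = torch.clamp(max_norm / (norms + 1e-12), max=1.0)
        weight.mul_(scale)
```

To apply it per iteration rather than per epoch, the call would go right after `optimizer.step()` inside the training loop, e.g. `max_norm_project(model.fc.weight)`.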

Figure 3 illustrates the bias in the weights of hidden layers with 512 filters, not of the classifier. In that figure, we compare the weights of the naive model against the model with weight...
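A per-filter comparison like this can be made by computing the L2 norm of each filter in a hidden convolutional layer. A small sketch (the layer path in the usage comment is hypothetical, assuming a standard ResNet):

```python
import torch

def per_filter_norms(conv_weight: torch.Tensor) -> torch.Tensor:
    """L2 norm of each filter in a conv layer.
    conv_weight has shape (out_channels, in_channels, kH, kW);
    the result has shape (out_channels,), one norm per filter."""
    return conv_weight.flatten(start_dim=1).norm(p=2, dim=1)

# e.g., for a late ResNet stage with 512 filters (path is illustrative):
# norms = per_filter_norms(model.layer4[1].conv2.weight)
```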

For CIFAR-100, you can set the imbalance factor by changing the value of `imb_factor` in [demo1_first-stage-training.py](https://github.com/ShadeAlsha/LTR-weight-balancing/blob/ba0333510b00a9deb5503cceec1c57522b04263c/demo1_first-stage-training.py) at line 81. For example, for imbalance factor 100, we set `imb_factor` to be...
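For context, here is how `imb_factor` typically maps to per-class counts under the common CIFAR-LT convention (this is an assumption about the convention, not code from the repo, and the function name is illustrative): class i keeps `n_max * imb_factor ** (i / (num_classes - 1))` samples, so `imb_factor = 0.01` yields a head-to-tail ratio of 100.

```python
def long_tailed_counts(n_max: int, num_classes: int, imb_factor: float) -> list:
    """Per-class sample counts for an exponentially imbalanced split,
    assuming the common CIFAR-LT convention: class i keeps
    n_max * imb_factor ** (i / (num_classes - 1)) samples."""
    return [int(n_max * imb_factor ** (i / (num_classes - 1)))
            for i in range(num_classes)]

# CIFAR-100-LT with imbalance factor 100: 500 images for the head class,
# decaying exponentially toward the tail.
counts = long_tailed_counts(n_max=500, num_classes=100, imb_factor=0.01)
```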