Hang Su

Results 36 comments of Hang Su

The native implementation uses only standard PyTorch layers/operations, while the other one uses the `Function` interface in places. The latter can have a memory advantage for larger input sizes.

There shouldn't be any noticeable difference in speed. The non-native implementation has an advantage in terms of _peak memory usage_, but I have no statistics regarding the gap. The native version...
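As a hedged illustration of why the `Function` interface can reduce peak memory: a custom `torch.autograd.Function` controls exactly what is saved for the backward pass, whereas a chain of standard ops keeps every intermediate tensor alive until backward. The op below is a made-up example, not from this repository.

```python
import torch

# Hypothetical op (illustration only): y = x * c.
# Its backward needs only the scalar c, so nothing proportional
# to the input size is saved for the backward pass.
class MulConst(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, c):
        ctx.c = c          # save only the scalar, not the input tensor
        return x * c

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out * ctx.c, None  # no gradient for the constant

x = torch.randn(8, requires_grad=True)
y = MulConst.apply(x, 2.0)
y.sum().backward()
print(bool(torch.equal(x.grad, torch.full_like(x, 2.0))))  # True
```

A composed expression like `x * c` written with standard ops behaves the same numerically; the difference only shows up in what autograd retains between forward and backward, which is why the gap grows with input size.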

@Jerrypiglet That's right. I wasn't careful enough here. The change is now reverted.

This is due to an incompatible PyTorch version. The code was originally developed for PyTorch 0.4. You can try the "th14" branch. There was some related discussion in #14

Since you use `--eval pred`, predictions should be written as normal images, with black indicating background. I'm not quite sure what went wrong. You can try using `--eval raw` and inspect the...
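One way to sanity-check the written predictions is to load an output image and look at its pixel values. The file name and label layout below are assumptions for illustration; the only property taken from the comment above is that 0 (black) marks background.

```python
import numpy as np
from PIL import Image

# Stand-in for a file produced by `--eval pred` (name and labels are
# assumptions): pixel value 0 is background, nonzero values are classes.
Image.fromarray(np.array([[0, 0, 1],
                          [0, 2, 2]], dtype=np.uint8)).save("pred_example.png")

pred = np.array(Image.open("pred_example.png"))
print("labels present:", np.unique(pred).tolist())  # labels present: [0, 1, 2]
```

If `np.unique` reports only 0, the model predicted background everywhere; if it reports unexpected values, the writing or color-mapping step is the likely culprit.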

@Amose-Yao This is an error unrelated to the original issue. If there are weight files in the `exp-root` during evaluation, the code is confused about which weight file to...

Backbone weights are part of the full model, so their updated values are saved in `weights_epoch*` as well. An exception is CRF models with a frozen backbone (e.g. `fcn8sfrozen_crf*`), where...

You should use `--load-weights` instead of `--load-weights-backbone`, since `fcn8s` is now the model itself, not just the backbone.

This is not normal. A possible reason is an incompatible PyTorch/CUDA version (the main branch was originally developed for PyTorch 0.4 and CUDA 9). Check out the "th14"...

Depending on the size of your images, the network, GPU memory, etc., you can definitely try a higher batch size. A major advantage of our operation is exactly that -- different kernels...