pytorch-deep-image-matting
A question on the data augmentation part
np.random.randint() returns coordinates in (h, w) order, not (w, h). However, in core/data.py, lines 42-44, the coordinates are treated as (w, h). Is this a bug, or am I misunderstanding something? Please correct me if so.
Thanks.
Thanks for your review! You are right: the h and w coordinates are swapped. This mistake means that patches cropped from the original image may not include any unknown region. I will fix this bug soon and run some new experiments later.
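For reference, a minimal sketch (not the repository's actual core/data.py code) of cropping a patch around an unknown-region pixel with consistent (h, w) indexing; the function name, the crop size, and the trimap value 128 for the unknown region are assumptions:

```python
import numpy as np

def random_unknown_crop(image, trimap, crop_size=320):
    """Crop a patch centered on a pixel from the trimap's unknown region.

    np.where (and row/column sampling in general) yields (h, w) order,
    so the arrays must be indexed as [y:y+h, x:x+w], not the reverse.
    """
    h, w = trimap.shape[:2]
    # Indices of unknown pixels (assuming the usual trimap value of 128).
    unknown_ys, unknown_xs = np.where(trimap == 128)
    # Pick one unknown pixel as the crop center.
    idx = np.random.randint(len(unknown_ys))
    cy, cx = unknown_ys[idx], unknown_xs[idx]  # (row, col) == (h, w) order
    # Clamp the top-left corner so the crop stays inside the image.
    y0 = np.clip(cy - crop_size // 2, 0, max(h - crop_size, 0))
    x0 = np.clip(cx - crop_size // 2, 0, max(w - crop_size, 0))
    return (image[y0:y0 + crop_size, x0:x0 + crop_size],
            trimap[y0:y0 + crop_size, x0:x0 + crop_size])
```

Keeping the (h, w) ordering end to end guarantees the sampled center actually lies in the unknown region of the crop.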
Hi Liang,
May I ask how many epochs or iterations (batch_size == 1?) of training were needed to reach the evaluation performance of 72.9 SAD on the val set? Also, which test mode (crop, resize, whole) was used for the 72.9?
Thanks.
batch_size=1, epochs=22, and the test mode is whole.
Thanks!
Hi, thanks for your great work! I am confused: why did you set batch_size=1 during training?