SSH-pytorch
About batch size.
Hello, what is the batch size in your code? I see that IMGS_PER_BATCH is 1 in config.py. Can I change it?
Hi, currently batch sizes other than one are not supported (I don't have time to rewrite it). You would need to rewrite some layers to support it, and you may also need to implement random crop or image padding. Feel free to submit a pull request if you implement it.
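A minimal sketch (not part of this repo) of the padding idea mentioned above: images of different sizes can be zero-padded to a common size so they can be stacked into one tensor with IMGS_PER_BATCH > 1. The `pad_collate` name and the dataset in the usage line are hypothetical, and the anchor/proposal layers would still need changes to handle a batch dimension.

```python
import torch

def pad_collate(batch):
    """Pad each (image, target) pair to the largest H and W in the batch."""
    images, targets = zip(*batch)
    max_h = max(img.shape[1] for img in images)
    max_w = max(img.shape[2] for img in images)
    padded = []
    for img in images:
        c, h, w = img.shape
        canvas = img.new_zeros((c, max_h, max_w))  # zero-pad bottom/right
        canvas[:, :h, :w] = img
        padded.append(canvas)
    return torch.stack(padded, dim=0), list(targets)

# Usage with a hypothetical dataset:
# loader = torch.utils.data.DataLoader(dataset, batch_size=4, collate_fn=pad_collate)
```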
@dechunwang OK. How do you choose the number of training iterations? I mean, when does the loss begin to converge? Thank you.
If you are using one GPU, you should be able to get around 0.90 AP on the easy set in about 84,000 iterations. If you are using 4 GPUs, 21,000 is enough.
@dechunwang Hello, sorry to bother you, but could you share any tricks for face detection? I mean how you process the images, such as data augmentation.
Have you ever tested on the WIDER hard set? I ran your code and evaluated it, and the AP on the hard set is only 65%, which is much lower than the author's (81%).
Are you using the pre-trained model and eval.py provided by this repo? I was able to get 0.809 AP on the hard set.
I didn't use eval.py, so maybe something is wrong with my code. I have another question: we often use a large batch size in recognition tasks like ImageNet; can you tell me whether a large batch size also helps in detection tasks?
Yes, that also applies to detection tasks. We always use as large a batch size as we can; bigger batch sizes not only speed up training but also help smooth the training loss across iterations.