R2CNN_FPN_Tensorflow
About batch size
I found some problems: if the batch size is not 1, something goes wrong. Is that right?
@rainofmine Did you run into problems when changing the batch size to other values? Is there a bug occurring at tf.squeeze()?
The batch_size must be 1. We did not consider the situation where batch_size is larger than 1.
@yangJirui I don't understand the exact meaning of the parameter 'SHORT_SIDE_LEN' in cfgs.py. How can I choose this parameter for my dataset, approximately?
The short side of the image is scaled to SHORT_SIDE_LEN, and the long side is scaled by the same ratio.
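That scaling rule can be sketched in plain Python (the function name and the example values below are illustrative, not taken from cfgs.py):

```python
def rescale_to_short_side(height, width, short_side_len):
    """Scale so the shorter side equals short_side_len, keeping the aspect ratio."""
    if height < width:
        scale = short_side_len / height
    else:
        scale = short_side_len / width
    return round(height * scale), round(width * scale)

# A 480x640 image with SHORT_SIDE_LEN = 600: the short side (480) becomes 600,
# and the long side is scaled by the same factor (600 / 480 = 1.25).
print(rescale_to_short_side(480, 640, 600))  # (600, 800)
```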
@Bboysummer It's up to you; it is not fixed. In Faster R-CNN, the short side is usually set to 600, whereas the FPN authors used 800. In my opinion, a larger short side helps with detecting small targets, but it slows down the whole pipeline (training and testing).
Even when batch_size == 1, I still hit a bug like this: UnknownError (see above for traceback): error: /home/travis/miniconda/conda-bld/conda_1485299288502/work/opencv-3.2.0/modules/imgproc/src/rotcalipers.cpp:166: error: (-215) orientation != 0 in function rotatingCalipers
[[Node: get_batch/PyFunc = PyFunc[Tin=[DT_INT32], Tout=[DT_FLOAT], token="pyfunc_0", _device="/job:localhost/replica:0/task:0/cpu:0"](get_batch/Squeeze_1/_4517)]]
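For what it's worth, that assertion comes from OpenCV's rotating-calipers routine (used inside cv2.minAreaRect), which fails on degenerate point sets, e.g. a ground-truth quadrilateral whose points collapse onto a line or a single point after rescaling. A hedged workaround (pure Python, shoelace formula; the function names and the area threshold are illustrative, not from this repo) is to filter such boxes before they reach minAreaRect:

```python
def polygon_area(points):
    """Magnitude of a polygon's signed area via the shoelace formula."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def filter_degenerate_boxes(boxes, min_area=1.0):
    """Drop boxes that collapse to (almost) a line or point, which can
    trigger the rotatingCalipers assertion inside cv2.minAreaRect."""
    return [b for b in boxes if polygon_area(b) >= min_area]

boxes = [
    [(0, 0), (10, 0), (10, 5), (0, 5)],  # valid 10x5 box, area 50
    [(3, 3), (3, 3), (3, 3), (3, 3)],    # degenerate: a single point, area 0
]
print(filter_degenerate_boxes(boxes))  # keeps only the first box
```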
Also, when I run inference1.py, it does not work. What's wrong?
Is it necessary to set batch_size to 1? Could you please explain the reason for that configuration? The first dim of the input tensor in your code must be 1 due to tf.squeeze(); I think it may cause loss oscillation during training.
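To illustrate why the squeeze forces batch_size == 1: tf.squeeze can only remove dimensions of size 1, so with a larger batch the leading axis survives and downstream ops built for a single 3-D image receive a 4-D tensor. np.squeeze behaves the same way for this purpose (a sketch with illustrative shapes, not the repo's actual tensors):

```python
import numpy as np

# With batch_size == 1 the leading batch axis is size 1 and can be squeezed
# away, so the rest of the graph sees a single HxWxC image.
batch_of_one = np.zeros((1, 600, 800, 3))
print(np.squeeze(batch_of_one, axis=0).shape)  # (600, 800, 3)

# With batch_size == 2 the leading axis is NOT size 1, so squeezing it is
# an error, and ops expecting a 3-D tensor would get a 4-D one instead.
batch_of_two = np.zeros((2, 600, 800, 3))
try:
    np.squeeze(batch_of_two, axis=0)
except ValueError as e:
    print("cannot squeeze:", e)
```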
Can you give us some advice if we want to change the batch size? Which parts of the code should we change? Thanks in advance!