Training with objects smaller than 30x30
You stated that "FasterRcnn does not like objects smaller than about 30*30 pixels". Is this a limitation that can be worked around somehow?
For example, I would like to be able to detect objects smaller than 30x30. Would it be enough to give my model an extra, smaller anchor scale that goes below 30x30 (I mean for the case where my objects are still smaller than 30x30 even after the image resizing is applied)? Or would I also need to change the receptive field of my model (which I think is hardcoded for each model)?
That is just my experience with the default receptive field; I did try some smaller anchor sizes but never got very good results with them. Intuitively, it will be limited by how strong the small features on your objects are: for my applications I can't even tell what the ground truth should be once the objects get that small.
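For reference, here is a minimal sketch of what adding a smaller anchor scale could look like if you are using torchvision's Faster R-CNN implementation (this thread may concern a different codebase; the scale values, num_classes and resize settings below are illustrative assumptions, not taken from the discussion above):

```python
import torchvision
from torchvision.models.detection.rpn import AnchorGenerator

# One sizes tuple per FPN feature-map level (5 levels for the ResNet-50 FPN backbone).
# torchvision's default is ((32,), (64,), (128,), (256,), (512,)); here the smallest
# level gets a 16x16 anchor so proposals below 30x30 can be matched.
anchor_generator = AnchorGenerator(
    sizes=((16,), (32,), (64,), (128,), (256,)),
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,
)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None,                        # load your own weights or train from scratch
    num_classes=2,                       # hypothetical: background + one object class
    rpn_anchor_generator=anchor_generator,
    # Optionally resize images up so small objects cover more pixels
    # (defaults are min_size=800, max_size=1333).
    min_size=1024,
    max_size=2048,
)
```

Whether the extra scale actually helps still depends on the backbone's effective receptive field and on how much signal the tiny objects carry, as noted above.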