Jasper Brown

28 comments by Jasper Brown

That is just my experience with the default receptive field; I did try some smaller anchor sizes but never got very good results with them. Intuitively it will be limited...
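
For reference, here is a quick sketch of the anchor sizes I believe keras-retinanet generates by default (the standard RetinaNet settings are assumed here, three ratios and three scales per FPN level); if your objects come out much smaller than the smallest of these after the internal resize, that matches what I saw:

```python
import numpy as np

# Assumed defaults (standard RetinaNet settings, which keras-retinanet follows
# unless you override them): one base anchor size per FPN level, three aspect
# ratios and three scales per location.
sizes   = [32, 64, 128, 256, 512]          # base anchor side per pyramid level
strides = [8, 16, 32, 64, 128]             # feature-map stride per level
ratios  = np.array([0.5, 1.0, 2.0])        # height / width aspect ratios
scales  = np.array([2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3)])

for size, stride in zip(sizes, strides):
    # Every anchor at this level has area (size * scale)^2, reshaped by ratio.
    areas   = (size * scales) ** 2
    widths  = np.sqrt(areas[:, None] / ratios[None, :])
    heights = widths * ratios[None, :]
    print(f"stride {stride:>3}: widths {widths.round(1).ravel()} px, "
          f"heights {heights.round(1).ravel()} px")
```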

It looks like you might be using a different version of train.py. Did you pull the master branch from https://github.com/fizyr/keras-retinanet? Does the `--random-transform` arg appear when you run `keras_retinanet/bin/train.py --help`...

The current Keras version on PyPI is 2.3.1, so running `pip install --upgrade keras` should bump you up to that.
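
If you want to double-check what you actually end up with after the install, a quick check (assuming you run it in the same Python environment you train in):

```python
import keras
print(keras.__version__)  # should report 2.3.1 after upgrading
```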

Use the `--snapshot` argument and point it at the snapshot file you want to resume from. So something like `keras_retinanet/bin/train.py --tensorboard-dir ~/RetinanetTutorial/TrainingOutput --snapshot-path ~/RetinanetTutorial/TrainingOutput/snapshots --snapshot ~/RetinanetTutorial/TrainingOutput/snapshots/snapshotNumberX --random-transform --steps 100 pascal ~/RetinanetTutorial/PlumsVOC`...

A couple of questions:
- Are they the same images? So you are taking a 1024x1024 image and resizing it when you say 600x600?
- Did you resize them to 600x600 when...

It sounds like the objects might be too small after resizing; are they quite small to begin with? What I mean is that, because the model internally resizes images using...
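
As a rough sketch of what I mean (assuming the usual min-side/max-side resize rule, which keras-retinanet applies by default with min_side=800 and max_side=1333), you can estimate how big your objects end up after the internal resize:

```python
def resize_scale(height, width, min_side=800, max_side=1333):
    # Assumed to match the default rule in keras-retinanet's resize_image:
    # scale the smallest side to min_side, but cap the largest side at max_side.
    scale = min_side / min(height, width)
    if max(height, width) * scale > max_side:
        scale = max_side / max(height, width)
    return scale

# Hypothetical example: a 30 px object in a 1024x1024 image
scale = resize_scale(1024, 1024)
print(f"resize scale: {scale:.3f}")                   # ~0.781
print(f"a 30 px object becomes ~{30 * scale:.0f} px")  # ~23 px, below the smallest default anchor
```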

You need to crop and resize the images for both training and testing; is that what you mean by 'not on the big images'? 1200x600 would be reasonable; you just...
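
Something like this is what I mean by cropping (a rough sketch only; `tile_image`, the 1200x600 tile size and the overlap are illustrative, not part of keras-retinanet), and you would run the same function over both your training and test images so the two sets stay consistent:

```python
import numpy as np

def tile_image(image, tile_w=1200, tile_h=600, overlap=100):
    """Cut a large image into overlapping tiles (hypothetical helper).

    Returns (x, y, tile) tuples so box annotations / detections can be
    shifted back into full-image coordinates using the same offsets.
    """
    h, w = image.shape[:2]
    tiles = []
    step_x = tile_w - overlap
    step_y = tile_h - overlap
    for y in range(0, max(h - overlap, 1), step_y):
        for x in range(0, max(w - overlap, 1), step_x):
            tiles.append((x, y, image[y:y + tile_h, x:x + tile_w]))
    return tiles

# e.g. an 8192x4096 image becomes a grid of 1200x600 crops
big = np.zeros((4096, 8192, 3), dtype=np.uint8)
print(len(tile_image(big)))
```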

Hmm, are you using the same workflow for formatting both datasets, i.e. not accidentally swapping RGB to BGR or something? How are you inputting 8192x4096 images, are you cropping to the...
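
One thing worth checking concretely: OpenCV loads images as BGR while PIL/matplotlib use RGB, so if one dataset went through one loading path and the other dataset through the other, the channels end up swapped. A quick sanity check (a sketch, assuming OpenCV and PIL are what you are loading with; the path is hypothetical):

```python
import cv2
import numpy as np
from PIL import Image

path = "example.jpg"  # hypothetical path

bgr = cv2.imread(path)                                      # OpenCV loads as B, G, R
rgb_from_pil = np.asarray(Image.open(path).convert("RGB"))  # PIL loads as R, G, B

# Make sure both datasets end up in the same channel order before training:
rgb_from_cv2 = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

# Comparing per-channel means of a few images from each dataset is a quick
# way to spot an accidental swap (a red-heavy scene peaks in channel 0 for
# RGB but channel 2 for BGR).
print("per-channel means:", rgb_from_cv2.mean(axis=(0, 1)), rgb_from_pil.mean(axis=(0, 1)))
```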

Most object detectors will struggle on such large images; at the very least you would need to increase the anchor sizes, and possibly the convolution windows, to keep the receptive fields...

The two architectures use different methods for calculating the loss, so you can't compare the values between them. Faster R-CNN uses a smooth L1 loss on the box locations...
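
To make that concrete, here is a rough sketch of the standard formulations RetinaNet combines (focal loss for classification plus smooth L1 for box regression, with the usual alpha=0.25, gamma=2 defaults); Faster R-CNN's cross-entropy plus smooth L1 is weighted and normalized differently, so the absolute numbers are not comparable between the two:

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    # Box-regression loss used (in some weighted form) by both Faster R-CNN
    # and RetinaNet: quadratic near zero, linear further out.
    x = np.abs(x)
    return np.where(x < beta, 0.5 * x ** 2 / beta, x - 0.5 * beta)

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    # RetinaNet's classification loss: cross-entropy scaled down on easy,
    # well-classified examples so the huge number of background anchors
    # does not dominate the total.
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# The same predicted confidences give very different numbers under the two
# classification losses, which is why raw loss values can't be compared
# across architectures.
p = np.array([0.9, 0.6, 0.1])
y = np.array([1, 1, 1])
print("cross-entropy:", -np.log(p))
print("focal loss:   ", focal_loss(p, y))
```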