
[IndexError: Index out of range] while training on the ImageNet dataset

JudeLee19 opened this issue 9 years ago • 3 comments

I tried to train on the ImageNet dataset with the pre-trained VGG16 model (solver.prototxt, VGG16.v2.caffemodel), following this post: http://sunshineatnoon.github.io/Train-fast-rcnn-model-on-imagenet-without-matlab/ (its dataset-registration step is sketched below).
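For context, the key wiring step from that post is registering the new ImageNet imdb in lib/datasets/factory.py so that --imdb imagenet_train resolves to the custom class. A rough sketch, assuming imagenet.py defines an imagenet(image_set, devkit_path) class as in the post (the class name, split names, and devkit path are all assumptions):

# In $FRCNN_ROOT/lib/datasets/factory.py (sketch; names and paths are assumptions)
from datasets.imagenet import imagenet

imagenet_devkit_path = '/path/to/ILSVRC_devkit'  # placeholder
for split in ['train', 'val']:
    name = 'imagenet_{}'.format(split)
    # __sets is the existing name -> constructor map that get_imdb() looks up
    __sets[name] = (lambda split=split: imagenet(split, imagenet_devkit_path))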

I created an imagenet.py file in the $FRCNN_ROOT/lib/datasets directory and followed all of the steps, but when I started training I got the error messages below.

I0427 13:33:15.288508 8314 layer_factory.hpp:77] Creating layer input-data
I0427 13:33:15.307318 8314 net.cpp:106] Creating Layer input-data
I0427 13:33:15.307342 8314 net.cpp:411] input-data -> data
I0427 13:33:15.307358 8314 net.cpp:411] input-data -> im_info
I0427 13:33:15.307368 8314 net.cpp:411] input-data -> gt_boxes
Top length : 3
XX: 3
<class 'caffe._caffe.RawBlobVec'>
['__class__', '__contains__', '__delattr__', '__delitem__', '__dict__', '__doc__', '__format__', '__getattribute__', '__getitem__', '__hash__', '__init__', '__instance_size__', '__iter__', '__len__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setitem__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'append', 'extend']
Error idx is 3
Traceback (most recent call last):
  File "./tools/train_net.py", line 113, in <module>
    max_iters=args.max_iters)
  File "/home/dev/bcached/opensource/py-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 134, in train_net
    pretrained_model=pretrained_model)
  File "/home/dev/bcached/opensource/py-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 43, in __init__
    self.solver = caffe.SGDSolver(solver_prototxt)
  File "/home/dev/bcached/opensource/py-faster-rcnn/tools/../lib/roi_data_layer/layer.py", line 128, in setup
    top[idx].reshape(1, self._num_classes * 4)
IndexError: Index out of range

When I printed the length of top and the idx that causes the error, both were 3, so it looks like index 3 simply doesn't exist in top. I have already checked Issue #36, which reports the same error, but I couldn't solve the problem.
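Looking at the code, the prototxt's input-data layer declares exactly three tops (data, im_info, gt_boxes), so len(top) is 3. setup() stays within three tops only when cfg.TRAIN.HAS_RPN is True; when HAS_RPN is False (the Fast R-CNN path) it expects rois/labels/bbox_targets tops instead and tries to reshape a fourth one, which matches the idx of 3 in the log above. Roughly (a paraphrased, shortened sketch of setup() in lib/roi_data_layer/layer.py, not the verbatim code):

# Paraphrased sketch of setup() in lib/roi_data_layer/layer.py (shortened, not verbatim)
idx = 0
top[idx].reshape(1, 3, 600, 1000)                  # 'data'
idx += 1
if cfg.TRAIN.HAS_RPN:
    top[idx].reshape(1, 3)                         # 'im_info'
    idx += 1
    top[idx].reshape(1, 4)                         # 'gt_boxes'  -> exactly 3 tops used
else:
    top[idx].reshape(1, 5)                         # 'rois'
    idx += 1
    top[idx].reshape(1)                            # 'labels'
    idx += 1
    if cfg.TRAIN.BBOX_REG:
        top[idx].reshape(1, self._num_classes * 4) # 'bbox_targets' -> idx 3, the failing line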

This is my forked repository with the edits: https://github.com/JudeLee19/py-faster-rcnn. Thank you in advance.

JudeLee19 · Apr 27 '16

I have the same problem. Did you solve it?

whq-hqw · Nov 17 '17

Try changing cfg.TRAIN.HAS_RPN to True. Maybe some debugging could also help...
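In practice that means making sure the config loaded before the solver is built sets it, for example by passing --cfg experiments/cfgs/faster_rcnn_end2end.yml on the command line, or (a minimal sketch, only appropriate if you are training with the 3-top data/im_info/gt_boxes prototxt) forcing it in code:

from fast_rcnn.config import cfg
cfg.TRAIN.HAS_RPN = True  # match the input-data layer that provides data, im_info, gt_boxes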

Leonardyao · May 11 '18

It is not a config bug. It is a multi-process data-sharing bug.

cfg.TRAIN.SNAPSHOT_INFIX = 'stage1'

# mp_kwargs = dict(
#         queue=mp_queue,
#         imdb_name=args.imdb_name,
#         init_model=args.pretrained_model,
#         solver=solvers[0],
#         max_iters=max_iters[0],
#         cfg=cfg)
# p = mp.Process(target=train_rpn, kwargs=mp_kwargs)
# p.start()
# rpn_stage1_out = mp_queue.get()
# p.join()

cfg.TRAIN.SNAPSHOT_INFIX = 'stage1'
# Run RPN stage 1 directly in the main process instead of spawning it with mp.Process.
train_rpn(queue=mp_queue,
          imdb_name=args.imdb_name,
          init_model=args.pretrained_model,
          solver=solvers[0],
          max_iters=max_iters[0],
          cfg=cfg)
rpn_stage1_out = mp_queue.get()
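train_rpn in the stock tools/train_faster_rcnn_alt_opt.py still puts its result onto the queue it is given, so mp_queue.get() returns the stage-1 RPN model path even without a child process. If the later stages fail the same way, the same in-process substitution would presumably be needed for them as well.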

stanley-king · Jun 28 '19