graph-rcnn.pytorch
need at least one array to stack
I ran the code with the command `python main.py --config-file configs/faster_rcnn_res101.yaml`, using the Mini VG dataset. There is an error saying:
```
Traceback (most recent call last):
  File "main.py", line 127, in <module>
```
I checked the code in `vg_hdf5.py`, in the function `load_graphs`:

```python
def load_graphs(graphs_file, images_file, mode='train', num_im=-1, num_val_im=0,
                filter_empty_rels=True, filter_non_overlap=False):
    ...
    split_mask = data_split == split                       # split_mask is all False
    split_mask &= roi_h5['img_to_first_box'][:] >= 0       # split_mask is all False
    if filter_empty_rels:
        split_mask &= roi_h5['img_to_first_rel'][:] >= 0   # split_mask is all False
    image_index = np.where(split_mask)[0]                  # np.where(split_mask) is empty
    ...
    for i in range(len(image_index)):                      # image_index is empty
        ...
    im_sizes = np.stack(im_sizes, 0)                       # im_sizes is empty
```
How can I solve this problem?
The Mini VG dataset is only meant for testing, which is why the split information in `VG-SGG.h5` contains only 2's (test == 2, training == 0).
```python
data_split = roi_h5['split'][:]
split = 2 if mode == 'test' else 0
split_mask = data_split == split
```
As you mentioned, split_mask consequently contains only False entries.
As a solution, you could download the full VG dataset. Alternatively, I guess you would have to fiddle with the mini VG files such as `VG-SGG.h5` and mark some samples for training.
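For the latter option, a minimal sketch, assuming the standard `VG-SGG.h5` layout where the `split` dataset uses 0 for train and 2 for test (the path and the number of samples below are placeholders):

```python
import h5py
import numpy as np

N_TRAIN = 800  # placeholder: how many of the mini VG samples to reassign to training

with h5py.File("data/vg/VG-SGG.h5", "r+") as f:
    print(np.unique(f["split"][:]))  # for mini VG this should print [2], i.e. test-only
    split = f["split"][:]
    split[:N_TRAIN] = 0              # 0 = training sample, 2 = test sample
    f["split"][...] = split          # write the modified split back into the file
```

Whether the rest of the pipeline then produces anything meaningful when trained on a test-only subset is a separate question, so treat this purely as an experiment.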
I can imagine that a pull request where you check for `len(np.where(split_mask)[0]) == 0` and raise a meaningful error would be a welcome contribution.
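A minimal sketch of such a check in `load_graphs()` (the exact message is just a suggestion):

```python
image_index = np.where(split_mask)[0]
if len(image_index) == 0:
    raise ValueError(
        "No images left for split '%s' in %s; the mini VG dataset "
        "only contains test samples (split == 2)." % (mode, graphs_file)
    )
```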
Is it possible to use mini VG for training? How could I modify the code to make it suitable for training? The original VG is too large; if I used it, the image database would occupy more than 300 GB of space!
I tried to download the models from the "Benchmarking" section of this project and used them for testing. I set MODEL.WEIGHT_DET to the path of these models. The same problem happened again. I checked the "mode" and "split" in vg_hdf5.py; they are both "train". After checking main.py, I think the code tries to train a model first?

```python
def test(cfg, args, model=None):
    """ test scene graph generation model """
    if model is None:
        arguments = {}
        arguments["iteration"] = 0
        model = build_model(cfg, arguments, args.local_rank, args.distributed)
    model.test(visualize=args.visualize)
```
If I load the model here directly:

```python
def test(cfg, args, model=torch.load("configs/sg_baseline_ckpt.pth")):
```

another error happens:
```
Traceback (most recent call last):
  File "main.py", line 127, in <module>
```
How can I solve it?
As stated in the documentation (https://github.com/jwyang/graph-rcnn.pytorch#evaluate), you have to pass the `--inference` flag if you only want to do testing. See this code snippet in main.py:

```python
if not args.inference:
    model = train(cfg, args)
else:
    test(cfg, args)
```
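For example, an evaluation-only run would then look like this (the config file is just the one used later in this thread; substitute your own):

```sh
python main.py --config-file configs/sgg_res101_step.yaml --inference
```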
I did use `--inference`, but the error is still there.
Hi @chlorane, were you able to solve this issue?
I used a dataset of 10 images, but it also showed "need at least one array to stack". Is there any solution?
I'm also getting the same error when trying to evaluate on mini_vg. Running

```sh
python main.py --config-file configs/sgg_res101_step.yaml --inference --visualize
```

I get the following error:
```
2021-04-08 12:12:43,731 scene_graph_generation INFO: Namespace(algorithm='sg_baseline', batchsize=0, config_file='configs/sgg_res101_step.yaml', distributed=False, inference=True, instance=-1, local_rank=0, resume=0, session=0, use_freq_prior=False, visualize=True)
2021-04-08 12:12:43,732 scene_graph_generation INFO: Loaded configuration file configs/sgg_res101_step.yaml
2021-04-08 12:12:43,732 scene_graph_generation INFO: Saving config into: logs/config.yml
Traceback (most recent call last):
  File "main.py", line 92, in <module>
    main()
  File "main.py", line 89, in main
    test(cfg, args)
  File "main.py", line 37, in test
    model = build_model(cfg, arguments, args.local_rank, args.distributed)
  File ".../graph-rcnn.pytorch/lib/model.py", line 307, in build_model
    return SceneGraphGeneration(cfg, arguments, local_rank, distributed)
  File ".../graph-rcnn.pytorch/lib/model.py", line 31, in __init__
    self.data_loader_train = build_data_loader(cfg, split="train", is_distributed=distributed)
  File ".../graph-rcnn.pytorch/lib/data/build.py", line 60, in build_data_loader
    dataset = vg_hdf5(cfg, split=split, transforms=transforms, num_im=num_im)
  File ".../graph-rcnn.pytorch/lib/data/vg_hdf5.py", line 56, in __init__
    filter_non_overlap=filter_non_overlap and split == "train",
  File ".../graph-rcnn.pytorch/lib/data/vg_hdf5.py", line 289, in load_graphs
    im_sizes = np.stack(im_sizes, 0)
  File "<__array_function__ internals>", line 6, in stack
  File ".../venvs/graph-rcnn/lib/python3.6/site-packages/numpy/core/shape_base.py", line 423, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack
```
Perhaps I missed something?