
Integration with LXMERT

Open johntiger1 opened this issue 4 years ago • 5 comments

If I want to use this repo to extract RCNN image features to train LXMERT, how can I do that? Do I just dump the features from

# Show the boxes, labels, and features
pred = instances.to('cpu')
v = Visualizer(im[:, :, :], MetadataCatalog.get("vg"), scale=1.2)
v = v.draw_instance_predictions(pred)
showarray(v.get_image()[:, :, ::-1])
print('instances:\n', instances)
print()
print('boxes:\n', instances.pred_boxes)
print()
print('Shape of features:\n', features.shape)

(from https://github.com/airsplay/py-bottom-up-attention/blob/master/demo/demo_feature_extraction_attr.ipynb)

into a .tsv file?
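
To be concrete, this is the kind of dump loop I had in mind (a rough sketch; the TSV field layout and the attr_* names on instances are my own guesses, not something taken from the notebook):

import base64
import csv
import numpy as np

# Sketch: write one row per image in the bottom-up-attention TSV layout
# that LXMERT's loader appears to expect. `img_id`, `im`, `instances`, and
# `features` come from my own extraction loop (as in the notebook above).
FIELDNAMES = ["img_id", "img_h", "img_w", "objects_id", "objects_conf",
              "attrs_id", "attrs_conf", "num_boxes", "boxes", "features"]

def encode(arr, dtype):
    # arrays are stored base64-encoded and decoded later with np.frombuffer
    return base64.b64encode(np.asarray(arr, dtype=dtype).tobytes()).decode("ascii")

with open("features.tsv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES, delimiter="\t")
    boxes = instances.pred_boxes.tensor.cpu().numpy()
    writer.writerow({
        "img_id": img_id,
        "img_h": im.shape[0],
        "img_w": im.shape[1],
        "objects_id": encode(instances.pred_classes.cpu().numpy(), np.int64),
        "objects_conf": encode(instances.scores.cpu().numpy(), np.float32),
        # attr_classes / attr_scores are how I assume the attribute outputs
        # are named in the attr demo; please correct me if that is wrong
        "attrs_id": encode(instances.attr_classes.cpu().numpy(), np.int64),
        "attrs_conf": encode(instances.attr_scores.cpu().numpy(), np.float32),
        "num_boxes": len(boxes),
        "boxes": encode(boxes, np.float32),
        "features": encode(features.cpu().numpy(), np.float32),
    })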

Btw, what is the difference between with and without attributes? Thanks!

johntiger1 avatar Jun 04 '20 15:06 johntiger1

Yes, that would work (at least in my tests).

But the best option would be the NMS approach used in this script: https://github.com/airsplay/py-bottom-up-attention/blob/834fa8b8123657fe6fa6b27c069015b824e07646/demo/detectron2_mscoco_proposal_maxnms.py#L54-L65
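
The rough idea in those lines: run class-wise NMS and loosen the IoU threshold until a fixed number of boxes survives, so every image contributes the same number of region features (LXMERT uses 36 per image). A loose paraphrase of that idea (not the actual code from the script):

import torch
from torchvision.ops import batched_nms

MIN_BOXES, MAX_BOXES = 36, 36  # LXMERT-style fixed number of regions per image

def select_boxes(boxes, scores, classes):
    # boxes: (N, 4) float tensor, scores: (N,), classes: (N,) predicted class ids
    for iou_thresh in torch.arange(0.5, 1.0, 0.1):
        # a higher IoU threshold suppresses less, so more boxes survive
        keep = batched_nms(boxes, scores, classes, float(iou_thresh))
        if len(keep) >= MIN_BOXES:
            break
    # batched_nms returns indices sorted by descending score
    return keep[:MAX_BOXES]

The exact thresholds and the per-class handling are in the linked lines.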

airsplay avatar Jun 04 '20 17:06 airsplay

Thank you, I will try the non-maximum suppression approach. Just curious: does this mean that other SOTA vision models could be used in the future as well? R-CNN is now several years old, and I was wondering whether you have experimented with more modern vision models, which might give better performance.

johntiger1 avatar Jun 04 '20 18:06 johntiger1

Hmmm... This code does not provide training; it only provides weights converted from the original Caffe model.

You could try this repo and switch the backbone: https://github.com/MILVLG/bottom-up-attention.pytorch

airsplay avatar Jun 04 '20 19:06 airsplay

Hi @johntiger1, before I finish coding my project:

How long does it take to extract features for NLVR2's 107,292 images, given that LXMERT takes around 5 to 6 hours for the training split and 1 to 2 hours for the valid and test splits?

Would you mind giving a time estimate? Thanks.
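
(For what it's worth, my plan is to estimate it myself by timing a small sample and extrapolating, roughly like this; extract_features and image_paths stand in for my own extraction pipeline:)

import time

SAMPLE = 100  # time a small sample, then extrapolate to all 107,292 images
start = time.time()
for path in image_paths[:SAMPLE]:
    extract_features(path)  # placeholder for the actual per-image extraction
per_image = (time.time() - start) / SAMPLE
print(f"~{per_image:.2f}s per image, "
      f"~{per_image * 107292 / 3600:.1f}h for all of NLVR2")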

yezhengli-Mr9 avatar Jan 13 '21 04:01 yezhengli-Mr9

Hi @johntiger1, I found a solution to my time-estimate question and have summarized it here. Thanks anyway.

yezhengli-Mr9 avatar Jan 14 '21 23:01 yezhengli-Mr9