Error while running Image/Text retrieval inference task
I am trying to run oscar/run_retrieval.py with the base-vg-labels Oscar model on the 1K COCO test set:
```
python oscar/run_retrieval.py \
    --do_test \
    --do_eval \
    --test_split test \
    --num_captions_per_img_val 5 \
    --eval_img_keys_file datasets/coco_ir/test_img_keys_1k.tsv \
    --cross_image_eval \
    --per_gpu_eval_batch_size 64 \
    --eval_model_dir pretrained_models/base-vg-labels/ep_67_588997
```
The following is the error raised (terminal output):
```
2021-06-02 09:08:31,344 vlpretrain WARNING: Device: cpu, n_gpu: 0
2021-06-02 09:08:31,344 vlpretrain INFO: output_mode: classification, #Labels: 2
2021-06-02 09:08:31,370 vlpretrain INFO: Evaluate the following checkpoint: pretrained_models/base-vg-labels/ep_67_588997
Traceback (most recent call last):
  File "oscar/run_retrieval.py", line 664, in
```
I checked oscar/modeling/modeling_bert.py, where the code accesses the `loss_type` attribute of the `BertConfig` object. That file imports `BertConfig` from `transformers.pytorch_transformers.modeling_bert`, so I looked at the `BertConfig` class in transformers/pytorch_transformers/modeling_bert.py, and it has no `loss_type` attribute. Since the attribute does not exist, an `AttributeError` is bound to arise.
How can I solve this problem? Please help.
Oscar/oscar/modeling/modeling_bert.py: https://github.com/microsoft/Oscar/blob/master/oscar/modeling/modeling_bert.py
Oscar/transformers/pytorch_transformers/modeling_bert.py: https://github.com/huggingface/transformers/blob/067923d3267325f525f4e46f357360c191ba562e/pytorch_transformers/modeling_bert.py
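For reference, here is a minimal sketch of what I believe is failing, reduced to the bare attribute access (this assumes the pinned pytorch_transformers package from the Oscar setup is installed, and uses the checkpoint path from my command above):

```python
from pytorch_transformers.modeling_bert import BertConfig

# Load the config that ships with the base-vg-labels checkpoint.
config = BertConfig.from_pretrained("pretrained_models/base-vg-labels/ep_67_588997")

# BertConfig defines no `loss_type`, so this raises
# AttributeError: 'BertConfig' object has no attribute 'loss_type',
# just like the access inside oscar/modeling/modeling_bert.py does.
print(config.loss_type)
```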
Could you try `pip install --upgrade transformers`?
@antocapp I am also facing a similar issue. As I understand it, https://github.com/microsoft/Oscar points to a specific transformers commit. If we upgrade to the latest transformers, it will break other parts of the code.
@SwatiTiwarii @antocapp Yes, the latest transformers repo is quite different. One clear observation: the specific transformers commit that Oscar points to has a pytorch_transformers sub-folder, which many Python files in the Oscar repo import from, whereas the latest transformers has no pytorch_transformers sub-folder. So the Oscar code is not compatible with the latest transformers.
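A rough way to check which layout you have installed (just a sketch; the import path is the one mentioned above and may differ in your setup):

```python
# Does the installed transformers expose the pytorch_transformers
# sub-folder that Oscar's imports rely on?
try:
    from transformers.pytorch_transformers.modeling_bert import BertConfig
    print("Pinned Oscar-compatible layout found.")
except ImportError:
    import transformers
    print("Incompatible layout (transformers %s): no pytorch_transformers "
          "sub-package." % transformers.__version__)
```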
Please let me know if you have found a workaround to this problem, or when you find one. Also, if possible, could you describe the task in which you encountered the similar error?
Same problem here. Any news on the matter?
I see why. If you look at the source of run_retrieval.py, line 600, the `if args.do_train:` branch assigns `config.loss_type = args.loss_type`. But we are running with `do_test`, and the following `else` branch never assigns `config.loss_type`. So we just need to add `config.loss_type = args.loss_type` after `config = config_class.from_pretrained(checkpoint)`.
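In context, the patched eval branch would look roughly like this (a sketch; I am paraphrasing the surrounding lines, which may differ slightly in the actual file):

```python
# In the eval/test branch of oscar/run_retrieval.py (around line 600):
config = config_class.from_pretrained(checkpoint)
config.loss_type = args.loss_type  # added: mirror the assignment in the do_train branch
model = model_class.from_pretrained(checkpoint, config=config)
```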
@iacercalixto @Prat1510 @SwatiTiwarii