
Add dummy_inputs for pytorch_version of vision_models


What does this PR do?

from transformers import ViTForImageClassification
from transformers.utils.fx import symbolic_trace

model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
traced = symbolic_trace(model)

Traceback (most recent call last):
  File "bug_check.py", line 5, in <module>
    traced = symbolic_trace(model)
  File "/opt/conda/lib/python3.8/site-packages/transformers/utils/fx.py", line 1214, in symbolic_trace
    concrete_args = get_concrete_args(model, input_names)
  File "/opt/conda/lib/python3.8/site-packages/transformers/utils/fx.py", line 1167, in get_concrete_args
    raise ValueError(
ValueError: The model does not have input(s) named: input_ids, expected a subset of the following: pixel_values, head_mask, labels, output_attentions, output_hidden_states, interpolate_pos_encoding, return_dict

When using transformers.utils.fx.symbolic_trace, the PyTorch versions of the vision models throw an error because the default dummy_inputs is "input_ids".

This is fine for text models, but vision models expect "pixel_values" as their base input, which triggers the error above.

This PR adds dummy_inputs to several PyTorch models, following the dummy_inputs defined in the corresponding TensorFlow models.

This change fixes the convnext, convnextv2, resnet, segformer, vit, and vit_hybrid models.
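For illustration, here is a minimal sketch (not the exact code in this PR; the base class, shapes, and random values shown here are assumptions) of a pixel_values-based dummy_inputs property in the style of the TF vision models:

import torch
from transformers.models.vit.modeling_vit import ViTPreTrainedModel

class ViTPreTrainedModelWithDummies(ViTPreTrainedModel):
    @property
    def dummy_inputs(self):
        # Sketch only: a batch of random pixel_values shaped from the model
        # config, mirroring the TF vision models' dummy_inputs.
        return {
            "pixel_values": torch.rand(
                1,                         # batch size
                self.config.num_channels,  # e.g. 3 for RGB
                self.config.image_size,    # e.g. 224
                self.config.image_size,
            )
        }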

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [x] Did you read the contributor guideline, Pull Request section?
  • [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • [ ] Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts

kolonist-minjun avatar Apr 25 '23 02:04 kolonist-minjun

Hi @kolonist-minjun, thanks for opening this PR!

dummy_inputs is a legacy property of the pretrained models and not one we're actively supporting. To use symbolic_trace, you can pass the input names directly:

from transformers import ViTForImageClassification
from transformers.utils.fx import symbolic_trace

model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
traced = symbolic_trace(model, input_names=['pixel_values'])

amyeroberts avatar Apr 25 '23 12:04 amyeroberts

Hi @amyeroberts, thanks for the comment! The TF models have dummy_inputs, so I thought it would be good to add them to the PyTorch models as well, for consistency.

kolonist-minjun avatar Apr 26 '23 23:04 kolonist-minjun

@kolonist-minjun Yes, it's a bit confusing given that some PyTorch models also have dummy_inputs implemented - hopefully once it's fully deprecated and removed this will be clearer. We have dummy_inputs for the TF models because Keras models have to be built before pretrained weights can be loaded.
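For illustration, a minimal sketch of that lazy-build behaviour in plain Keras (not the transformers internals; the layer and sizes here are arbitrary):

import tensorflow as tf

# Keras creates a layer's variables lazily, on the first call, so they don't
# exist until the layer has seen an input of the right shape.
layer = tf.keras.layers.Dense(4)
print(layer.weights)         # [] -- no variables yet, nothing to load into

_ = layer(tf.zeros((1, 8)))  # "build" the layer with a dummy batch
print(layer.weights)         # kernel (8, 4) and bias (4,) now exist,
                             # so pretrained values can be assigned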

amyeroberts avatar Apr 27 '23 08:04 amyeroberts

@amyeroberts Thank you for your comment. I will close this PR!

kolonist-minjun avatar Apr 27 '23 09:04 kolonist-minjun