Add doctests to Perceiver examples
What does this PR do?
Related to #16292
Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the contributor guideline, Pull Request section?
- [X] Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [X] Did you write any new necessary tests?
Who can review?
Taken from #16292: @patrickvonplaten @ydshieh @patil-suraj
Hi @stevenmanton
Thanks a lot for the PR! Currently, the doctests fail for the following:
FAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationConvProcessing.forward
FAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationFourier.forward
FAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationLearned.forward
We should add the expected outputs in the doc examples.
For other doc examples in this model, the tests pass, but they don't have outputs to test.
For example, PerceiverForOpticalFlow has
>>> logits = outputs.logits
without the output and its expected value. We can have something like
>>> list(logits.shape)
followed by the expected shape.
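Putting it together, the completed check might look like this (a sketch only; the shape below is illustrative and should be replaced with the value collected from an actual run of the example):

>>> logits = outputs.logits
>>> list(logits.shape)
[1, 368, 496, 2]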
Would you like to fix and enhance those doc examples?
Once you have a change (staged or committed), you can run the tests like this:
python utils/prepare_for_doc_test.py src/transformers/models/perceiver/modeling_perceiver.py
pytest --doctest-modules src/transformers/models/perceiver/modeling_perceiver.py -sv --doctest-continue-on-failure
Once the tests have run, you can clean up the git status before making further changes or pushing.
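For example (a sketch, assuming the same file path as above and that your real change was staged first), discarding the script's temporary edits can be done with:

# restore the staged version of the file, dropping the script's edits
git checkout -- src/transformers/models/perceiver/modeling_perceiver.py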
Thanks!
@ydshieh thanks for the quick feedback! Yes, I noticed those tests were failing, but I thought it might be something about my local environment and the extra newline stuff. I just pushed a fix, which passes locally.
By the way, how did you know those tests were failing? The CI pipelines all seemed to be passing. Did you have to checkout my branch and run it locally?
@stevenmanton Thanks for the push. However, instead of changing to
>>> predicted_class = model.config.id2label[predicted_class_idx]
we should change it to
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
and add the expected output below it - so the doctest will test against it.
You can find similar work done in https://github.com/huggingface/transformers/blob/d5848a574a3990c95f20512673ecef9f57e0fe81/src/transformers/models/deit/modeling_deit.py#L735-L736
Otherwise, the example has nothing to be tested.
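Concretely, following the DeiT example linked above, the end of the snippet could look like this (the printed class here is only illustrative; the real value should be taken from a run of the example):

>>> predicted_class_idx = logits.argmax(-1).item()
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
Predicted class: tabby, tabby cat

The line after the print call is the expected output that doctest compares against.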
By the way, how did you know those tests were failing? The CI pipelines all seemed to be passing. Did you have to checkout my branch and run it locally?
Yes. The PR CI on CircleCI does not run the doctests :-). They are run after the PR is merged.
@ydshieh Thanks for your feedback and patience. I believe I've corrected it. The extra newline stuff is confusing, but if you run prepare_for_doc_test.py (which I think just adds a newline to all the docstrings) on the last commit, all tests pass for me locally.
Thanks! As we are going to run the doctest for this model, would you like to add some expected outputs at the following places?
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1921
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1695
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1131
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1033
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1021
And a few places in this doc example (for logits and loss):
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1695
For logits, it would look like adding the following (the values should be collected from an actual run):
>>> list(logits.shape)
[1, 196, 8192]
When running prepare_for_doc_test.py, it will add some empty lines to make the doctests pass. That is why we should stage our change, run that script, run the doctests, and discard the script's change before committing or making further changes :-)
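The full loop might look like this (a sketch; commands assumed to be run from the repository root):

# stage the real change first, so it can be restored later
git add src/transformers/models/perceiver/modeling_perceiver.py
# add the empty lines doctest needs
python utils/prepare_for_doc_test.py src/transformers/models/perceiver/modeling_perceiver.py
# run the doctests
pytest --doctest-modules src/transformers/models/perceiver/modeling_perceiver.py -sv --doctest-continue-on-failure
# discard the script's edits, restoring the staged version of the file
git checkout -- src/transformers/models/perceiver/modeling_perceiver.py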
@ydshieh Ok, I added some more checks for the sizes of logits. They all pass for me locally.