
Pipeline testing - using tiny models on Hub

ydshieh opened this pull request 3 years ago · 4 comments

What does this PR do?

Pipeline testing - using tiny models on Hub.

A few comments:

  • This PR moves the tiny model creation from PipelineTestCaseMeta (where it was done dynamically during testing) to utils/create_dummy_models.py (where the tiny models are created once and live on the Hub):

    • The logic is still large, but at least it runs once rather than on every test session
  • When a new model is added to transformers, it will NOT be used in pipeline testing UNTIL we create & upload tiny models for the new model type.

    • Even if we upload new tiny models (or re-create existing ones), we also have to UPDATE this repo; see the comments below
  • While pytest collects the tests to run, the collection happens in each process (if we specify -n N with N > 1):

    • If we call from_pretrained during test collection, there are far too many requests and the server starts refusing them at some point: the set of collected tests then varies between runs and is incomplete
    • So I upload a file processor_classes.json containing the information necessary to call gen_test; from_pretrained is only called when a test actually runs (see the sketch after this list)
  • Some tests are just not working (yet), and an important subset of those failing tests is not exercised on the current main branch either:

    • for example, on main, all pipeline tests use fast tokenizers
    • we probably need to check (and possibly fix) some of them, but depending on impact and usage, we will leave some of them skipped for now
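For illustration, here is a minimal sketch of the deferred-loading pattern described above. The JSON shape and the function bodies are assumptions, not the PR's actual code; only the names processor_classes.json and gen_test come from the description.

```python
import json

# Collection time: read only static metadata from the uploaded JSON file,
# so no Hub requests happen while pytest workers collect tests.
with open("processor_classes.json") as f:
    # Assumed shape: {checkpoint_id: [processor class names]}; the real file
    # contains whatever information gen_test needs.
    processor_classes = json.load(f)

def gen_test(checkpoint_id, processor_class_names):
    def test(self):
        # from_pretrained runs only here, when the test itself executes,
        # so "pytest -n N" workers no longer flood the server during collection.
        from transformers import AutoModel
        model = AutoModel.from_pretrained(checkpoint_id)
        ...
    return test
```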

ydshieh avatar Nov 23 '22 20:11 ydshieh

The documentation is not available anymore as the PR was closed or merged.

Gently pinging @LysandreJik for a review at their convenience. We discussed offline last time that the pipeline testing will eventually avoid using a metaclass - I will work on that in a future PR. I think it's better to make progressive changes toward our ultimate goal 😊🙏

ydshieh avatar Jan 12 '23 12:01 ydshieh

Thank you for the ping, I'll have a look!

LysandreJik avatar Jan 20 '23 20:01 LysandreJik

Hi @Narsil

In this PR, commit 3d46ed81, I reverted some changes from your (merged) PR #20851.

In short: get_test_pipeline(self, model, tokenizer, feature_extractor, image_processor) is changed to get_test_pipeline(self, model, tokenizer, processor).

Before your PR, it was get_test_pipeline(self, model, tokenizer, feature_extractor).
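Laid out in order, the three versions of the signature discussed here are:

```python
# Before #20851:
def get_test_pipeline(self, model, tokenizer, feature_extractor):
    ...

# After #20851:
def get_test_pipeline(self, model, tokenizer, feature_extractor, image_processor):
    ...

# After commit 3d46ed81 in this PR:
def get_test_pipeline(self, model, tokenizer, processor):
    ...
```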

More context:

  • This PR leverages the uploaded checkpoints on the Hub for pipeline testing.
  • In a follow-up PR, we plan to remove the usage of PipelineTestCaseMeta
    • (therefore, this particular change will be short-lived)

Let me know if you have any questions or comments 🙏

ydshieh avatar Jan 30 '23 07:01 ydshieh

This change was necessary to get some tests running.

Namely, testing that oneformer and the like actually work. These models do not have a feature extractor, only an ImageProcessor. So how can you make it work?

Since you're using tiny models, maybe that function could be bypassed entirely?

Also, for the network issue (too many from_pretrained calls, if I understand correctly): isn't there a way to download all tiny models once and keep them on the runner, so we could run the tests in offline mode with no network calls? Maybe falling back to network mode if there's a failure (so a new model still gets downloaded)?
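Something like this, perhaps (the repo list is just an example; TRANSFORMERS_OFFLINE is the existing environment variable for disabling Hub traffic):

```python
import os
from huggingface_hub import snapshot_download

# 1) Warm the runner's cache once, while network access is available.
tiny_model_repos = ["hf-internal-testing/tiny-random-bert"]  # hypothetical list of tiny checkpoints
for repo_id in tiny_model_repos:
    snapshot_download(repo_id)

# 2) Then run the test suite with Hub traffic disabled; cached files are reused.
#    The variable must be set before transformers is imported,
#    e.g. `TRANSFORMERS_OFFLINE=1 pytest -n 8 tests/pipelines`.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```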

Narsil avatar Jan 30 '23 09:01 Narsil

@Narsil

These models do not have a feature extractor, only an ImageProcessor. So how can you make it work?

  • The creation and upload of tiny models (which is done in another script) also creates the tokenizers and/or processors (feature extractors or image processors). During pipeline testing, we just load them - see the sketch after this list. I don't see any problem here, but let me know if I missed any detail.
  • (however, the tiny model creation should be run on a regular basis (or triggered by some conditions) in order to make the tiny checkpoints for newly added models available on the Hub)
    • this is not done yet, but I will work on it too
    • oneformer doesn't have a tiny model checkpoint yet, so it is not tested by this PR
    • but for other models, even if they only have image processors, the tests already pass
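As a rough sketch of that loading step (the checkpoint name below is just an example, not one of the PR's actual repos):

```python
from transformers import AutoModel, AutoTokenizer, AutoImageProcessor

repo_id = "hf-internal-testing/tiny-random-ViTModel"  # illustrative tiny checkpoint
model = AutoModel.from_pretrained(repo_id)

# Vision-only models such as oneformer ship an image processor rather than a
# tokenizer, so the test loads whichever preprocessor the tiny repo contains.
try:
    processor = AutoImageProcessor.from_pretrained(repo_id)
except (OSError, ValueError):  # whichever error signals there is no image processor
    processor = AutoTokenizer.from_pretrained(repo_id)
```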

Also, for the network issue (too many from_pretrained calls, if I understand correctly): isn't there a way to download all tiny models once and keep them on the runner

On our hosted runners, it's fine (i.e. everything is cached). But what I mentioned concerns the pull request CI, which runs on CircleCI. So far I haven't looked into how to do something similar there.

ydshieh avatar Jan 30 '23 09:01 ydshieh