[CI] [GHA] Use `OV_CACHE` in the WWB tests
@akashchi any ideas about: https://github.com/openvinotoolkit/openvino.genai/actions/runs/18162007204/job/51697457952#step:7:4574 ?

```
ERROR tools/who_what_benchmark/tests/test_cli_text.py::test_text_target_model - OSError: Can't load tokenizer for 'facebook/opt-125m'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/opt-125m' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.
```
I believe that this

> Looks like a new Path intervenes with old string paths. Better check it once more.

and the error reported by @as-suvorov are not caused by `Path`/`os.path`. The models were not present on the share, so they could not be used by path: they first had to be downloaded by `model_id` and only then used via their full paths. I am working on it.
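A minimal sketch of the fallback described above. All names here (`resolve_model_source`, the `owner--name` share layout) are assumptions for illustration, not the actual WWB code: if the model directory already exists on the share, use it by path; otherwise return the Hub `model_id` so the model gets downloaded first.

```python
from pathlib import Path


def resolve_model_source(model_id: str, share_root: Path) -> str:
    """Return a local directory path when the model already exists on the
    share; otherwise return the Hub model id so the caller downloads it
    first and only then works with the full local path.
    (Hypothetical helper, not the actual WWB implementation.)"""
    # Assumed share layout: 'facebook/opt-125m' -> 'facebook--opt-125m'.
    candidate = share_root / model_id.replace("/", "--")
    if candidate.is_dir():
        # Model is present on the share -- safe to use it by path.
        return str(candidate)
    # Not on the share yet: fall back to the model id, which triggers
    # a download (e.g. via from_pretrained) on first use.
    return model_id
```

With a fallback like this, absent models are referenced by `model_id` instead of a non-existent local directory, which avoids the `OSError: Can't load tokenizer` failure above.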
Thanks.
I want to unblock PR merging ASAP. The `ov_cache` part in WWB seems to work better; I see no rate limiting in this PR. Can we proceed with merging and address this OSError in follow-up PRs?
Please address @sgonorov's and Copilot's comments.
@apaniukov good comments. Would you mind addressing them in a separate PR? I want to merge this PR ASAP to unblock other PRs.