question_generation

How to use a local model?

Open · mirfan899 opened this issue 3 years ago · 1 comment

I tried to use a local model from my machine to generate questions, but the pipeline doesn't seem to handle the model argument when the model is stored locally: it always downloads the model before generating questions.

from pipelines import pipeline

nlp = pipeline("question-generation", model="/home/irfan/Downloads/qg/t5-small-qg-hl")

questions = nlp("42 is the answer to life, the universe and everything.")

for question in questions:
    print(question)

Here is the output I get:

Downloading:   2%|▏         | 4.42M/242M [00:03<02:35, 1.53MB/s]Traceback (most recent call last):
  File "/home/irfan/PycharmProjects/qg/question_generation/test.py", line 7, in <module>
    nlp = pipeline("question-generation", model=model, tokenizer=tokenizer)
  File "/home/irfan/PycharmProjects/qg/question_generation/pipelines.py", line 357, in pipeline
    ans_model = AutoModelForSeq2SeqLM.from_pretrained(ans_model)
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1206, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/modeling_utils.py", line 651, in from_pretrained
    local_files_only=local_files_only,
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 571, in cached_path
    local_files_only=local_files_only,
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 750, in get_from_cache
    http_get(url, temp_file, proxies=proxies, resume_size=resume_size, user_agent=user_agent)
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 643, in http_get
    for chunk in response.iter_content(chunk_size=1024):
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/requests/models.py", line 753, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/urllib3/response.py", line 576, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/urllib3/response.py", line 512, in read
    with self._error_catcher():
  File "/usr/lib/python3.6/contextlib.py", line 79, in __enter__
    def __enter__(self):
KeyboardInterrupt
Downloading:   2%|▏         | 4.43M/242M [00:03<02:48, 1.41MB/s]

mirfan899 · Jun 26, 2021
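
The traceback hints at why the download happens even though model points to a local directory: pipelines.py also loads a second, answer-extraction model (ans_model = AutoModelForSeq2SeqLM.from_pretrained(ans_model)), and that one still defaults to a checkpoint on the Hugging Face Hub. A minimal sketch, assuming your copy of pipelines.py exposes ans_model/ans_tokenizer arguments (the traceback suggests it loads one); the ans_path below is a hypothetical local copy of the answer-extraction checkpoint, not a path from the original report:

from pipelines import pipeline

# Hypothetical local paths; each directory needs config.json,
# the tokenizer files, and the model weights.
qg_path = "/home/irfan/Downloads/qg/t5-small-qg-hl"
ans_path = "/home/irfan/Downloads/qg/t5-small-qa-qg-hl"  # assumed local copy

# Passing ans_model/ans_tokenizer is an assumption based on the traceback;
# it should stop the pipeline from falling back to the Hub default
# for answer extraction.
nlp = pipeline(
    "question-generation",
    model=qg_path,
    tokenizer=qg_path,
    ans_model=ans_path,
    ans_tokenizer=ans_path,
)

questions = nlp("42 is the answer to life, the universe and everything.")
for question in questions:
    print(question)

If the argument names differ in your version, the same idea applies: every from_pretrained call inside the pipeline needs a local path before it stops downloading.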

I am also facing the same issue. I downloaded valhalla/t5-small-qg-hl from the Hugging Face Hub, but I don't know how to run inference with it. Please share the code for this.

mruthyunjaya117 · May 9, 2023
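
For plain inference without pipelines.py, here is a minimal sketch using the transformers library directly. It follows the input format documented on the valhalla/t5-small-qg-hl model card (answer span wrapped in <hl> tokens, with the "generate question:" task prefix); the generation settings are illustrative, not taken from the repository:

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# A Hub id works here; a path to the local directory you downloaded
# the checkpoint into works the same way.
path = "valhalla/t5-small-qg-hl"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForSeq2SeqLM.from_pretrained(path)

# Highlight the answer span with <hl> tokens and add the task prefix.
text = "generate question: <hl> 42 <hl> is the answer to life, the universe and everything."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32, num_beams=4)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

This should print a question along the lines of "What is the answer to life, the universe and everything?".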