I tried to use a local model from my machine to generate questions, but it seems the pipeline can't handle the model argument when the model is stored locally; it always downloads the model instead.
from pipelines import pipeline

nlp = pipeline("question-generation", model="/home/irfan/Downloads/qg/t5-small-qg-hl")
questions = nlp("42 is the answer to life, the universe and everything.")
for question in questions:
    print(question)
Here is what it shows:
Downloading: 2%|▏ | 4.42M/242M [00:03<02:35, 1.53MB/s]
Traceback (most recent call last):
File "/home/irfan/PycharmProjects/qg/question_generation/test.py", line 7, in<module>
nlp = pipeline("question-generation", model=model, tokenizer=tokenizer)
File "/home/irfan/PycharmProjects/qg/question_generation/pipelines.py", line 357, in pipeline
ans_model = AutoModelForSeq2SeqLM.from_pretrained(ans_model)
File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1206, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/modeling_utils.py", line 651, in from_pretrained
local_files_only=local_files_only,
File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 571, in cached_path
local_files_only=local_files_only,
File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 750, in get_from_cache
http_get(url, temp_file, proxies=proxies, resume_size=resume_size, user_agent=user_agent)
File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 643, in http_get
for chunk in response.iter_content(chunk_size=1024):
File "/home/irfan/environments/qg/lib/python3.6/site-packages/requests/models.py", line 753, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/home/irfan/environments/qg/lib/python3.6/site-packages/urllib3/response.py", line 576, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/home/irfan/environments/qg/lib/python3.6/site-packages/urllib3/response.py", line 512, inread
with self._error_catcher():
File "/usr/lib/python3.6/contextlib.py", line 79, in __enter__
def __enter__(self):
KeyboardInterrupt
Downloading: 2%|▏ | 4.43M/242M [00:03<02:48, 1.41MB/s]
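Note what the traceback actually shows: the download that fails is not the model passed via model. The frame at pipelines.py, line 357 shows the helper loading a separate answer-extraction model (ans_model) whose default points at the Hugging Face hub, so it is fetched even when model is a local directory. A possible workaround is sketched below; the ans_model/ans_tokenizer argument names are inferred from the traceback and the second path is a hypothetical local copy, so check both against your copy of pipelines.py.

# Workaround sketch: pass local paths for both the QG model and the
# answer-extraction model so nothing is fetched from the hub.
# NOTE: the ans_model/ans_tokenizer parameter names are assumptions inferred
# from the traceback, and the second path is a hypothetical local copy.
from pipelines import pipeline

nlp = pipeline(
    "question-generation",
    model="/home/irfan/Downloads/qg/t5-small-qg-hl",
    tokenizer="/home/irfan/Downloads/qg/t5-small-qg-hl",
    ans_model="/home/irfan/Downloads/qg/t5-small-qa-qg-hl",
    ans_tokenizer="/home/irfan/Downloads/qg/t5-small-qa-qg-hl",
)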
I am also facing the same issue.
I downloaded valhalla/t5-small-qg-hl from the Hugging Face hub, but I don't know how to run inference with it. Please share the code for this.
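One way to run inference without the repo's pipeline helper is to load the downloaded files directly with transformers. Here is a minimal sketch, assuming the checkpoint sits in a local directory and uses the highlight input format described on the model card (answer span wrapped in <hl> tokens, with a "generate question:" prefix); both the path and the exact prompt format are assumptions to adapt to your setup.

# Minimal sketch: direct inference with transformers on a locally downloaded
# valhalla/t5-small-qg-hl checkpoint. The path is hypothetical, and the
# <hl>-highlight prompt format is taken from the model card, so verify both.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

local_path = "/path/to/t5-small-qg-hl"  # hypothetical local directory
tokenizer = AutoTokenizer.from_pretrained(local_path)
model = AutoModelForSeq2SeqLM.from_pretrained(local_path)

# Highlight the answer span ("42") with <hl> tokens and add the task prefix.
text = "generate question: <hl> 42 <hl> is the answer to life, the universe and everything."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(inputs["input_ids"],
                         attention_mask=inputs["attention_mask"],
                         max_length=32, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))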