How to use local model? #80

Open
mirfan899 opened this issue Jun 26, 2021 · 1 comment

@mirfan899

I tried to use a local model from my machine to generate questions, but the pipeline doesn't seem to handle the model argument when it points to a local path. It always downloads a model before generating questions.

from pipelines import pipeline

nlp = pipeline("question-generation", model="/home/irfan/Downloads/qg/t5-small-qg-hl")

questions = nlp("42 is the answer to life, the universe and everything.")

for question in questions:
    print(question)

Here is the output:

Downloading:   2%|| 4.42M/242M [00:03<02:35, 1.53MB/s]
Traceback (most recent call last):
  File "/home/irfan/PycharmProjects/qg/question_generation/test.py", line 7, in <module>
    nlp = pipeline("question-generation", model=model, tokenizer=tokenizer)
  File "/home/irfan/PycharmProjects/qg/question_generation/pipelines.py", line 357, in pipeline
    ans_model = AutoModelForSeq2SeqLM.from_pretrained(ans_model)
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1206, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/modeling_utils.py", line 651, in from_pretrained
    local_files_only=local_files_only,
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 571, in cached_path
    local_files_only=local_files_only,
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 750, in get_from_cache
    http_get(url, temp_file, proxies=proxies, resume_size=resume_size, user_agent=user_agent)
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/transformers/file_utils.py", line 643, in http_get
    for chunk in response.iter_content(chunk_size=1024):
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/requests/models.py", line 753, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/urllib3/response.py", line 576, in stream
    data = self.read(amt=amt, decode_content=decode_content)
  File "/home/irfan/environments/qg/lib/python3.6/site-packages/urllib3/response.py", line 512, in read
    with self._error_catcher():
  File "/usr/lib/python3.6/contextlib.py", line 79, in __enter__
    def __enter__(self):
KeyboardInterrupt
Downloading:   2%|| 4.43M/242M [00:03<02:48, 1.41MB/s]
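
The traceback suggests the download is for the answer-extraction model: pipelines.py line 357 calls AutoModelForSeq2SeqLM.from_pretrained(ans_model), so passing a local path for model alone still fetches ans_model from the Hub. Below is a minimal sketch, assuming pipeline() also accepts ans_model and ans_tokenizer arguments and that a local answer-extraction checkpoint exists at the path shown (both are assumptions, not confirmed in this thread):

from pipelines import pipeline

# Local directory with the QG checkpoint (from the original post).
local_qg = "/home/irfan/Downloads/qg/t5-small-qg-hl"
# Hypothetical local directory with an answer-extraction checkpoint.
local_ans = "/home/irfan/Downloads/qg/t5-small-qa-qg-hl"

nlp = pipeline(
    "question-generation",
    model=local_qg,
    tokenizer=local_qg,
    ans_model=local_ans,      # keep the answer model local as well
    ans_tokenizer=local_ans,
)

questions = nlp("42 is the answer to life, the universe and everything.")
for question in questions:
    print(question)
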
@mruthyunjaya117

I am also facing the same issue.
I downloaded valhalla/t5-small-qg-hl from the Hugging Face Hub, but I don't know how to run inference with it.
Please share the code for this.
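
For inference without the pipeline wrapper, here is a minimal sketch that loads a locally downloaded checkpoint directly with transformers. It assumes the model's highlight-token input format ("generate question: ... <hl> answer span <hl> ..."); check the model card to confirm the exact format.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Directory containing config.json, tokenizer files, and model weights.
model_dir = "/path/to/t5-small-qg-hl"

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)

# Highlight the answer span with <hl> tokens and prefix the task name.
text = "generate question: <hl> 42 <hl> is the answer to life, the universe and everything."

input_ids = tokenizer.encode(text, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
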
