The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
Setting pad_token_id to eos_token_id:28895 for open-end generation.
Output:
Photosynthesis is [photosynthetic activity]... [that] is one of the fundamental capabilities of plants on Earth. I would never be thinking about that in light of a world without oxygen. There are still oxygen
This is the "Example Usage" code I ran:
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
device = torch.device("cuda")
tokenizer = GPT2Tokenizer.from_pretrained("stanford-crfm/BioMedLM")
model = GPT2LMHeadModel.from_pretrained("stanford-crfm/BioMedLM").to(device)
input_ids = tokenizer.encode(
    "Photosynthesis is ", return_tensors="pt"
).to(device)
sample_output = model.generate(input_ids, do_sample=True, max_length=50, top_k=50)
print("Output:\n" + 100 * "-")
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))
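For what it's worth, the warnings seem to come from generate() receiving only input_ids. Below is a minimal sketch of the same example that follows the warning's own suggestion, passing an explicit attention_mask and setting pad_token_id to the EOS token; this is my own adaptation, not something from the model card, so treat it as an assumption that it behaves the same otherwise:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = torch.device("cuda")
tokenizer = GPT2Tokenizer.from_pretrained("stanford-crfm/BioMedLM")
model = GPT2LMHeadModel.from_pretrained("stanford-crfm/BioMedLM").to(device)

# Calling the tokenizer directly returns both input_ids and attention_mask
inputs = tokenizer("Photosynthesis is ", return_tensors="pt").to(device)

sample_output = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    pad_token_id=tokenizer.eos_token_id,  # avoids the "Setting pad_token_id to eos_token_id" notice
    do_sample=True,
    max_length=50,
    top_k=50,
)
print("Output:\n" + 100 * "-")
print(tokenizer.decode(sample_output[0], skip_special_tokens=True))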