transformers version: 3.0.2@sshleifer
Model I am using (Bert, XLNet ...): google/pegasus-cnn_dailymail
The problem arises when using:
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
model_name = 'google/pegasus-cnn_dailymail'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
Traceback:
RuntimeError Traceback (most recent call last)
~/projects/transformers/src/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
854 try:
--> 855 state_dict = torch.load(resolved_archive_file, map_location="cpu")
856 except Exception:
~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
584 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 585 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
586
~/anaconda3/envs/abstractive_summarizer/lib/python3.8/site-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
771 assert key in deserialized_objects
--> 772 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
773 if offset is not None:
RuntimeError: unexpected EOF, expected 10498989 more bytes. The file might be corrupted.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-1-1ae6eb884edd> in <module>
7 model_name = 'google/pegasus-cnn_dailymail'
8 tokenizer = PegasusTokenizer.from_pretrained(model_name)
----> 9 model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
~/projects/transformers/src/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
855 state_dict = torch.load(resolved_archive_file, map_location="cpu")
856 except Exception:
--> 857 raise OSError(
858 "Unable to load weights from pytorch checkpoint file. "
859 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Works for me in torch 1.5.1 and torch 1.6.
Maybe this is a one-off S3 failure?
Can anybody else replicate?
from transformers import PegasusForConditionalGeneration
model_name = 'google/pegasus-cnn_dailymail'
model = PegasusForConditionalGeneration.from_pretrained(model_name)
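For anyone replicating: once the checkpoint loads without the EOF error, a quick generation call confirms the weights are actually usable. This is a minimal sketch assuming the tokenizer __call__ / generate API of transformers >= 3.0; the sample text is arbitrary.

from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'google/pegasus-cnn_dailymail'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

# Arbitrary sample text, only used to check that the loaded weights generate something sensible.
src_text = ["PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions."]
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))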
I set force_download=True and it worked. Thanks!
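For anyone hitting the same "unexpected EOF" error: the fix that worked here is forcing a re-download so the truncated cached checkpoint gets replaced. A minimal sketch, using the standard force_download argument of from_pretrained:

from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = 'google/pegasus-cnn_dailymail'
# force_download=True ignores the (possibly corrupted) cached file and fetches a fresh copy.
tokenizer = PegasusTokenizer.from_pretrained(model_name, force_download=True)
model = PegasusForConditionalGeneration.from_pretrained(model_name, force_download=True)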
Can you describe in detail how you solved the problem?
Just upgrading my PyTorch and TensorFlow versions solved the problem for me.
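For reference, a quick way to confirm which versions ended up installed after the upgrade (nothing model-specific, just a sanity check):

import torch
import transformers

# Print the installed versions to verify the upgrade took effect.
print(torch.__version__)
print(transformers.__version__)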