Transformers: pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py error

Created on 21 Nov 2018 · 6 comments · Source: huggingface/transformers

AttributeError: 'BertForPreTraining' object has no attribute 'global_step'

Most helpful comment

Hmm, I will see if I can let people import any kind of TF model in PyTorch; that's a bit risky, so it has to be done properly.
In the meantime, you can add `global_step` to the list at line 53 of convert_tf_checkpoint_to_pytorch.py.

All 6 comments

Maybe some additional information could help me help you?

Initialize PyTorch weight ['cls', 'seq_relationship', 'output_weights']
Skipping cls/seq_relationship/output_weights/adam_m
Skipping cls/seq_relationship/output_weights/adam_v
Traceback (most recent call last):
File "/home/tiandan.cxj/python/model_serving_python/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/tiandan.cxj/python/model_serving_python/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/tiandan.cxj/platform/pytorch_BERT/pytorch-pretrained-BERT/pytorch_pretrained_bert/__main__.py", line 19, in <module>
convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT)
File "/home/tiandan.cxj/platform/pytorch_BERT/pytorch-pretrained-BERT/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 69, in convert_tf_checkpoint_to_pytorch
pointer = getattr(pointer, l[0])
File "/home/tiandan.cxj/python/model_serving_python/lib/python3.5/site-packages/torch/nn/modules/module.py", line 518, in __getattr__
type(self).__name__, name))
AttributeError: 'BertForPreTraining' object has no attribute 'global_step'

Hmm, I will see if I can let people import any kind of TF model in PyTorch; that's a bit risky, so it has to be done properly.
In the meantime, you can add `global_step` to the list at line 53 of convert_tf_checkpoint_to_pytorch.py.
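The suggested fix can be sketched as follows. This is a hypothetical, simplified version of the skip logic in convert_tf_checkpoint_to_pytorch.py (the names `SKIP_PATTERNS` and `should_skip` are illustrative, not from the library): TF checkpoint variables such as the Adam optimizer slots and the training step counter have no counterpart in the PyTorch model, so the converter must skip them rather than call `getattr` on the model.

```python
# Illustrative sketch of the skip list mentioned above; names are
# hypothetical, not the library's actual identifiers.
# "adam_v" and "adam_m" are Adam optimizer slot variables, and
# "global_step" is the training step counter -- none of these exist
# on BertForPreTraining, hence the AttributeError when not skipped.
SKIP_PATTERNS = ("adam_v", "adam_m", "global_step")  # fix: add "global_step"

def should_skip(tf_variable_name: str) -> bool:
    """Return True if a TF checkpoint variable has no PyTorch equivalent."""
    # TF variable names are slash-separated scopes, e.g.
    # "cls/seq_relationship/output_weights/adam_m".
    parts = tf_variable_name.split("/")
    return any(part in SKIP_PATTERNS for part in parts)
```

With `should_skip` returning True for `"global_step"`, the conversion loop would skip that variable instead of traversing the model attributes and raising.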

@thomwolf sir, I am facing the same issue, and it isn't resolved. How do I convert my fine-tuned pretrained model to PyTorch?

export BERT_BASE_DIR=/home/dell/backup/NWP/bert-base-uncased/bert_tensorflow_e100

pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \
  $BERT_BASE_DIR/model.ckpt-100 \
  $BERT_BASE_DIR/bert_config.json \
  $BERT_BASE_DIR/pytorch_model.bin

Traceback (most recent call last):
  File "/home/dell/Downloads/Downloads/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/dell/Downloads/Downloads/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/__main__.py", line 19, in <module>
    convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT)
  File "/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 69, in convert_tf_checkpoint_to_pytorch
    pointer = getattr(pointer, l[0])
  File "/home/dell/backup/bert_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 535, in __getattr__
    type(self).__name__, name))
AttributeError: 'BertForPreTraining' object has no attribute 'global_step'

Sir, how do I resolve this issue?
Thanks.

Thanks @thomwolf sir, it was resolved.

I added `global_step` to the skip list in modeling.py, but I am still facing the error. Am I missing something?
