Transformers: Import error in example script `run_language_modeling.py`

Created on 26 Mar 2020 · 5 comments · Source: huggingface/transformers

🐛 Bug

Information

Model I am using (Bert, XLNet ...): RobertaForMaskedLM

Language I am using the model on (English, Chinese ...): English

The problem arises when using:

  • [x] the official example scripts: (give details below)
  • [ ] my own modified scripts: (give details below)

The task I am working on is:

  • [ ] an official GLUE/SQuAD task: (give the name)
  • [ ] my own task or dataset: (give details below)

To reproduce

Steps to reproduce the behavior:

  1. pip install transformers
  2. Run the example script run_language_modeling.py


Error message:

Traceback (most recent call last):
  File "run_language_modeling.py", line 42, in <module>
    from transformers import (
ImportError: cannot import name 'MODEL_WITH_LM_HEAD_MAPPING'
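As a quick diagnostic for errors like this, a small helper can probe whether a module is importable and actually exposes a given symbol (`can_import` is a hypothetical name, not part of transformers):

```python
import importlib

def can_import(module, name):
    """Return True if `module` is importable and exposes `name`."""
    try:
        mod = importlib.import_module(module)
    except ImportError:
        return False
    return hasattr(mod, name)

# e.g. probe for the symbol the example script needs:
# can_import("transformers", "MODEL_WITH_LM_HEAD_MAPPING")
print(can_import("os", "path"))          # stdlib sanity check: present
print(can_import("os", "no_such_name"))  # absent attribute
```

This makes it easy to see whether the installed version exports the name at the top level or only from a submodule.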

Expected behavior

The script should run.

Environment info

  • transformers version: 2.5.1
  • Platform: Ubuntu 18.04
  • Python version: 3.7
  • PyTorch version (GPU?): 1.4.0
  • Tensorflow version (GPU?): na
  • Using GPU in script?: y
  • Using distributed or parallel set-up in script?: n

Note

The workaround is to use

from transformers.modeling_auto import MODEL_WITH_LM_HEAD_MAPPING
from transformers.file_utils import WEIGHTS_NAME

Can you please update the example script? It is confusing ...

Most helpful comment

You need to upgrade your version of transformers (to 2.6), or better, to install from source.

All 5 comments

You need to upgrade your version of transformers (to 2.6), or better, to install from source.
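To check whether an installed version meets the 2.6 floor before running the script, a minimal sketch (assuming plain `major.minor.patch` version strings; the helper names are mine, not from the library):

```python
def version_tuple(v):
    # "2.5.1" -> (2, 5, 1); pre-release suffixes are not handled
    return tuple(int(x) for x in v.split(".")[:3])

def is_new_enough(installed, required="2.6.0"):
    return version_tuple(installed) >= version_tuple(required)

print(is_new_enough("2.5.1"))   # the reporter's version: too old
print(is_new_enough("2.10.0"))  # tuple comparison handles 10 > 6 correctly
```

Comparing tuples rather than raw strings matters here: as strings, "2.10.0" would sort before "2.6.0".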

I just pulled the huggingface/transformers-tensorflow-gpu:2.10.0 docker image, went to the examples/language-modeling/ folder and ran the following, and I got the same error:

python3 run_language_modeling.py --output_dir=/app/data --model_type=distilbert --model_name_or_path=distilbert-base-uncased --do_train --train_data_file=/app/data/train_data.txt  --do_eval --eval_data_file=/app/data/eval_data.txt  --mlm

Haven't tried the workaround above yet.

Steps:

  • docker run -it -v $(pwd)/data:/app/data huggingface/transformers-tensorflow-gpu:2.10.0
  • cd workspace/examples/language-modeling/
  • try to run example command using python3

python3 -m pip show transformers reports 2.10.0 is installed.

I get the issue (the master branch is checked out in the docker build); it just seems like it'd be cool for there to be a simpler way to run the examples in docker. If you wanted to use the 2.9.0 image, you'd have to pull the image and have your script first check out master as of the 2.9.0 tag and then install from source, right?

It'd be a nice feature if the docker images could run the examples without modification.

I get the same issue when I pip install transformers. When I downgrade to 2.6.0, it can't import CONFIG_MAPPING. With anything from 2.7.0 to 2.10.0, I get the MODEL_WITH_LM_HEAD_MAPPING error.

Okay, I got it to work for 2.10.0. I just had to reinstall PyTorch:

pip3 install torch
