Transformers: Checking that the LM actually trained

Created on 9 Apr 2020 · 8 comments · Source: huggingface/transformers

I have trained a GPT-2 model from scratch following this post: https://huggingface.co/blog/how-to-train .
In step 4, where the author checks that the trained model actually works, he uses the
"fill-mask" pipeline, but that only works for models trained with a masked language modeling objective.
Is there something similar to "fill-mask" that I could use for a causal language model like GPT-2?

wontfix

All 8 comments

Yes: simply model.generate() (no need for a Pipeline in that case)

cc @patrickvonplaten

I'd check whether the GPT-2 model works by sampling from a simple prompt, e.g.:

output = model.generate(tokenizer.encode('The president', return_tensors='pt'), do_sample=True)
tokenizer.decode(output[0])
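
For a model trained from scratch, a self-contained version of this check might look like the sketch below, assuming the model and tokenizer were saved with save_pretrained() (the ./output directory is hypothetical):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# './output' is a hypothetical directory where save_pretrained() stored
# the trained model and tokenizer
tokenizer = GPT2Tokenizer.from_pretrained('./output')
model = GPT2LMHeadModel.from_pretrained('./output')

input_ids = tokenizer.encode('The president', return_tensors='pt')
# do_sample=True samples from the model's distribution instead of decoding greedily
output = model.generate(input_ids, do_sample=True, max_length=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))

If the model trained at all, the continuation should be roughly grammatical text rather than repeated or random tokens.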

Thanks for clarifying! I was considering sending a PR for a GenerationPipeline under transformers.pipeline.

I have a branch that implements a GenerationPipeline, which already works for GPT models.

The initial version of GenerationPipeline can be found in the branch's pipelines module, where I've registered it to the pipeline function using gpt2 as the default.

The implementation is based on the approach taken in run_generation.py, which means the forward pass uses the model.generate() method explained by @julien-c and @patrickvonplaten above.
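
The core idea is small enough to sketch. This is not the branch's actual code; the class name, defaults, and structure below are illustrative only:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

class TextGenerator:
    """Illustrative wrapper: encode prompt -> model.generate() -> decode."""

    def __init__(self, model_name='gpt2', length=40):
        self.tokenizer = GPT2Tokenizer.from_pretrained(model_name)
        self.model = GPT2LMHeadModel.from_pretrained(model_name)
        self.length = length

    def __call__(self, prompt):
        input_ids = self.tokenizer.encode(prompt, return_tensors='pt')
        # Sampling, as in run_generation.py, rather than greedy decoding
        output = self.model.generate(input_ids, max_length=self.length, do_sample=True)
        return [self.tokenizer.decode(o, skip_special_tokens=True) for o in output]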

So far, the branch's implementation works smoothly for openai-gpt and gpt2.

Sample code:

# Pip install
# If you're using Google Colab, make sure to reset runtime after installing
!pip install -e git+git://github.com/enzoampil/transformers.git@generation_pipeline#egg=transformers

# Pipeline uses `gpt2` by default
from transformers import pipeline
gpt = pipeline('generation', num_return_sequences=1, length=40)
gpt("You look great")
# ['You look great, me!" he says. "There\'s nothing wrong with that, it\'s just I wanted a bit of attention so I had to go to work. I had to back down."\n']

However, the module still doesn't work with other language models like xlm, xlnet, and transfo-xl.

I will do a root cause analysis and send a PR as soon as I get this working for the rest of the language models that GenerationPipeline should support (i.e. those runnable from run_generation.py).

For more details, you can check out this colab notebook, which shows the gpt models working so far, and the rest of the models not working in the later sections.

[UPDATE] The issues above have been resolved and I'm in the process of sending a PR.

Google Colab tutorial here for running GenerationPipeline with the following language models:

  1. OpenAI GPT
  2. OpenAI GPT-2
  3. Transformer-XL
  4. XLM
  5. XLNet
  6. T5
  7. CTRL
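
Assuming the branch's pipeline() accepts a model identifier the way the released pipelines do (an assumption; only the gpt2 default is shown above), switching models might look like:

from transformers import pipeline

# 'generation' is the task name registered in the branch; 'xlnet-base-cased'
# is a standard model id, but support for it here is an assumption
xlnet = pipeline('generation', model='xlnet-base-cased', length=40)
xlnet("The president")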

Your PR looks very nice so far :-) I will take a look early next week!

Thanks!

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

