Transformers: run evaluation after every epoch in Trainer

Created on 27 May 2020 · 3 comments · Source: huggingface/transformers

🚀 Feature request

With the current Trainer implementation, trainer.train(...) is called first, followed by trainer.evaluate(...). It would be nice if the user could pass a flag like --run_eval (or something similar) to run evaluation after every epoch, so that users can see how the model performs on the validation set as training progresses. In many settings this is the general norm (run evaluation after every epoch).

All 3 comments

You should use --evaluate_during_training, which should do mostly what you're looking for.

@prajjwal1, you should be able to achieve this with --evaluate_during_training, provided you set --save_steps to number_of_samples / batch_size. However, I'm currently having trouble getting this to work with both run_language_modeling.py and run_glue.py, as I describe in https://github.com/huggingface/transformers/issues/4630. Any ideas, @julien-c? Thanks in advance.
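To make that workaround concrete, here is a hedged sketch using the Trainer directly rather than the example scripts. The dataset size and batch size are hypothetical, and since the step counter that gates in-training evaluation has varied across Trainer versions, both save_steps and logging_steps are set to the per-epoch step count:

```python
from transformers import Trainer, TrainingArguments

# Hypothetical figures: 8,000 training samples at batch size 32
# -> 250 optimizer steps per epoch (number_of_samples / batch_size).
steps_per_epoch = 8000 // 32

training_args = TrainingArguments(
    output_dir="./out",
    per_gpu_train_batch_size=32,
    evaluate_during_training=True,  # the flag discussed in this thread
    save_steps=steps_per_epoch,     # as suggested in the comment above
    logging_steps=steps_per_epoch,  # some versions key evaluation off this instead
)
trainer = Trainer(
    model=model,                  # placeholders, as in the sketch above
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()  # should now evaluate on eval_dataset roughly once per epoch
```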

There's a problem with MNLI, though. In the example script, the arguments are changed from mnli to mnli-mm, so with the current implementation, evaluation after each epoch will run on matched MNLI rather than on the mismatched set.
