Transformers: breaking change

Created on 10 Sep 2019 · 5 comments · Source: huggingface/transformers

Great job, just in case it went unnoticed:
from revision 995e38b7af1aa325b994246e1bfcc7bf7c9b6b4f
to revision 2c177a87eb5faab8a0abee907ff75898b4886689

the examples are broken due to the changed order of parameters in
pytorch_transformers/modeling_bert.py

<     def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None,
<                 position_ids=None, head_mask=None):
<         outputs = self.bert(input_ids, position_ids=position_ids, token_type_ids=token_type_ids,
<                             attention_mask=attention_mask, head_mask=head_mask)
---
>     def forward(self, input_ids, attention_mask=None, token_type_ids=None,
>                 position_ids=None, head_mask=None, labels=None):
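A minimal sketch of why this reordering breaks callers (the `forward_old`/`forward_new` functions below are toy stand-ins, not the actual transformers code): any call site that passed `token_type_ids` and `attention_mask` positionally now silently binds them to the wrong parameters. Calling with keyword arguments is robust to this kind of reordering.

```python
# Toy stand-ins for the old and new parameter orders (not the real model code).

def forward_old(input_ids, token_type_ids=None, attention_mask=None, labels=None):
    """Old parameter order."""
    return {"token_type_ids": token_type_ids,
            "attention_mask": attention_mask,
            "labels": labels}

def forward_new(input_ids, attention_mask=None, token_type_ids=None, labels=None):
    """New parameter order: attention_mask and token_type_ids swapped."""
    return {"token_type_ids": token_type_ids,
            "attention_mask": attention_mask,
            "labels": labels}

ids, segs, mask = [101, 102], [0, 0], [1, 1]

# A positional call written against the old order:
old = forward_old(ids, segs, mask)   # binds as intended
new = forward_new(ids, segs, mask)   # segs now lands in attention_mask!
assert old["attention_mask"] == mask
assert new["attention_mask"] == segs  # silent misbinding, no error raised

# Keyword arguments survive the reordering unchanged:
safe = forward_new(ids, token_type_ids=segs, attention_mask=mask)
assert safe["attention_mask"] == mask
```

Nothing raises here, which is the dangerous part: the scripts keep running with segment ids fed in as the attention mask.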
wontfix

All 5 comments

Indeed. What do you mean by "examples"? The docstring examples?

Most of the examples folder, in particular run_swag.py and the LM fine-tuning scripts.

Just came here to say the same.

Indeed, I've fixed and cleaned up the examples in 8334993 (the LM fine-tuning examples are now replaced by run_lm_finetuning). I've also indicated more clearly which examples are not actively maintained and tested.

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

