Transformers: [Longformer] Output both local attentions and global attentions when `output_attentions=True` -> Good Second Issue

Created on 1 Oct 2020 · 4 comments · Source: huggingface/transformers

🚀 Feature request

Good Second Issue - A more advanced issue for contributors who want to dive deeper into Longformer's attention mechanism.

Longformer currently only outputs global attentions, which is suboptimal because users might be interested in the local attentions as well. I propose to change the `output_attentions` logic in Longformer as follows:

`attentions` should correspond to the "local" attentions, and we'll add a new output field `global_attentions` that contains the global attentions. This is consistent with the naming of `attention_mask` and `global_attention_mask` IMO and is the cleanest way to implement the feature.
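For illustration, the user-facing API would look roughly like the sketch below; the `global_attentions` field is the proposed one and does not exist yet, and the model name and mask are just examples:

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("Hello world", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # e.g. give the first token global attention

outputs = model(
    **inputs,
    global_attention_mask=global_attention_mask,
    output_attentions=True,
    return_dict=True,
)

local_attentions = outputs.attentions          # sliding-window ("local") attentions
global_attentions = outputs.global_attentions  # proposed new field for global attentions
```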

Implementing this feature means that Longformer will require its own `ModelOutput` classes:

- `BaseModelOutput` => `LongformerBaseModelOutput` or `BaseModelOutputWithGlobalAttention` (I prefer the first name though)
- `BaseModelOutputWithPooling` => ...
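A minimal sketch of what the new base class could look like; the import of `ModelOutput` from `transformers.file_utils` and the exact field set are assumptions, not a final design:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch

from transformers.file_utils import ModelOutput


@dataclass
class LongformerBaseModelOutput(ModelOutput):
    """Base output that exposes both local and global attention probabilities."""

    last_hidden_state: torch.FloatTensor = None
    hidden_states: Optional[Tuple[torch.FloatTensor]] = None
    # Local (sliding-window) attention probabilities, one tensor per layer.
    attentions: Optional[Tuple[torch.FloatTensor]] = None
    # Attention probabilities of the tokens with global attention, one tensor per layer.
    global_attentions: Optional[Tuple[torch.FloatTensor]] = None
```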

Also some tests will have to be adapted.
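For example, an adapted test could assert something like the following sketch, using a tiny random-weight config so nothing has to be downloaded; the `global_attentions` field is the proposed one and the config values are placeholders:

```python
import torch
from transformers import LongformerConfig, LongformerModel


def test_longformer_outputs_local_and_global_attentions():
    config = LongformerConfig(
        vocab_size=100,
        hidden_size=16,
        num_hidden_layers=2,
        num_attention_heads=2,
        intermediate_size=32,
        attention_window=4,
    )
    model = LongformerModel(config)
    model.eval()

    input_ids = torch.randint(0, config.vocab_size, (1, 16))
    global_attention_mask = torch.zeros_like(input_ids)
    global_attention_mask[:, 0] = 1  # make the first token attend globally

    with torch.no_grad():
        outputs = model(
            input_ids,
            global_attention_mask=global_attention_mask,
            output_attentions=True,
            return_dict=True,
        )

    # One local and one global attention tensor per layer.
    assert len(outputs.attentions) == config.num_hidden_layers
    assert len(outputs.global_attentions) == config.num_hidden_layers
```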

This is a slightly more difficult issue, so I'm happy to help with it. One should understand the difference between local and global attention and how Longformer's attention differs from, e.g., BERT's attention in general.

For more detail, check out the discussion here: https://github.com/huggingface/transformers/issues/5646

Good Second Issue


All 4 comments

I am working on a pull request to address this. I don't see any major challenge so far, but this made me realize how different attentions in BERT-like models and in Longformer are. Why not replace `attentions` in Longformer with `local_attentions`?

This means that the interface of Longformer would become incompatible with every other Transformer, but maybe it should be? I don't think there is a way to plug Longformer attentions into code that expects BERT-like attentions and get meaningful results, so users always have to write a special case for Longformer if they use it. As is, the risk is that they get bogus output and won't realize it until they carefully read the doc (which is not yet written).
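To make the mismatch concrete, here is an illustrative comparison with tiny random-weight models; the exact shape of Longformer's attention tensors is an implementation detail and may change across versions, the point is only that the attention matrices are not square as BERT-style code assumes:

```python
import torch
from transformers import BertConfig, BertModel, LongformerConfig, LongformerModel

# Tiny random-weight models, just to compare attention tensor shapes.
bert = BertModel(BertConfig(vocab_size=100, hidden_size=16, num_hidden_layers=1,
                            num_attention_heads=2, intermediate_size=32))
longformer = LongformerModel(LongformerConfig(vocab_size=100, hidden_size=16,
                                              num_hidden_layers=1, num_attention_heads=2,
                                              intermediate_size=32, attention_window=4))

input_ids = torch.randint(0, 100, (1, 16))
bert_attn = bert(input_ids, output_attentions=True, return_dict=True).attentions[0]
long_attn = longformer(input_ids, output_attentions=True, return_dict=True).attentions[0]

# BERT's attentions form a square (batch, heads, seq, seq) matrix; Longformer's
# last dimension is window/global sized, so code expecting a square matrix breaks.
print("bert attention shape:      ", tuple(bert_attn.shape))
print("longformer attention shape:", tuple(long_attn.shape))
```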

What are your thoughts on this @patrickvonplaten?

I have made the pull request.

I checked that the Longformer tests passed with my changes, and I added one more test to check the output of attention probabilities.

Quite stupidly, I made the pull request to the `master` branch; I am sorry about this. I left it as is to avoid duplicating pull requests for now. You can reject it and I will make a cleaner pull request from a separate branch.

Sorry to have been so super inactive on this issue :-/ I will find time to solve it in ~1 week :-). This issue is related as well: https://github.com/huggingface/transformers/pull/8007/files#r514633097.

No worries, there is no hurry on my side. Anyway, the issue is a little trickier than it looks because you guys have to decide how to encode attention probabilities when they are too large to be represented by a dense matrix. Let me know if there is anything I can do to help.
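For a sense of scale, here is a back-of-the-envelope comparison of dense vs. windowed storage for one layer's attention probabilities; the sequence length, window size, and head count are just illustrative Longformer-style defaults:

```python
# Rough memory estimate for one layer's attention probabilities,
# assuming float32, 12 heads, a 4096-token sequence and a 512-token local window.
seq_len, window, num_heads, bytes_per_float = 4096, 512, 12, 4

dense = num_heads * seq_len * seq_len * bytes_per_float          # full seq x seq matrix
windowed = num_heads * seq_len * (window + 1) * bytes_per_float  # one window per query token

print(f"dense per layer:    {dense / 2**20:.0f} MiB")    # ~768 MiB
print(f"windowed per layer: {windowed / 2**20:.0f} MiB")  # ~96 MiB
```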
