Practical-pytorch: Bahdanau Decoder Implementation

Created on 22 May 2017 · 6 comments · Source: spro/practical-pytorch

Hi @spro,

Thanks for the really great explanation of the decoder, especially the Bahdanau decoder. But I'm a little bit confused about this line in the __init__ function of the BahdanauAttnDecoderRNN class:

self.attn = GeneralAttn(hidden_size)

I can't find any class that defines GeneralAttn. Is this a built-in class? Could you please elaborate on this? Thanks again!

All 6 comments

Good catch. It was originally split into three separate attention modules (GeneralAttn, DotAttn, ConcatAttn) instead of one module with an argument to choose the strategy. Furthermore, they actually used the "concat" strategy, so this line should be self.attn = Attn("concat", hidden_size)
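For reference, a unified Attn module along the lines described above might look like the sketch below, with the scoring strategy ("dot", "general", or "concat") chosen by an argument. This is an assumption-laden illustration, not the notebook's exact code; the class name, argument order, and tensor shapes are mine:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attn(nn.Module):
    """Hypothetical unified attention module: one class with a `method`
    argument instead of separate GeneralAttn / DotAttn / ConcatAttn classes.
    Shapes here are simplified (no batch dimension)."""

    def __init__(self, method, hidden_size):
        super().__init__()
        self.method = method
        self.hidden_size = hidden_size
        if method == "general":
            # score(h, s) = h . (W s)
            self.attn = nn.Linear(hidden_size, hidden_size)
        elif method == "concat":
            # score(h, s) = v . tanh(W [h; s])
            self.attn = nn.Linear(hidden_size * 2, hidden_size)
            self.v = nn.Parameter(torch.rand(hidden_size))

    def score(self, hidden, encoder_output):
        # hidden, encoder_output: (hidden_size,)
        if self.method == "dot":
            return hidden.dot(encoder_output)
        elif self.method == "general":
            return hidden.dot(self.attn(encoder_output))
        else:  # "concat"
            energy = torch.tanh(self.attn(torch.cat((hidden, encoder_output))))
            return self.v.dot(energy)

    def forward(self, hidden, encoder_outputs):
        # hidden: (hidden_size,); encoder_outputs: (seq_len, hidden_size)
        scores = torch.stack([self.score(hidden, eo) for eo in encoder_outputs])
        # normalize scores into attention weights over the source positions
        return F.softmax(scores, dim=0)
```

With a module like this, the decoder's __init__ would just pick the strategy by name, e.g. self.attn = Attn("concat", hidden_size), which is why the three separate classes were folded into one.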

Cool, thanks for the clarification!

Can you please change that line on the notebook?

Still not changed. I hope somebody can fix it.

#119 fixes this and some more issues with the Bahdanau decoder.

@anantzoid it's still not fixed in the tutorial.
