Keras: How to mask on loss function in Keras using Tensorflow backend

Created on 1 Nov 2017 · 1 comment · Source: keras-team/keras

I am trying to do a sequence-to-sequence task with an LSTM in Keras using the TensorFlow backend. The inputs are English sentences of variable length. To construct a dataset with the 2-D shape [batch_number, max_sentence_length], I append an EOF marker to each sentence and pad it to the maximum length with a placeholder character, e.g. "#". Each character is then transformed into a one-hot vector, so the dataset has the 3-D shape [batch_number, max_sentence_length, character_number]. After the LSTM encoder and decoder layers, the softmax cross entropy between the output and the target is computed.
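For concreteness, here is a minimal sketch of that preprocessing (the toy corpus, the "#" pad character, and the "$" EOF marker are illustrative, not from my actual code):

```python
import numpy as np

sentences = ["hello", "hi"]                           # toy corpus
chars = sorted(set("".join(sentences)) | {"#", "$"})  # "#" = pad, "$" = EOF
char_to_idx = {c: i for i, c in enumerate(chars)}

max_len = max(len(s) for s in sentences) + 1          # +1 for the EOF marker
padded = [s + "$" + "#" * (max_len - len(s) - 1) for s in sentences]

# One-hot encode: shape [batch_number, max_sentence_length, character_number]
data = np.zeros((len(padded), max_len, len(chars)), dtype=np.float32)
for i, s in enumerate(padded):
    for t, c in enumerate(s):
        data[i, t, char_to_idx[c]] = 1.0
```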

To eliminate the effect of the padding on model training, masking can be applied to both the input and the loss function. Masking the input in Keras can be done with "layers.core.Masking". In TensorFlow, masking the loss function can be done by weighting the per-timestep losses with a mask built from the true sequence lengths.
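A minimal sketch of that pattern (TF 1.x style; the names logits, targets, and sequence_lengths and the toy dimensions are placeholders):

```python
import tensorflow as tf

time_steps, num_chars = 10, 50  # assumed toy dimensions
logits = tf.placeholder(tf.float32, [None, time_steps, num_chars])
targets = tf.placeholder(tf.float32, [None, time_steps, num_chars])  # one-hot
sequence_lengths = tf.placeholder(tf.int32, [None])  # true length per sentence

# Per-timestep cross entropy: shape [batch, time]
losses = tf.nn.softmax_cross_entropy_with_logits(labels=targets, logits=logits)
# 1.0 for real timesteps, 0.0 for padding
mask = tf.sequence_mask(sequence_lengths, maxlen=time_steps, dtype=tf.float32)
# Zero out padded timesteps, then average over the real ones only
loss = tf.reduce_sum(losses * mask) / tf.reduce_sum(mask)
```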

However, I can't find a way to do this in Keras, since a user-defined loss function in Keras only accepts the parameters y_true and y_pred. So how can I pass the true sequence lengths to the loss function and apply the mask?
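The closest workaround I can think of is to derive the mask from y_true itself, since the pad character occupies a known one-hot index; a sketch (the pad index PAD_IDX is an assumption, and the argument order follows the Keras 2.1+ backend signature, which takes the target first):

```python
from keras import backend as K

PAD_IDX = 0  # hypothetical index of the "#" pad character in the one-hot axis

def masked_categorical_crossentropy(y_true, y_pred):
    # A timestep is real iff its one-hot target is not the pad character.
    mask = 1.0 - y_true[:, :, PAD_IDX]                   # [batch, time]
    losses = K.categorical_crossentropy(y_true, y_pred)  # [batch, time]
    # Mean over the unmasked timesteps of each sample
    return K.sum(losses * mask, axis=-1) / K.sum(mask, axis=-1)
```

But this only works because the pad character happens to be a target class; it still doesn't let me pass arbitrary sequence lengths in.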

Besides, I found a function "_weighted_masked_objective(fn)" in keras/engine/training.py. Its docstring says "Adds support for masking and sample-weighting to an objective function." But it seems that the function can only accept fn(y_true, y_pred). Is there a way to use this function to solve my problem? Thanks in advance.
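For what it's worth, the supported route that appears to use that machinery is sample_weight_mode="temporal" together with a per-timestep weight array, which effectively masks the loss; a toy sketch with assumed dimensions and a made-up model:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

batch_number, max_sentence_length, character_number = 2, 6, 8  # toy sizes

model = Sequential([
    LSTM(16, return_sequences=True,
         input_shape=(max_sentence_length, character_number)),
    TimeDistributed(Dense(character_number, activation="softmax")),
])
# "temporal" makes Keras expect one weight per timestep, not one per sample.
model.compile(optimizer="adam", loss="categorical_crossentropy",
              sample_weight_mode="temporal")

# Weight 1 for real characters (including EOF), 0 for padding.
lengths = np.array([6, 3])  # hypothetical true lengths of the two sentences
sample_weight = np.zeros((batch_number, max_sentence_length))
for i, n in enumerate(lengths):
    sample_weight[i, :n] = 1.0

# Random stand-in data with the shapes described above
x = np.random.rand(batch_number, max_sentence_length, character_number)
y = np.eye(character_number)[np.random.randint(
    character_number, size=(batch_number, max_sentence_length))]
model.fit(x, y, sample_weight=sample_weight, epochs=1)
```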

