I am trying to do a sequence-to-sequence task using LSTM in Keras with the TensorFlow backend. The inputs are English sentences of variable length. To construct a dataset with the 2-D shape [batch_number, max_sentence_length], I add an "EOF" marker at the end of each sentence and pad each sentence with enough placeholder characters, e.g. "#". Each character in a sentence is then transformed into a one-hot vector, so the dataset has the 3-D shape [batch_number, max_sentence_length, character_number]. After the LSTM encoder and decoder layers, the softmax cross entropy between the output and the target is computed.
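For concreteness, a toy sketch of this preprocessing; the sentences, the "$" used as the EOF marker, and the "#" padding character are illustrative assumptions:

```python
import numpy as np

sentences = ["hi", "hello"]
chars = sorted(set("".join(sentences)) | {"$", "#"})  # "$" = EOF, "#" = padding
char_to_idx = {c: i for i, c in enumerate(chars)}

max_len = max(len(s) for s in sentences) + 1          # +1 for the EOF marker
padded = [s + "$" + "#" * (max_len - len(s) - 1) for s in sentences]

# One-hot encode: shape [batch_number, max_sentence_length, character_number]
data = np.zeros((len(padded), max_len, len(chars)), dtype=np.float32)
for i, s in enumerate(padded):
    for t, c in enumerate(s):
        data[i, t, char_to_idx[c]] = 1.0
```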
To eliminate the effect of padding on model training, masking can be applied to both the input and the loss function. Masking the input in Keras can be done with "layers.core.Masking". In TensorFlow, masking the loss function can be done as follows:
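A sketch of that approach, assuming logits and targets of shape [batch_number, max_len, character_number] and a sequence_lengths tensor holding each sentence's true length:

```python
import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, None, None])
targets = tf.placeholder(tf.float32, [None, None, None])
sequence_lengths = tf.placeholder(tf.int32, [None])
max_len = tf.shape(targets)[1]

# Per-time-step cross entropy: shape [batch_number, max_len]
step_loss = tf.nn.softmax_cross_entropy_with_logits(labels=targets,
                                                    logits=logits)

# 1.0 for real time steps, 0.0 for padded ones
mask = tf.sequence_mask(sequence_lengths, maxlen=max_len, dtype=tf.float32)

# Average the loss over the unpadded positions only
loss = tf.reduce_sum(step_loss * mask) / tf.reduce_sum(mask)
```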

However, I can't find a way to do this in Keras, since a user-defined loss function in Keras only accepts the parameters y_true and y_pred. So how can I pass the true sequence_lengths to the loss function and apply the mask?
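One common workaround (a sketch, not from the original post) is to sidestep sequence_lengths entirely and recover the mask from y_true inside the loss; this assumes the padded time steps of y_true are all-zero vectors rather than one-hot "#" vectors:

```python
import keras.backend as K

def masked_categorical_crossentropy(y_true, y_pred):
    # 1.0 where y_true has any nonzero entry (a real character),
    # 0.0 where the time step is all-zero padding
    mask = K.cast(K.any(K.not_equal(y_true, 0.0), axis=-1), K.floatx())
    loss = K.categorical_crossentropy(y_true, y_pred)
    # Average over the real time steps only
    return K.sum(loss * mask) / K.maximum(K.sum(mask), 1.0)
```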
Besides, I found a function "_weighted_masked_objective(fn)" in keras/engine/training.py. Its docstring reads "Adds support for masking and sample-weighting to an objective function." But it seems that the function can only accept fn(y_true, y_pred). Is there a way to use this function to solve my problem? Thanks in advance.
This problem is solved as shown in "https://stackoverflow.com/questions/47057361/keras-using-tensorflow-backend-masking-on-loss-function"
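The gist of that answer, as I understand it: when the input passes through a Masking layer, Keras propagates the mask through the model and applies it to the loss automatically (via the weighted masked objective mentioned above), so nothing extra needs to be passed to the loss function. A minimal sketch, assuming padded time steps are all-zero vectors and with illustrative layer sizes:

```python
from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense, TimeDistributed

max_sentence_length, character_number = 20, 50  # illustrative sizes

model = Sequential()
# Skips time steps whose features all equal mask_value; the mask is
# propagated to the loss, so padded positions do not contribute to it
model.add(Masking(mask_value=0.0,
                  input_shape=(max_sentence_length, character_number)))
model.add(LSTM(64, return_sequences=True))
model.add(TimeDistributed(Dense(character_number, activation='softmax')))
model.compile(optimizer='adam', loss='categorical_crossentropy')
```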