http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf
It first maps a sequence into a vector, and then uses this vector to generate another sequence.
This is supported out of the box; see the return_sequences parameter in any recurrent layer: http://keras.io/layers/recurrent/
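For reference, a minimal sketch of what return_sequences controls (the layer sizes and input shape here are arbitrary placeholders, not values from the paper):

```python
from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
# return_sequences=True: emit one output vector per timestep (shape: 10 x 128)
model.add(LSTM(128, return_sequences=True, input_shape=(10, 32)))
# return_sequences=False (the default): emit only the last timestep's vector (shape: 128)
model.add(LSTM(128, return_sequences=False))
```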
I think that with the current LSTM layer implemented in Keras it is not possible to implement the seq2seq algorithm (the one reported in the cited paper). Some of the missing features are:
1- The ability to initialize the second LSTM with the hidden state from the last step of the first LSTM.
2- For the second LSTM, the ability to feed each step's output back in as the next step's input.
Therefore, you would have to implement your own seq2seq layer.
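A common workaround with standard Keras layers is the sketch below. It is only an approximation of the paper's architecture, since it repeats the encoder's final output rather than linking hidden states or feeding decoder outputs back in; all dimensions are placeholders:

```python
from keras.models import Sequential
from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

# Placeholder sizes, not taken from the paper
input_len, input_dim = 20, 64
output_len, output_dim = 15, 64

model = Sequential()
# Encoder: compress the input sequence into a single fixed-size vector
model.add(LSTM(256, input_shape=(input_len, input_dim)))
# Repeat that vector once per output timestep to drive the decoder
model.add(RepeatVector(output_len))
# Decoder: unroll the repeated vector into an output sequence
model.add(LSTM(256, return_sequences=True))
# Project each timestep onto the output vocabulary
model.add(TimeDistributed(Dense(output_dim, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
```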
@melonista - Regarding your 2nd point - can we solve this using Graph instead of Sequential? That way we would have two inputs and one output: the first input would feed the encoder, and the second input and the output would be used for the decoder?
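That is the general idea. In newer Keras versions the functional API (Model) replaces Graph and supports two inputs and one output directly. A rough sketch of that setup (teacher forcing), assuming a Keras version with return_state and initial_state; all names and sizes are placeholders:

```python
from keras.models import Model
from keras.layers import Input, LSTM, Dense

# Placeholder sizes
latent_dim, num_encoder_tokens, num_decoder_tokens = 256, 71, 93

# Encoder: discard the per-step outputs, keep only the final hidden/cell states
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: initialized with the encoder states, fed the target sequence as its input
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_outputs = LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
```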
@hujiewang Check this out: https://github.com/farizrahman4u/seq2seq
@farizrahman4u is there any reason why the encoder-decoder architecture in your _Sequence to Sequence Learning with Keras_ is not compatible with TensorFlow?