In the following minimal example, TimeDistributedDense() layers work, but TimeDistributed(Dense()) layers do not:
from keras.models import Sequential
from keras.layers import Dense, TimeDistributed, TimeDistributedDense, Convolution1D, MaxPooling1D
import numpy as np
num_examples = 128
num_times = 200
num_features = 5
inputs = np.random.random((num_examples,num_times,num_features))
targets = np.random.random((num_examples, num_times // 2, 1))  # last dim matches the final output_dim=1
model = Sequential()
model.add(Convolution1D(input_dim=num_features,  # input_length=num_times,
                        nb_filter=5, filter_length=3,
                        activation='relu', border_mode='same'))
model.add(MaxPooling1D(pool_length=2, border_mode='valid'))
## This fails:
model.add(TimeDistributed(Dense(output_dim=16, activation='tanh')))
model.add(TimeDistributed(Dense(output_dim=1, activation='linear')))
## This works:
# model.add(TimeDistributedDense(output_dim=16, activation='tanh'))
# model.add(TimeDistributedDense(output_dim=1, activation='linear'))
model.compile(optimizer='rmsprop', loss='mse')
model.fit(x=inputs, y=targets,
          batch_size=32,
          nb_epoch=10)
While I can get TimeDistributed(Dense()) to work by specifying input_length in the first Convolution1D layer, that is not feasible in my application, which works with variable-length time series.
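For reference, the input_length workaround mentioned above looks like the following. This is a sketch under the legacy Keras 1.x API used in this thread (Convolution1D, nb_filter, border_mode, etc.); these layer names and arguments have since changed in newer Keras, so it will not run against current releases. The only change from the failing example is passing input_length=num_times, which fixes the time dimension and lets TimeDistributed(Dense()) infer its input shape, at the cost of only supporting fixed-length sequences.

```python
from keras.models import Sequential
from keras.layers import Dense, TimeDistributed, Convolution1D, MaxPooling1D
import numpy as np

num_examples, num_times, num_features = 128, 200, 5
inputs = np.random.random((num_examples, num_times, num_features))
targets = np.random.random((num_examples, num_times // 2, 1))

model = Sequential()
model.add(Convolution1D(input_dim=num_features,
                        input_length=num_times,  # fixed length: the workaround
                        nb_filter=5, filter_length=3,
                        activation='relu', border_mode='same'))
model.add(MaxPooling1D(pool_length=2, border_mode='valid'))
# With input_length fixed, these wrapped Dense layers build correctly:
model.add(TimeDistributed(Dense(output_dim=16, activation='tanh')))
model.add(TimeDistributed(Dense(output_dim=1, activation='linear')))
model.compile(optimizer='rmsprop', loss='mse')
model.fit(x=inputs, y=targets, batch_size=32, nb_epoch=10)
```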
I just tried your example and it worked for me. I tried it yesterday and it didn't. My guess is an update changed things. Try it again?
Hooray! It works now. Thanks.