Hi,
I've built a layer called Dissimilarity which returns:

```python
K.sqrt(K.sum(K.square(inputs[1] - concatenated), axis=1, keepdims=True))
```

but the following error is raised when building the Model:

```
Output tensors to a Model must be Keras tensors. Found: <__main__.Dissimilarity object at 0x10f444908>
```
The test code is:
```python
A = Input(shape=(3, None, None))
B = Input(shape=(3, None, None))
crop_right_bound = Boundary(1)(A)
crop_left = Crop_Side(3, 0)(B)
dis = Dissimilarity([crop_left, crop_right_bound])
patch_compare = Model([part_A_input, part_B_input], dis)
```
Do I need to configure anything else in my implementation of the layer?
+1, I'd like to know the answer to this as well. I ran into the same problem implementing a simple custom layer that isn't even at the end of the network, and it fails:
```python
class Bilinear(Layer):
    """Computes x^T (z I) y row-wise for a batch."""
    def __init__(self, **kwargs):
        super(Bilinear, self).__init__(**kwargs)

    def build(self, input_shape):
        assert len(input_shape) == 3, "Input should be shape (batch_size, 3, embed_size)"
        embed_dim = input_shape[2]  # 0 is the batch dim
        self.trainable_weights = []

    def call(self, tensor, mask=None):
        x = tensor[:, 0, :]
        y = tensor[:, 1, :]
        z = tensor[:, 2, :]
        xTz = Merge(mode='mul')([x, z])
        xTzy = K.batch_dot(xTz, y, axes=1)
        return xTzy

    def get_output_shape_for(self, input_shape):
        return (input_shape[0], 1)
```
```python
vocab_size = 100
target_embed_size = 100

x_input = Input(shape=(1,), dtype='int32', name='x_input')
y_input = Input(shape=(1,), dtype='int32', name='y_input')
z_input = Input(shape=(target_embed_size,), dtype='float', name='bilinear_input')

x_embed = Embedding(input_dim=vocab_size, output_dim=target_embed_size)(x_input)
y_embed = Embedding(input_dim=vocab_size, output_dim=target_embed_size)(y_input)
z_embed = K.expand_dims(z_input, dim=1)

xyz = Merge(mode='concat', concat_axis=1)([x_embed, y_embed, z_expand])
score = Bilinear()(xyz)
output = Activation('sigmoid')(score)
# output = Activation('linear')(score)  # trivial identity output layer doesn't work
# output = Lambda(lambda x: x, lambda in_shape: in_shape)(score)  # another identity output that doesn't work

bi = Model(input=[x_input, y_input, z_input], output=[output])
bi.compile(optimizer=RMSprop(), loss='accuracy')
```
Throws error:

```
Exception: Output tensors to a Model must be Keras tensors. Found: Tensor("Sigmoid_1:0", shape=(?, ?), dtype=float32)
```
@teffland Three potential problems:
First, I think you're misusing Merge (doc). It should be:
```python
from keras.layers import merge

# in def call:
xTz = merge([x, z], mode='mul')

xyz = merge([x_embed, y_embed, z_expand], mode='concat', concat_axis=1)
# z_expand doesn't exist in your code; typo?
```
Second, the reason you get the error is that you're attempting to merge a plain "Tensor" with a "Keras tensor". It can be fixed by:

```python
z_embed = Reshape((1, target_embed_size))(z_input)  # or by writing a Lambda layer
```
Last, K.batch_dot doesn't seem to do what you want. Check this instead:

```python
xTzIy = K.sum(x * y * z, axis=1, keepdims=True)
```
And loss='accuracy'? Accuracy is a metric, not a loss function.
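Putting those fixes together, call can stay entirely in backend ops instead of instantiating Merge layers. A rough, untested sketch of what I mean:

```python
from keras.engine.topology import Layer
from keras import backend as K

class Bilinear(Layer):
    """Computes x^T diag(z) y row-wise, using only backend ops inside call."""
    def call(self, tensor, mask=None):
        x = tensor[:, 0, :]
        y = tensor[:, 1, :]
        z = tensor[:, 2, :]
        # x^T diag(z) y reduces to summing x * y * z over the embedding axis
        return K.sum(x * y * z, axis=1, keepdims=True)

    def get_output_shape_for(self, input_shape):
        return (input_shape[0], 1)
```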
I also got this `Exception: Output tensors to a Model must be Keras tensors.` from similar code.
I could not find any documentation about Keras tensors. Could someone point out the difference between a Keras tensor and a tensor defined by K.variable or K.placeholder? Why does Keras need them? How do I convert a Keras variable into a Keras tensor?
After some digging I found the answers to the first two questions from my previous post. The answers are in the source code of topology.py. I think it would be helpful to add the Input layer to the core layers documentation:
```
`Input()` is used to instantiate a Keras tensor.

A Keras tensor is a tensor object from the underlying backend
(Theano or TensorFlow), which we augment with certain
attributes that allow us to build a Keras model
just by knowing the inputs and outputs of the model.

For instance, if a, b and c are Keras tensors,
it becomes possible to do:
`model = Model(input=[a, b], output=c)`

The added Keras attributes are:
    ._keras_shape: integer shape tuple propagated
        via Keras-side shape inference.
    ._keras_history: last layer applied to the tensor.
        the entire layer graph is retrievable from that layer,
        recursively.
```
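In other words, you can check for the extra attributes yourself. A quick sketch (assuming Keras 1.x):

```python
from keras.layers import Input
from keras import backend as K

a = Input(shape=(10,))               # a Keras tensor: backend tensor plus Keras metadata
p = K.placeholder(shape=(None, 10))  # a plain backend tensor, no Keras metadata

print(hasattr(a, '_keras_history'))  # True:  Model() can trace the layer graph from it
print(hasattr(p, '_keras_history'))  # False: Model() will reject it as an input/output
```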
@joelthchao
Thanks for the recommendations.
I've had the same error. @fchollet said it could be solved by using a Lambda layer, but that does not work at all. He closed the issue: https://github.com/fchollet/keras/issues/3130
You guys are misusing Lambda layers.
A Lambda layer of the form `lambda x: x` has no effect. It doesn't make sense.
The argument you pass to a Lambda layer should be a function that returns a TF or Theano tensor. All TF/Theano operations being applied should be inside that function. This is explained clearly in the documentation for the Lambda layer.
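For example, the expand_dims op from the code above, wrapped correctly (a sketch, untested):

```python
from keras.layers import Lambda
from keras import backend as K

# Wrong: calling a backend op directly produces a plain TF/Theano tensor
# z_embed = K.expand_dims(z_input, dim=1)

# Right: the backend op lives inside the function passed to Lambda
z_embed = Lambda(lambda t: K.expand_dims(t, dim=1),
                 output_shape=lambda s: (s[0], 1, s[1]))(z_input)
```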
Otherwise, you can use a custom layer, and put your custom TF/Theano code in the call method, as explained here: http://keras.io/layers/writing-your-own-keras-layers/
@teffland your code is incomplete so it isn't possible to give you precise pointers. I think @joelthchao has already pointed out the most likely issues.
@ghost: a Model has to be defined with tensor inputs and tensor outputs, as explained throughout the documentation. You are setting a class instance as the output instead.
Your code:
```python
dis = Dissimilarity([crop_left, crop_right_bound])  # that's a class instance
patch_compare = Model([part_A_input, part_B_input], dis)  # dis should be a tensor, but it's a class instance!
```
Hence you get an error message that explicitly tells you what you did wrong:
```
Output tensors to a Model must be Keras tensors. Found: <__main__.Dissimilarity object
```
So maybe you meant to write:
```python
layer = Dissimilarity()
dis = layer([crop_left, crop_right_bound])  # this is now a tensor (if the Dissimilarity layer has been correctly implemented)
```
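For reference, a minimal Dissimilarity along those lines might look like this (a sketch based on the formula at the top of the thread, assuming `concatenated` corresponds to `inputs[0]`):

```python
from keras.engine.topology import Layer
from keras import backend as K

class Dissimilarity(Layer):
    """Row-wise Euclidean distance between two tensors."""
    def call(self, inputs, mask=None):
        # inputs is a list of two Keras tensors
        return K.sqrt(K.sum(K.square(inputs[1] - inputs[0]),
                            axis=1, keepdims=True))

    def get_output_shape_for(self, input_shape):
        # input_shape is a list of two shape tuples
        return (input_shape[0][0], 1)
```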