I found that LambdaMerge was deleted from core.py and keras.io.
I want to implement a merge mode that computes the Euclidean distance (`euclidean` is from scipy).
My left model is a Sequential RNN while the right model is a Sequential CNN.
Here is a snippet of my current code.
```python
# NOTE: scipy's euclidean expects NumPy arrays, not symbolic Keras tensors
from scipy.spatial.distance import euclidean

def edis(inputs):
    l1 = inputs[0]
    l2 = inputs[1]
    return euclidean(l1, l2)

model = Sequential()
model.add(LambdaMerge(layers=[left, right], mode=edis))
```
Now I can't find the LambdaMerge source code in core.py. Has it been removed from Keras?
If I want to implement a custom merge function, is the only way to modify the source code of the Merge layer? It seems the Merge layer has also been moved to another .py file, since the only line related to Merge I can find in core.py is:
```python
from ..engine import InputSpec, Layer, Merge
```
P.S. This is the previous snippet for LambdaMerge from my locally downloaded Keras package:
```python
class LambdaMerge(Lambda):
    '''LambdaMerge layer for evaluating an arbitrary Theano / TensorFlow
    function over multiple inputs.

    # Output shape
        Specified by output_shape argument

    # Arguments
        layers - Input layers. Similar to layers argument of Merge
        function - The function to be evaluated. Takes one argument:
            list of outputs from input layers
        output_shape - Expected output shape from function.
            Could be a tuple or a function of list of input shapes
        arguments: optional dictionary of keyword arguments to be passed
            to the function.
    '''
```
You can still do that; the functionality is just built into the Merge layer by setting `mode` to a function or lambda. From the docs:

Arguments
- mode: string or lambda/function. If string, must be one of: 'sum', 'mul', 'concat', 'ave', 'cos', 'dot'. If lambda/function, it should take as input a list of tensors and return a single tensor.
So you can use `model.add(Merge(layers=[left, right], mode=edis))` instead and set the appropriate `output_shape`.
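For example, a minimal sketch of what that could look like (not from the original post: the two Dense branches below are just placeholders standing in for the RNN/CNN models, and `edis` is rewritten with Keras backend ops because the merge function receives symbolic tensors, not NumPy arrays):

```python
from keras import backend as K
from keras.engine import Merge
from keras.layers import Dense
from keras.models import Sequential

# toy branches standing in for the real RNN / CNN models
left = Sequential()
left.add(Dense(16, input_dim=10))
right = Sequential()
right.add(Dense(16, input_dim=20))

def edis(inputs):
    # Euclidean distance per sample, computed with backend ops
    # (note: the gradient of K.sqrt is undefined at exactly zero;
    #  dropping the sqrt gives the squared distance instead)
    s = inputs[0] - inputs[1]
    return K.sqrt(K.sum(K.square(s), axis=1, keepdims=True))

def edis_output_shape(input_shapes):
    # one distance value per sample
    return (input_shapes[0][0], 1)

model = Sequential()
model.add(Merge(layers=[left, right], mode=edis, output_shape=edis_output_shape))
```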
@zo7 Thank you! I will have a try.
Make sure to provide the output_shape parameter if using a lambda function, since it's not inferred. Here's an example for subtracting two layers:
```python
merge([a, b], mode=lambda x: x[0] - x[1], output_shape=lambda x: x[0])
```
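For context, here is a small self-contained sketch of that subtract-merge wired into the Keras 1.x functional API (layer sizes and names are made up for illustration):

```python
from keras.layers import Input, Dense, merge
from keras.models import Model

# two toy branches with matching output dimensions
in_a = Input(shape=(20,))
in_b = Input(shape=(20,))
a = Dense(8)(in_a)
b = Dense(8)(in_b)

# element-wise difference of the two branch outputs;
# output_shape just reuses the first input's shape
diff = merge([a, b], mode=lambda x: x[0] - x[1], output_shape=lambda x: x[0])

model = Model(input=[in_a, in_b], output=diff)
model.compile(loss='mae', optimizer='sgd')
```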
@codekansas Thank you for the reminder about output_shape.
It took me several hours to figure out how to write the output_shape until I found an example for the Lambda layer.
Here is my solution:
```python
def edis(inputs):
    s = inputs[0] - inputs[1]
    output = K.sqrt(K.batch_dot(s, s, -1))
    return output

def edis_outputshape(input_shape):
    shape = list(input_shape)
    assert len(shape) == 2
    outshape = (shape[0][0], 1)
    return tuple(outshape)

model = Sequential()
model.add(Merge(layers=[left, right], mode=edis, output_shape=edis_outputshape))
```
It passes model.compile but fails in model.fit; I am debugging the new errors.
My new problem is similar to https://github.com/fchollet/keras/issues/981
However, my Keras version is the newest, 1.0.1.
I tried to minimize the Euclidean distance between ma's output and mb's output.
Here is my simplified test code; edis and edis_outputshape are as defined above. (I don't know why the pasted Python code always turns back into plain text, so I pasted screenshots.)

And this is the error information.


I wondered whether this was caused by the Merge part, so I changed the merge mode to the built-in 'cos'. Still the same error.
It seems that the error appears when the model runs gradient descent on the inputs.
@zo7 @codekansas @fchollet
Could anyone please help me? I have no idea now. :(
I tested the Merge example "Two merged LSTM encoders for classification over two parallel sequences".
It ran normally on my current keras version.

So did the above error come from the Merge function edis or the edis_outputshape?
Wow, this is fantastic. I too missed that LambdaMerge was still around in the new API. Thanks @zo7!
@walleva The error message kinda looks like Theano is having trouble computing your gradient, so it might be a problem with edis. Is there a reason you're using K.batch_dot instead of K.dot? (I'm not sure of the main difference between the two)
@zo7 I wrote this function with reference to built-in 'cos' mode in Merge.

And K.batch_dot comes from the Theano backend's batch_dot.

I also think the problem is caused by the gradient. I will try to modify my edis function. Thank you!
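As a rough illustration of the difference being discussed (the shapes below are made up; this is how the two backend functions behave on 2D tensors, as far as I understand them):

```python
import numpy as np
from keras import backend as K

x = K.variable(np.random.random((32, 20)))  # (batch, features)
y = K.variable(np.random.random((32, 20)))

# per-sample dot product: contracts the feature axis separately for each sample
per_sample = K.batch_dot(x, y, axes=1)      # shape (32, 1)

# ordinary matrix product: mixes samples across the batch
pairwise = K.dot(x, K.transpose(y))         # shape (32, 32)
```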
Why do you screenshot your code? It makes it hard to read on high DPI screens, and isn't particularly copy/paste friendly. Could you use Markdown instead?
Hey @walleva,
Please try the following definitions instead:
```python
def euc_dist(x):
    # per-sample sum of squared differences (squared Euclidean distance)
    s = x[0] - x[1]
    output = (s ** 2).sum(axis=1)
    output = K.reshape(output, (output.shape[0], 1))
    return output

def euc_dist_shape(input_shape):
    # one distance value per sample
    shape = list(input_shape)
    outshape = (shape[0][0], 1)
    return tuple(outshape)
```
They worked for me.
@carlthome Sorry for pasting the screenshots. Each time I paste code snippets between the code markers, only a few lines come out formatted as code and the rest stays plain text. I will try the Markdown you suggested. Thank you!
@benjaminklein They also worked for me!!! Thank you VERY MUCH!!!
I've pasted the code that worked for me below. Hope it is useful to others.
```python
import numpy as np

from keras import backend as K
from keras.engine import Merge
from keras.layers.core import Dense
from keras.models import Sequential


def euc_dist(x):
    '''Merge mode: squared euclidean distance between the two branch outputs.'''
    s = x[0] - x[1]
    output = (s ** 2).sum(axis=1)
    output = K.reshape(output, (output.shape[0], 1))
    return output


def euc_dist_shape(input_shape):
    '''Merge output shape: one distance value per sample.'''
    shape = list(input_shape)
    outshape = (shape[0][0], 1)
    return tuple(outshape)


# two toy branches
ma = Sequential()
ma.add(Dense(30, input_dim=20))
ma.add(Dense(15))

mb = Sequential()
mb.add(Dense(15, input_dim=50))

# merged model
modelmerge = Sequential()
modelmerge.add(Merge(layers=[ma, mb], mode=euc_dist, output_shape=euc_dist_shape))

print('Compile')
modelmerge.compile(loss='mae', optimizer='sgd')

# random training and validation data
nb_train = 20
Xa = np.random.random((nb_train, 20))
Xb = np.random.random((nb_train, 50))
y_train = np.random.uniform(0, 0.001, (nb_train, 1))

nb_val = 10
Xa_val = np.random.random((nb_val, 20))
Xb_val = np.random.random((nb_val, 50))
y_val = np.random.uniform(0, 0.001, (nb_val, 1))

print('Fit')
his = modelmerge.fit([Xa, Xb], y_train, batch_size=10, nb_epoch=2,
                     validation_data=([Xa_val, Xb_val], y_val))
print(his.history)
```
It works for me too. Thank you for sharing your code, @walleva!
But now that the Merge layer is being deprecated, and the warning says it will be removed in 2017-08, what's the solution for merging with a lambda?
@undertherain Just use Lambda, see answer keras-merge-layer-warning
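For anyone landing here on Keras 2, a rough sketch of the same idea with a Lambda layer and the functional API (the layer sizes are placeholders, not taken from the code above):

```python
from keras import backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

def euc_dist(tensors):
    # per-sample squared Euclidean distance between the two inputs
    s = tensors[0] - tensors[1]
    return K.sum(K.square(s), axis=1, keepdims=True)

in_a = Input(shape=(20,))
in_b = Input(shape=(50,))
a = Dense(15)(in_a)
b = Dense(15)(in_b)

# Lambda applied to a list of tensors replaces the old Merge(mode=...) pattern
dist = Lambda(euc_dist, output_shape=lambda shapes: (shapes[0][0], 1))([a, b])

model = Model(inputs=[in_a, in_b], outputs=dist)
model.compile(loss='mae', optimizer='sgd')
```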