Keras: model parameter in Lambda layers

Created on 5 Dec 2016 · 5 comments · Source: keras-team/keras

Hi all,

I want to implement a very simple Lambda layer: for an input vector, I want to scale it by a weight W and then feed the result into a softmax activation.

I implemented it this way, declaring the weight W as a K.variable inside the Lambda function:

import keras.backend as K
from keras.models import Model
from keras.layers import Activation, Input
from keras.layers.core import Lambda

N = 11
my_input = Input(shape=(N,), name='input')

def scale(x):
    # the weight is a raw backend variable, not registered with any layer
    w = K.variable(1.0, name='w_g')
    return x * w  # elementwise scaling (there is no K.mul in the Keras backend)
def scale_output_shape(input_shape):
    return input_shape

scaled = Lambda(scale, scale_output_shape, name='softmax_scale')(my_input)
my_out = Activation('softmax', name='softmax')(scaled)
gating = Model(input=my_input, output=my_out, name='gating')

This definition runs, but I wonder whether the parameter W will be updated at all, since the model's summary reports that the model has no parameters:

>>> gating.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input (InputLayer)               (None, 11)            0                                            
____________________________________________________________________________________________________
softmax_scale (Lambda)           (None, 11)            0           input[0][0]                      
____________________________________________________________________________________________________
softmax (Activation)             (None, 11)            0           softmax_scale[0][0]              
====================================================================================================
Total params: 0
____________________________________________________________________________________________________

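As a sanity check (a sketch, assuming the gating model defined above), you can inspect the list of weights Keras would hand to the optimizer:

# trainable_weights is the list of variables the optimizer updates;
# the K.variable created inside the Lambda function is never registered there
print(gating.trainable_weights)  # -> []

which suggests W would not be updated during training.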
Or do you have a better way to implement this "scaled softmax" computation?

Thanks,


All 5 comments

Was hoping to find an answer to a similar question and ended up on this page. How do we specify trainable parameters in a Lambda layer definition?

Hi,

I achieved a trainable parameter by subclassing Layer and using Layer.add_weight(); you can find my code here:

https://github.com/X-Wei/ETH-thesis-TREC/blob/master/2-DRMM/DRMM.py#L33

The ScaledLayer class contains just one trainable parameter W, and the output is the input scaled by W.
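For reference, here is a minimal sketch of that approach (not the linked code verbatim; it assumes the Keras 2 Layer API, so method and initializer names may differ slightly on Keras 1.x):

from keras.engine.topology import Layer

class ScaledLayer(Layer):
    """Multiplies the input elementwise by a single trainable scalar W."""

    def build(self, input_shape):
        # add_weight registers W with the layer, so it appears in
        # trainable_weights and is counted by model.summary()
        self.W = self.add_weight(name='w_scale', shape=(1,),
                                 initializer='ones', trainable=True)
        super(ScaledLayer, self).build(input_shape)

    def call(self, x):
        return x * self.W

    def compute_output_shape(self, input_shape):
        return input_shape

Using `scaled = ScaledLayer(name='softmax_scale')(my_input)` in place of the Lambda above, summary() then reports one trainable parameter.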

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

I have a similar question. Is there a way to define parameters as inputs, rather than inside a Keras layer?

There is a similar thread on this, where someone may have a solution, but it didn't work out for me: either I can't understand his example, or it's broken.
