Keras: Gradient-reversal layer

Created on 1 Jul 2016 · 10 comments · Source: keras-team/keras

I'd like to bring up here a question from keras-users: https://groups.google.com/forum/#!topic/keras-users/QWe3xAkuBOg

I'm wondering if and how the gradient reversal layer from the paper "Domain-Adversarial Training of Neural Networks" (http://arxiv.org/abs/1505.07818) can be implemented in Keras.

The "gradient reversal layer" as described there is a layer which passes input through unchanged, but during backpropagation multiplies all gradients by a constant (negative) factor.*

According to this answer https://groups.google.com/forum/#!topic/keras-users/FcbrlLY94ms

If you want to explicitly define your gradients you would probably be better off using Torch than using Keras/Theano.

it seems this is not possible in Keras. Am I right?

stale

All 10 comments

It's possible in Keras. You'll have to skip using fit/predict and instead define your own optimization procedure.
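For reference, a rough sketch of what such a hand-rolled optimization step could look like with the Theano backend. Everything here is hypothetical: label_loss / domain_loss are symbolic losses, feature_params / label_params / domain_params are parameter lists you collect yourself, and x, y_label, y_domain are the symbolic inputs.

import theano
import theano.tensor as T

lr = 0.01        # learning rate (illustrative)
hp_lambda = 1.0  # gradient-reversal factor (illustrative)
updates = []

# Shared feature extractor: follow the label loss, but reverse (and scale)
# the gradient coming from the domain loss -- this is the DANN objective.
for p in feature_params:
    g = T.grad(label_loss, p) - hp_lambda * T.grad(domain_loss, p)
    updates.append((p, p - lr * g))

# Label predictor and domain classifier train normally on their own losses.
for p in label_params:
    updates.append((p, p - lr * T.grad(label_loss, p)))
for p in domain_params:
    updates.append((p, p - lr * T.grad(domain_loss, p)))

train_step = theano.function([x, y_label, y_domain],
                             [label_loss, domain_loss],
                             updates=updates)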


@fchollet That sounds much harder than defining a "gradient reversal layer".

The following code works for me, though it might only be compatible with the Theano backend.

import theano
from keras.engine.topology import Layer  # base Layer class (Keras 1.x)


class ReverseGradient(theano.Op):
    """Theano operation that reverses the gradient.
    Introduced in http://arxiv.org/pdf/1409.7495.pdf
    """

    # The output is a view of the input: perform() does not copy the data.
    view_map = {0: [0]}

    __props__ = ('hp_lambda', )

    def __init__(self, hp_lambda):
        super(ReverseGradient, self).__init__()
        self.hp_lambda = hp_lambda

    def make_node(self, x):
        assert hasattr(self, '_props'), "Your version of theano is too old to support __props__."
        x = theano.tensor.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        # Forward pass: identity.
        xin, = inputs
        xout, = output_storage
        xout[0] = xin

    def grad(self, inputs, output_gradients):
        # Backward pass: scale the incoming gradient by -hp_lambda.
        return [-self.hp_lambda * output_gradients[0]]

    def infer_shape(self, node, i0_shapes):
        return i0_shapes


class GradientReversalLayer(Layer):
    """Reverse a gradient.
    <feedforward> return input x
    <backward> return -lambda * delta
    """

    def __init__(self, hp_lambda, **kwargs):
        super(GradientReversalLayer, self).__init__(**kwargs)
        self.hp_lambda = hp_lambda
        self.gr_op = ReverseGradient(self.hp_lambda)

    def build(self, input_shape):
        # No trainable parameters.
        self.trainable_weights = []

    def call(self, x, mask=None):
        return self.gr_op(x)

    def get_output_shape_for(self, input_shape):
        return input_shape

    def get_config(self):
        config = {"name": self.__class__.__name__,
                  "lambda": self.hp_lambda}
        base_config = super(GradientReversalLayer, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
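For anyone wondering how this layer would be wired up: a minimal usage sketch with the Keras 1.x functional API, placing the layer in front of the domain classifier of a two-head model. Layer sizes and names are purely illustrative.

from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(100,))
features = Dense(64, activation='relu')(inputs)

# Label predictor: gradients flow back into the feature extractor unchanged.
label_out = Dense(10, activation='softmax', name='label')(features)

# Domain classifier: gradients reaching the feature extractor are multiplied by -hp_lambda.
reversed_features = GradientReversalLayer(hp_lambda=1.0)(features)
domain_out = Dense(2, activation='softmax', name='domain')(reversed_features)

model = Model(input=inputs, output=[label_out, domain_out])
model.compile(optimizer='sgd',
              loss={'label': 'categorical_crossentropy',
                    'domain': 'categorical_crossentropy'})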

@yusuke0519 Thank you very much, I will try!

@ducha-aiki Did this work for you?

@VanushVaswani yes, it does.

@ducha-aiki Thanks. Are you using it for domain adaptation?

@VanushVaswani no, for a different task.

@yusuke0519 What do you pass as an argument to the layer? I.e., what is hp_lambda when the layer is placed in a Keras model?

@michetonu I guess it is the scalar by which you want to multiply the gradient(s). It lets you control the trade-off between minimizing the loss on one task and maximizing the loss on the other during training. This paper gives a very good visualization of it (p. 3).
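For what it's worth, the DANN paper does not keep this factor fixed: it anneals it from 0 towards 1 over training with the schedule lambda_p = 2 / (1 + exp(-gamma * p)) - 1, with gamma = 10. A small sketch:

import numpy as np

def dann_lambda(p, gamma=10.0):
    """Schedule from the DANN paper; p in [0, 1] is training progress."""
    return 2.0 / (1.0 + np.exp(-gamma * p)) - 1.0

# The factor grows from 0 at the start of training towards 1 at the end.
print([round(dann_lambda(p), 3) for p in (0.0, 0.25, 0.5, 1.0)])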

