Keras: `predict` should be able to activate learning phase

Created on 25 Jun 2016 · 6 comments · Source: keras-team/keras

In models such as denoising auto-encoders it is often helpful to inspect the reconstruction results. However, the noise is only applied during the learning phase, not during prediction, so one cannot use model.predict and instead has to build a function from the model with K.function (a sketch of that workaround follows the proposal below). This workaround can introduce bugs, such as passing an incorrect batch size, and is generally awkward.

Why not add an in_learning_phase parameter to Model.predict? E.g.

    def predict(self, x, batch_size=32, verbose=0, in_learning_phase=False):
        # ...
        if self.uses_learning_phase:
            ins = x + [1. if in_learning_phase else 0.]
        # ...
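
For reference, the K.function workaround mentioned above looks roughly like this. A minimal sketch, assuming a single-input, single-output model named model and an input array x_batch (both names are illustrative):

```
from keras import backend as K

# Build a callable that runs the model with the learning phase forced to 1
# (training behavior), so noise/dropout layers stay active.
predict_in_training_mode = K.function(
    [model.input, K.learning_phase()],
    [model.output])

# The caller is responsible for batching; an incorrect batch size here is
# exactly the kind of bug the proposed in_learning_phase flag would avoid.
reconstruction = predict_in_training_mode([x_batch, 1])[0]
```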

All 6 comments

Plenty of ways to have test-time dropout or noise if you wish to.

predict means _by definition_ learning_phase=0.

And what way would that be, without using K.function? I would not have proposed this change if I had found something that does that.

e.g. K.dropout...
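
For context, one way to apply dropout unconditionally (i.e. also at prediction time) is to call K.dropout inside a Lambda layer, since K.dropout itself is not gated on the learning phase. A minimal sketch, with an assumed rate of 0.5 and an assumed input size of 100:

```
from keras.layers import Dense, Input, Lambda
from keras.models import Model
from keras import backend as K

# Dropout that is always applied, regardless of the learning phase.
always_on_dropout = Lambda(lambda t: K.dropout(t, level=0.5))

inputs = Input(shape=(100,))
h = Dense(128, activation='relu')(inputs)
h = always_on_dropout(h)  # active in both fit() and predict()
outputs = Dense(100)(h)
model = Model(inputs, outputs)
```

This is the kind of model change the next comment objects to: it alters prediction behavior permanently rather than only for inspection.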

Thanks a lot, but I feel we are not on the same page. I would have to change my model significantly to integrate K.dropout at test time, even though I just want to inspect my model's behavior during training. I don't want to change the prediction behavior permanently; I just want to inspect how my model behaves during training, with minimal effort (to avoid introducing further mistakes) while debugging.

My scenario is the following:
I have an auto-encoding model that uses noise during training but not at test time. I suspect something is wrong, so I want to inspect the reconstructions it produces during training.

This scenario also applies to models that use dropout or batch normalization.

Can you give an example of test-time dropout? Is there a setting I need to change?

Solution with TensorFlow as the backend

In "Keras as a simplified interface to TensorFlow: tutorial", the section "Different behaviors during training and testing" explains one way to do this. Essentially, if you have a Keras model called model and you want to get its output with the learning phase set to 1, you can do:

```
import tensorflow as tf
from keras import backend as K

sess = K.get_session()
# Placeholder matching the model's input; float32 is an assumption
input_ph = tf.placeholder(tf.float32, shape=model.input.shape)
output = sess.run(model(input_ph),
                  feed_dict={input_ph: x_batch,  # x_batch: your input data (illustrative name)
                             K.learning_phase(): 1})
```

This also works with tf.keras; you just have to import the backend from tf.keras instead of from keras.
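
A minimal sketch of that variation, assuming TF 1.x graph mode (where a default session exists) and the same model and input_ph as above:

```
import tensorflow as tf
from tensorflow.keras import backend as K  # instead of `from keras import backend as K`

sess = K.get_session()
# model and input_ph are built exactly as in the snippet above
output = sess.run(model(input_ph),
                  feed_dict={input_ph: x_batch,  # x_batch: your input data (illustrative name)
                             K.learning_phase(): 1})
```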
