I'm looking for a way to do something that seems conceptually simple, but I haven't found a way to do it in Keras yet. I have a CNN with dropout applied in various places in the network. After training, I'd like to compute the forward pass on some data and have the dropout mask applied such that each time I compute the forward pass a different dropout mask is applied.
If I understand things correctly, this is what is done during training. However, if I use the standard model.predict(X_test) after training, no dropout is applied (the expected scaling is already handled during training), yielding a deterministic set of predictions.
Does anyone know how I can achieve this?
Permanent dropout:
from keras.layers.core import Lambda
from keras import backend as K

# Applies dropout on every forward pass, including model.predict()
model.add(Lambda(lambda x: K.dropout(x, level=0.5)))
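For context, here is a minimal sketch of how this Lambda trick could sit inside a small network; the layer sizes and random input below are illustrative placeholders, not from the original comment:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers.core import Lambda
from keras import backend as K

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(20,)))
# Dropout stays active at prediction time, so each forward pass is stochastic
model.add(Lambda(lambda x: K.dropout(x, level=0.5)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

# Two predictions on identical inputs now differ, because fresh masks are drawn
x = np.random.rand(5, 20)
print(model.predict(x) - model.predict(x))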
Is there no elegant way of doing this with a 'normally' defined model, so that at test time it can be evaluated both with and without the dropout mask?
The simplest thing you can do is use your own dropout layer. Look at the code for the existing dropout layer and simply remove the part that makes dropout conditional on the learning phase.
@fchollet How do we generalize this answer to other forms of dropout, such as those in Embedding and RNN layers?
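One way to approach this (a sketch, not from the original thread) is the training call argument that comes up later in this discussion: recurrent layers in Keras 2.x also accept it, which forces their input and recurrent dropout at inference time. The layer sizes below are placeholders:

import keras

inputs = keras.Input(shape=(None, 16))
# training=True keeps dropout and recurrent_dropout active in predict()
x = keras.layers.LSTM(32, dropout=0.3, recurrent_dropout=0.3)(inputs, training=True)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)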
from keras import backend as K
from keras.layers import Dropout

class PermanentDropout(Dropout):
    """Dropout that ignores the learning phase, so it also runs at test time."""
    def __init__(self, rate, **kwargs):
        super(PermanentDropout, self).__init__(rate, **kwargs)
        self.uses_learning_phase = False

    def call(self, x, mask=None):
        if 0. < self.rate < 1.:
            noise_shape = self._get_noise_shape(x)
            x = K.dropout(x, self.rate, noise_shape)
        return x
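A brief usage sketch (the surrounding model is illustrative, not part of the original comment):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(20,)))
model.add(PermanentDropout(0.5))  # remains stochastic in model.predict()
model.add(Dense(1))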
@fchollet I am interested in making a dropout layer that is static throughout the course of training and testing. Unlike normal dropout, I only want to sever a certain fraction of randomly chosen weights, not entire nodes. Is creating a custom layer the easiest way to achieve this?
Here is my current progress. Any suggestions would help.
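The commenter's work-in-progress code isn't shown here. Purely as an illustration of the custom-layer route, one way to drop a fixed random subset of weights (rather than whole units) is to multiply the kernel by a binary mask sampled once at build time; the layer below is an assumption, not the original code:

import numpy as np
from keras import backend as K
from keras.layers import Dense

class MaskedDense(Dense):
    """Dense layer whose kernel is multiplied by a fixed random binary mask."""
    def __init__(self, units, drop_fraction=0.5, **kwargs):
        super(MaskedDense, self).__init__(units, **kwargs)
        self.drop_fraction = drop_fraction

    def build(self, input_shape):
        super(MaskedDense, self).build(input_shape)
        # Sample the mask once; it then stays fixed for both training and testing
        mask = np.random.rand(*K.int_shape(self.kernel)) >= self.drop_fraction
        self.kernel_mask = K.constant(mask.astype('float32'))

    def call(self, inputs):
        output = K.dot(inputs, self.kernel * self.kernel_mask)
        if self.use_bias:
            output = K.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        return output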
@fchollet Is there a way to do the same with spatial dropout?
Based on issue #9412, it appears that this "permanent dropout" feature has been (quietly) added into core Keras.
Per @fchollet:
There is this feature in Keras: it's the training argument in the call of the Dropout layer.
Here's a model with a Dense layer and a Dropout layer that runs both in training and testing:
import keras

inputs = keras.Input(shape=(10,))
x = keras.layers.Dense(3)(inputs)
outputs = keras.layers.Dropout(0.5)(x, training=True)
model = keras.Model(inputs, outputs)
Oddly, the training argument doesn't currently appear to be documented on https://keras.io/layers/core/, but you can see it in line 118 of the source.
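With a model built this way, repeated predictions yield different samples, so you can average several stochastic forward passes in the Monte-Carlo-dropout style. A sketch (the input data below is a placeholder):

import numpy as np

x_test = np.random.rand(100, 10)
# Each predict() draws fresh dropout masks because training=True was baked in
samples = np.stack([model.predict(x_test) for _ in range(30)])
mean_prediction = samples.mean(axis=0)
uncertainty = samples.std(axis=0)

The same training=True call argument should also work for the SpatialDropout variants, since they subclass Dropout.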
You can also enable dropout at prediction time on an already-trained model:
from keras import backend as K

# Build a backend function that takes the learning-phase flag as an extra input
f = K.function([model.layers[0].input, K.learning_phase()],
               [model.layers[-1].output])
This way you don't have to retrain the model!
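A usage sketch for this backend function (x_test is a placeholder for your own inputs; passing 1 switches the learning phase on so dropout masks are sampled on every call):

import numpy as np

x_test = np.random.rand(100, 10)  # placeholder input batch
stochastic_preds = np.stack([f([x_test, 1])[0] for _ in range(20)])
mean_pred = stochastic_preds.mean(axis=0)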
How can I pass dropout rate during predict in Keras?
for example:
dropout_rate = 0.1
prediction = model.predict(x_test, dropout_rate)