Keras: Variable called "std" in BatchNormalization code actually stores variance

Created on 16 Aug 2016 · 3 comments · Source: keras-team/keras

This is not a bug, but it's confusing and could lead people to make errors if they try to extend the code. Not sure if people are already aware, but the variables "std" and "running_std" in the BatchNormalization code (e.g. here: https://github.com/fchollet/keras/blob/master/keras/layers/normalization.py#L120) actually store the variance (see the return value of normalize_batch_in_training here: https://github.com/fchollet/keras/blob/master/keras/backend/theano_backend.py#L383). The mismatch cancels out because the batch_normalization function, which is passed this "std" variable (https://github.com/fchollet/keras/blob/master/keras/layers/normalization.py#L139), actually expects the variance (https://github.com/fchollet/keras/blob/master/keras/backend/theano_backend.py#L386). So nothing is computed incorrectly, but the naming is misleading.
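To illustrate the pattern being described, here is a minimal NumPy sketch (not the actual Keras/Theano code) of a batch-normalization helper that returns the batch *variance*, which a caller then binds to a variable named "std":

```python
import numpy as np

def normalize_batch_in_training(x, epsilon=1e-5):
    """Sketch of the described API: returns normalized batch, mean, and
    the VARIANCE (not the standard deviation) per feature."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)                      # this is the variance...
    x_normed = (x - mean) / np.sqrt(var + epsilon)
    return x_normed, mean, var

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(1000, 4))

# ...yet at the call site it gets named "std", as in the issue:
x_normed, mean, std = normalize_batch_in_training(x)

# The misnamed "std" actually holds values near scale**2 = 4,
# not near scale = 2 as the name suggests.
```

The computation is still correct, because everything downstream treats the value as a variance; only the name is wrong.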

stale

All 3 comments

The normalization should be to unit standard deviation, but since var = std², it is easier to just use the variance.

That's fine, but why call the variable "std" instead of just calling it "var"?

AFAIK, this is still an issue.
