The blog post "Keras as a simplified interface to TensorFlow: tutorial" says that each entry in `layer.updates` is NOT an op, and wraps them in `tf.assign` manually. See below:
from keras.layers import BatchNormalization

layer = BatchNormalization()(x)

update_ops = []
for old_value, new_value in layer.updates:
    update_ops.append(tf.assign(old_value, new_value))
But I believe each update is already an op; see the code snippet below from backend/tensorflow_backend.py 👍
for update in updates:
    if isinstance(update, tuple):
        p, new_p = update
        updates_ops.append(tf.assign(p, new_p))
    else:
        # assumed already an op
        updates_ops.append(update)
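The dispatch above can be illustrated with a small pure-Python sketch (no TensorFlow needed): `(variable, new_value)` tuples are converted into assign ops via an `assign` callable, while anything else is assumed to already be an op and passed through. The names `collect_update_ops` and `fake_assign` are hypothetical, not from the Keras codebase.

```python
def collect_update_ops(updates, assign):
    """Mimic the tensorflow_backend handling of `updates`:
    (variable, new_value) tuples become assign ops via `assign`;
    bare entries are assumed to already be ops and kept as-is."""
    ops = []
    for update in updates:
        if isinstance(update, tuple):
            p, new_p = update
            ops.append(assign(p, new_p))
        else:
            # assumed already an op
            ops.append(update)
    return ops

# Stand-in for tf.assign: record the assignment as a tagged tuple.
fake_assign = lambda p, new_p: ("assign", p, new_p)

ops = collect_update_ops([("moving_mean", 0.5), "existing_op"], fake_assign)
# The tuple entry was wrapped; the bare entry passed through untouched.
```

This is why passing either form works: the backend normalizes both into ops before grouping them.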
So, my point is: shouldn't the blog post be revised?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
Hi,
I am using a Keras model in a TensorFlow train/test pipeline. My model uses two BatchNormalization layers. I understand that `model.updates` stores the assign tensors, but I am unable to proceed any further.
So, could you explain in a little more detail how exactly to update the population parameters (the moving mean and variance) in a TensorFlow training setting?
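One common pattern (a minimal TF1-style sketch, not an official Keras recipe) is to group the update ops with the training op via `tf.control_dependencies`, so that every training step also refreshes the population statistics. Here `moving_mean` and its hand-written update are stand-ins for what `model.updates` would contain, and `tf.assign_add` stands in for `optimizer.minimize(loss)`:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

step = tf.Variable(0.0)         # stand-in for the trainable weights
moving_mean = tf.Variable(0.0)  # plays the role of a BN population statistic

# In a real pipeline this list would be `model.updates`.
update_ops = [tf.assign(moving_mean, 0.9 * moving_mean + 0.1 * 5.0)]

# Any op created inside this context runs only after the updates run.
with tf.control_dependencies(update_ops):
    train_op = tf.assign_add(step, 1.0)  # stand-in for optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)                  # one "training" step
    result = sess.run(moving_mean)      # population statistic was updated too
```

Running `train_op` alone is enough: the control dependency forces the moving-average update to execute first, so `moving_mean` moves from 0.0 toward 5.0 after the step.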