I have a tf.keras model that contains a BatchNormalization layer.
This is what I want to do:
Keras model => Checkpoint files => frozen_graph.pb => Load frozen graph (ERROR)
I get the following error message:
Input 0 of node inference/conv1_1_3x3_s2_bn/cond/ReadVariableOp/Switch was passed float from inference/conv1_1_3x3_s2_bn/gamma:0 incompatible with expected resource.
Here is a jupyter notebook which reproduces the error:
https://gist.github.com/JanRuettinger/6ba8662c4b8df86213bfc2ec6ee426ca#file-batchnorm-bug
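For reference, here is a minimal sketch of the load step that fails (TF 1.x API; the full reproduction, including model definition and freezing, is in the gist above):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    # The ValueError above is raised here, while importing the frozen graph.
    tf.import_graph_def(graph_def, name="inference")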
Any updates on this? I have the exact same issue.
Were you able to figure out @JanRuettinger ?
Same here
@JanRuettinger @bkanaki @fferroni I just resolved exactly the same issue. I posted an answer on SO describing how I managed it. In short, you have to call keras.backend.set_learning_phase(0) before loading your model:
import tensorflow as tf
from tensorflow.python.framework import graph_io
from tensorflow.keras.applications.inception_v3 import InceptionV3

def freeze_graph(graph, session, output):
    with graph.as_default():
        graphdef_inf = tf.graph_util.remove_training_nodes(graph.as_graph_def())
        graphdef_frozen = tf.graph_util.convert_variables_to_constants(session, graphdef_inf, output)
        graph_io.write_graph(graphdef_frozen, ".", "frozen_model.pb", as_text=False)

tf.keras.backend.set_learning_phase(0)  # must run before the model is built

base_model = InceptionV3()
session = tf.keras.backend.get_session()

INPUT_NODE = base_model.inputs[0].op.name
OUTPUT_NODE = base_model.outputs[0].op.name
freeze_graph(session.graph, session, [out.op.name for out in base_model.outputs])
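For context on why the order matters: in TF 1.x, Keras BatchNormalization layers build a conditional (the cond/Switch/ReadVariableOp nodes in the error message) that selects between training and inference behaviour based on the learning-phase tensor. If the learning phase is set to 0 before the model is constructed, only the inference branch is built, so convert_variables_to_constants no longer has to handle those conditional resource-variable reads.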
And this frozen model frozen_model.pb works perfectly:
from PIL import Image
import numpy as np
import tensorflow as tf

# https://i.imgur.com/tvOB18o.jpg
im = Image.open("/home/chichivica/Pictures/eagle.jpg").resize((299, 299), Image.BICUBIC)
im = np.array(im) / 255.0
im = im[None, ...]

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    net_inp, net_out = tf.import_graph_def(
        graph_def, return_elements=["input_1", "predictions/Softmax"]
    )

with tf.Session(graph=graph) as sess:
    out = sess.run(net_out.outputs[0], feed_dict={net_inp.outputs[0]: im})
    print(np.argmax(out))
Closing this issue since it's resolved. Feel free to reopen if the problem still persists. Thanks!
@chichivica I used the same approach to solve the issue. However, the key to the solution is that the learning phase has to be fixed before the model is constructed, which means the model becomes untrainable. Is there any way to train a model with a batch normalization layer such that the frozen model is still loadable later?
Hi, I am having this issue as well. I have a ResNet model trained with batch normalization layers, but loading the .pb file throws an error, and the technique mentioned above doesn't help.
The error I am getting is given below:
ValueError: Input 0 of node resnet50/bn_conv1/cond/ReadVariableOp/Switch was passed float from bn_conv1/gamma:0 incompatible with expected resource.
Is there a way to get an already trained network with batch normalization working in .pb format?
@posutsai were you able to solve it?
Thanks, it works for me.
@chkda did you set keras.backend.set_learning_phase(0) before loading the model? If you set it after loading the model, it does not work.
@posutsai you can simply store the model, set the learning phase, and load it again.
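A minimal sketch of that workflow, assuming TF 1.x and that the trained model was saved with model.save() to an HDF5 file ("trained_model.h5" is a placeholder name):

import tensorflow as tf
from tensorflow.python.framework import graph_io

# Freezing script, run after training has finished and the model was saved
# with model.save("trained_model.h5").
tf.keras.backend.set_learning_phase(0)  # before the model is (re)built
model = tf.keras.models.load_model("trained_model.h5")

session = tf.keras.backend.get_session()
output_names = [out.op.name for out in model.outputs]

with session.graph.as_default():
    graphdef_inf = tf.graph_util.remove_training_nodes(session.graph.as_graph_def())
    graphdef_frozen = tf.graph_util.convert_variables_to_constants(session, graphdef_inf, output_names)
    graph_io.write_graph(graphdef_frozen, ".", "frozen_model.pb", as_text=False)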
@davidwalter2 this is my code; currently it's doing no good. Before importing the graph and checkpoint files I am setting tf.keras.backend.set_learning_phase(0).
import tensorflow as tf
from tensorflow.python.framework import graph_util
import os, sys

tf.keras.backend.set_learning_phase(0)

with tf.Session() as sess:
    gom = tf.train.import_meta_graph('C:\\DL\\models\\job.ckpt-2.meta')
    gom.restore(sess, tf.train.latest_checkpoint('C:\\DL\\models'))
    graph = tf.get_default_graph()
    input_graph = graph.as_graph_def()
    output_node_name = "sigmoid"
    output_graph = tf.graph_util.convert_variables_to_constants(sess, input_graph, output_node_name.split(','))
    res_file = 'C:\\DL\\models\\resnet.pb'
    with tf.gfile.GFile(res_file, 'wb') as f:
        f.write(output_graph.SerializeToString())
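If I understand the earlier comments correctly, the problem might be that the meta graph in job.ckpt-2.meta was exported while the learning phase was still unset, so the training-mode cond/Switch ops are already serialized inside it and setting the learning phase in this script cannot remove them. A sketch of the alternative suggested above, assuming the same architecture can be rebuilt in code (build_resnet_model is a placeholder) and that its variable names match the checkpoint:

import tensorflow as tf

tf.keras.backend.set_learning_phase(0)  # before any layer is constructed

model = build_resnet_model()            # placeholder: same definition used for training
sess = tf.keras.backend.get_session()

# Restore the trained weights into the freshly built inference-only graph.
saver = tf.train.Saver()
saver.restore(sess, tf.train.latest_checkpoint('C:\\DL\\models'))

output_graph = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ["sigmoid"])
with tf.gfile.GFile('C:\\DL\\models\\resnet_frozen.pb', 'wb') as f:
    f.write(output_graph.SerializeToString())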
@chkda Were you able to solve the issue, even I face exactly the same one.
@davidwalter2 Could you please suggest clear steps for resolving this in the case of a ResNet model trained with batch normalization, when loading the .pb file?
I'm unable to resolve the issue with the solution by @chichivica.
I'm using "Keras as a simplified interface to TensorFlow", in that I build the model with Keras layers but don't use a Keras model class.
I've tried all permutations of setting and not setting tf.keras.backend.set_learning_phase(1) during training, before freezing and before loading the model.