Attempt to load pre-trained inception v4 model from checkpoint fails
Code like:

    import tensorflow as tf
    from nets import inception_v4  # TF-Slim models "nets" package, assumed to be on the path

    batch_size = 5
    height, width = 299, 299
    inputs = tf.random_uniform((batch_size, height, width, 3))
    net, end_points = inception_v4.inception_v4_base(inputs)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        # Restore variables from disk.
        saver.restore(sess, "inception_v4.ckpt")
        print("Model restored.")
fails with:
NotFoundError (see above for traceback): Tensor name "InceptionV4/Mixed_7d/Branch_2/Conv2d_0c_1x3/biases" not found in checkpoint files /Users/tdurakov/PycharmProjects/sandbox/inception_v4.ckpt
[[Node: save/RestoreV2_290 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_290/tensor_names, save/RestoreV2_290/shape_and_slices)]]
I ran into the same problem when restoring inception_res_v2.ckpt.
@sguada could you take a look?
Unfortunately, one possibility is that the checkpoints simply aren't compatible with the code anymore after the updates for TF 1.0+.
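One way to check whether a checkpoint still matches the code is to list the variable names it actually contains and compare them with what the graph tries to restore. A minimal sketch, assuming the same inception_v4.ckpt path as in the error above:

    import tensorflow as tf

    # Print every tensor name stored in the checkpoint so it can be compared
    # with the variables the graph expects (e.g. the missing ".../biases").
    reader = tf.train.NewCheckpointReader("inception_v4.ckpt")
    for name in sorted(reader.get_variable_to_shape_map()):
        print(name)

For the released Inception v4 checkpoint this lists per-layer BatchNorm variables rather than .../biases entries, which is consistent with the NotFoundError above.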
Simply applying the arg_scope solves the issue.
@Hugh0120 a code snippet with the example would be very useful, could you please share?
You may find it in issue 1030: https://github.com/tensorflow/models/issues/1030
    import tensorflow as tf
    import tensorflow.contrib.slim as slim
    from nets.inception_v4 import inception_v4, inception_v4_arg_scope  # TF-Slim models repo

    checkpoint_file = "inception_v4.ckpt"
    input_tensor = tf.placeholder(tf.float32, (None, 299, 299, 3))
    # Build the graph inside the arg_scope so the variable names match the checkpoint.
    with slim.arg_scope(inception_v4_arg_scope()):
        logits, end_points = inception_v4(input_tensor, is_training=False)
    saver = tf.train.Saver()
    sess = tf.Session()
    saver.restore(sess, checkpoint_file)
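The arg_scope matters because inception_v4_arg_scope configures the conv layers with batch normalization and therefore without bias variables, so the graph's variable names line up with what the checkpoint contains; without it, slim creates the extra .../biases variables that the NotFoundError above complains about. Once restored, the model can be run as usual; a minimal sketch with a dummy input (the zero image is only for illustration):

    import numpy as np

    # Sanity-check the restore by running the default 1001-class head on a dummy batch.
    dummy_batch = np.zeros((1, 299, 299, 3), dtype=np.float32)
    preds = sess.run(end_points['Predictions'], feed_dict={input_tensor: dummy_batch})
    print(preds.shape)  # (1, 1001)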