Models: Error loading pre-trained inception v4 from checkpoint

Created on 13 Jun 2017 · 7 comments · Source: tensorflow/models

System information

  • What is the top-level directory of the model you are using: slim/nets
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): MacOS Sierra
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 1.1.0
  • Bazel version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:
  • Exact command to reproduce:

Describe the problem

Attempting to load the pre-trained inception v4 model from a checkpoint fails.

Source code / logs

Code like:

import tensorflow as tf
from nets import inception_v4

batch_size = 5
height, width = 299, 299
inputs = tf.random_uniform((batch_size, height, width, 3))
inc = inception_v4.inception_v4_base(inputs)
saver = tf.train.Saver()
with tf.Session() as sess:
    # Restore variables from disk.
    saver.restore(sess, "inception_v4.ckpt")
    print("Model restored.")
fails with:
NotFoundError (see above for traceback): Tensor name "InceptionV4/Mixed_7d/Branch_2/Conv2d_0c_1x3/biases" not found in checkpoint files /Users/tdurakov/PycharmProjects/sandbox/inception_v4.ckpt
[[Node: save/RestoreV2_290 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_290/tensor_names, save/RestoreV2_290/shape_and_slices)]]
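A plausible cause: building the network with inception_v4_base but without inception_v4_arg_scope means slim.conv2d creates layers with their default plain biases and no batch norm, while the published checkpoint was trained under the arg_scope (batch norm as normalizer, no biases). The Saver then asks for variables the checkpoint never stored. A minimal sketch of the mismatch, using two hypothetical variable-name sets modeled on the error above:

```python
# Hypothetical variable names illustrating the restore failure.
# With inception_v4_arg_scope, conv layers store BatchNorm/* variables
# and no biases in the checkpoint.
checkpoint_vars = {
    "InceptionV4/Mixed_7d/Branch_2/Conv2d_0c_1x3/weights",
    "InceptionV4/Mixed_7d/Branch_2/Conv2d_0c_1x3/BatchNorm/beta",
}
# Without the arg_scope, the graph creates plain biases instead.
graph_vars = {
    "InceptionV4/Mixed_7d/Branch_2/Conv2d_0c_1x3/weights",
    "InceptionV4/Mixed_7d/Branch_2/Conv2d_0c_1x3/biases",
}

# Variables the Saver requests that the checkpoint cannot supply:
# these trigger the NotFoundError.
missing_in_checkpoint = sorted(graph_vars - checkpoint_vars)
print(missing_in_checkpoint)
```

Applying the arg_scope when building the graph (as in the accepted fix below in the thread) makes the graph's variable names match what the checkpoint contains.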

awaiting model gardener

Most helpful comment

sess = tf.Session()
arg_scope = inception_v4_arg_scope()
input_tensor = tf.placeholder(tf.float32, (None, 299, 299, 3))
with slim.arg_scope(arg_scope):
    logits, end_points = inception_v4(input_tensor, is_training=False)
saver = tf.train.Saver()
saver.restore(sess, checkpoint_file)

All 7 comments

I ran into the same issue when restoring inception_res_v2.ckpt.

@sguada could you take a look?

Unfortunately, one possibility is that the checkpoints simply aren't compatible with the code anymore after the updates for TF 1.0+.

Simply applying the arg_scope solves the issue.

@Hugh0120 a code snippet with an example would be very useful; could you please share one?

You may find it in issue 1030: https://github.com/tensorflow/models/issues/1030

sess = tf.Session()
arg_scope = inception_v4_arg_scope()
input_tensor = tf.placeholder(tf.float32, (None, 299, 299, 3))
with slim.arg_scope(arg_scope):
    logits, end_points = inception_v4(input_tensor, is_training=False)
saver = tf.train.Saver()
saver.restore(sess, checkpoint_file)
