Models: freeze_graph script error : google.protobuf.text_format.ParseError

Created on 22 Jun 2017 · 12 comments · Source: tensorflow/models

I am trying to run the freeze_graph script on my own .pb and .ckpt files. However, I get this error:

google.protobuf.text_format.ParseError: 2:1 : Message type "tensorflow.GraphDef" has no field named "j".

The stack trace is as follows:

Traceback (most recent call last):
  File "/home/gabbar/ML/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 255, in <module>
    app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/home/gabbar/ML/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/home/gabbar/ML/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 187, in main
    FLAGS.variable_names_blacklist)
  File "/home/gabbar/ML/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 165, in freeze_graph
    input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
  File "/home/gabbar/ML/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 134, in _parse_input_graph_proto
    text_format.Merge(f.read(), input_graph_def)
  File "/home/gabbar/.cache/bazel/_bazel_gabbar/3ef5463937ccade414be63dae84521e3/execroot/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf/python/google/protobuf/text_format.py", line 481, in Merge
    descriptor_pool=descriptor_pool)
  File "/home/gabbar/.cache/bazel/_bazel_gabbar/3ef5463937ccade414be63dae84521e3/execroot/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf/python/google/protobuf/text_format.py", line 535, in MergeLines
    return parser.MergeLines(lines, message)
  File "/home/gabbar/.cache/bazel/_bazel_gabbar/3ef5463937ccade414be63dae84521e3/execroot/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf/python/google/protobuf/text_format.py", line 568, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/home/gabbar/.cache/bazel/_bazel_gabbar/3ef5463937ccade414be63dae84521e3/execroot/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf/python/google/protobuf/text_format.py", line 583, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/home/gabbar/.cache/bazel/_bazel_gabbar/3ef5463937ccade414be63dae84521e3/execroot/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf/python/google/protobuf/text_format.py", line 652, in _MergeField
    (message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 2:1 : Message type "tensorflow.GraphDef" has no field named "j".
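This ParseError typically means a binary .pb file was fed to protobuf's text parser (freeze_graph takes the text path unless input_binary is set). A minimal sketch of the mismatch, using `google.protobuf.struct_pb2.Struct` as a stand-in for `tensorflow.GraphDef` (an assumed substitute; the real tool parses a GraphDef):

```python
from google.protobuf import text_format
from google.protobuf.struct_pb2 import Struct  # stand-in for tensorflow.GraphDef

msg = Struct()
msg.fields["name"].string_value = "input"

text_data = text_format.MessageToString(msg)  # what a .pbtxt contains
binary_data = msg.SerializeToString()         # what a binary .pb contains

# The text parser handles text-format data fine...
text_format.Merge(text_data, Struct())

# ...but feeding it binary bytes fails with this kind of ParseError.
try:
    text_format.Merge(binary_data.decode("latin-1"), Struct())
except text_format.ParseError as err:
    print("ParseError:", err)
```

Passing `--input_binary=true` (or calling `ParseFromString` on the raw bytes) selects the binary parser instead.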

Most helpful comment

Yeah, just use `input_binary=True`.

All 12 comments

Did you find a solution? I am having the same trouble.

Yeah, just use `input_binary=True`.

Hi!
Do you have more details about your solution? I'm having the same problem trying to train the pet detector. Here's my stack trace:

Traceback (most recent call last):
  File "object_detection/train.py", line 201, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "object_detection/train.py", line 146, in main
    model_config, train_config, input_config = get_configs_from_pipeline_file()
  File "object_detection/train.py", line 106, in get_configs_from_pipeline_file
    text_format.Merge(a, pipeline_config)
  File "/usr/local/lib/python3.5/dist-packages/google/protobuf/text_format.py", line 481, in Merge
    descriptor_pool=descriptor_pool)
  File "/usr/local/lib/python3.5/dist-packages/google/protobuf/text_format.py", line 535, in MergeLines
    return parser.MergeLines(lines, message)
  File "/usr/local/lib/python3.5/dist-packages/google/protobuf/text_format.py", line 568, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/usr/local/lib/python3.5/dist-packages/google/protobuf/text_format.py", line 583, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/usr/local/lib/python3.5/dist-packages/google/protobuf/text_format.py", line 657, in _MergeField
    (message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 1:1 : Message type "object_detection.protos.TrainEvalPipelineConfig" has no field named "syntax".

Thanks in advance!
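For what it's worth, a `has no field named "syntax"` error at position 1:1 suggests the file passed as the pipeline config begins with `syntax = "proto3";`, i.e. it is a .proto schema file rather than a text-format config. The same failure can be sketched with `google.protobuf.struct_pb2.Struct` standing in for `object_detection.protos.TrainEvalPipelineConfig` (an assumed substitute, since the real message lives in the object_detection protos):

```python
from google.protobuf import text_format
from google.protobuf.struct_pb2 import Struct  # stand-in message type

# The first line of a .proto schema file, not of a text-format config:
proto_source = 'syntax = "proto3";\n'

try:
    text_format.Merge(proto_source, Struct())
except text_format.ParseError as err:
    print(err)  # ... has no field named "syntax".
```

If that is the cause here, point `--pipeline_config_path` at the .config file, not at a .proto file.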

PS: this has changed, so it should now be `input_binary=True`.

Hello @kiariendegwa, when I tried to change to `input_binary=True` as you mentioned, I got this error:
TypeError: names_to_saveables must be a dict mapping string names to Tensors/Variables. Not a variable: Tensor("BoxPredictor_0/BoxEncodingPredictor/biases:0", shape=(12,), dtype=float32).

I used export_inference_graph.py first and then ran freeze_graph.py on its output, because I want to use the model from the TensorFlow C++ API or the OpenCV 3.3 DNN module in C++.
Thanks in advance.

Hello @kerolos, I'm having the same problem. I would like to convert an ssd_mobilenet_v2 TensorFlow model to tflite.
Command line:
bazel-bin/tensorflow/python/tools/freeze_graph --input_graph=frozen_inference_graph.pb --input_checkpoint=model.ckpt --output_graph=frozen_graph.pb --output_node_names=detection_boxes,detection_scores,detection_classes,num_detections --input_binary=True

TypeError: names_to_saveables must be a dict mapping string names to Tensors/Variables. Not a variable: Tensor("BoxPredictor_0/BoxEncodingPredictor/biases:0", shape=(12,), dtype=float32)

Did you fix it? Thanks.

Hey @MengAjin did you find any solution for this error?

@harsh-agar
In here, Allen Lavoie said "The graph is already frozen, so there are no variables in the graph to replace. You'll need to start with the non-frozen graph."

It seems like the .pb file is already a frozen graphdef.

My frozen_inference_graph.pb is generated by export_inference_graph.py.

@MengAjin
But as mentioned in the script freeze_graph.py

This script is designed to take a GraphDef proto, a SaverDef proto, and a set of
variable values stored in a checkpoint file, and output a GraphDef with all of
the variable ops converted into const ops containing the values of the
variables.
It's useful to do this when we need to load a single file in C++, especially in
environments like mobile or embedded where we may not have access to the
RestoreTensor ops and file loading calls that they rely on.
An example of command-line usage is:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=some_graph_def.pb \
--input_checkpoint=model.ckpt-8361242 \
--output_graph=/tmp/frozen_graph.pb --output_node_names=softmax

it seems we need to give a frozen graph (.pb file) as input_graph.

Hello, @harsh-agar. I was also confused at the beginning.
Before I ran export_inference_graph.py, my model was saved in a directory containing:

  • checkpoint
  • graph.pbtxt
  • model.ckpt-xxx.data-00000-of-00001
  • model.ckpt-xxx.index
  • model.ckpt-xxx.meta
  • pipeline.config
  • events.out.xxxx

I tried to freeze graph.pbtxt and the error did not occur.
So I guess the purpose of export_inference_graph.py is to freeze the graph.

This morning I successfully ran the model on mobile.

PS: I have only been working with TensorFlow Lite for two weeks, so please forgive any mistakes.

Can anyone please help me with this error?
doe@doe:~/anaconda3/envs/tensorflow/models/research/object_detection$ python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_coco.config
WARNING:tensorflow:From /home/doe/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py:124: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
Traceback (most recent call last):
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1500, in _ParseAbstractInteger
return int(text, 0)
ValueError: invalid literal for int() with base 0: '03'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1449, in _ConsumeInteger
result = ParseInteger(tokenizer.token, is_signed=is_signed, is_long=is_long)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1471, in ParseInteger
result = _ParseAbstractInteger(text, is_long=is_long)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1502, in _ParseAbstractInteger
raise ValueError('Couldn\'t parse integer: %s' % text)
ValueError: Couldn't parse integer: 03

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 184, in
tf.app.run()
File "/home/doe/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "/home/doe/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 136, in new_func
return func(args, *kwargs)
File "train.py", line 93, in main
FLAGS.pipeline_config_path)
File "/home/doe/anaconda3/envs/tensorflow/models/research/object_detection/utils/config_util.py", line 94, in get_configs_from_pipeline_file
text_format.Merge(proto_str, pipeline_config)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 536, in Merge
descriptor_pool=descriptor_pool)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 590, in MergeLines
return parser.MergeLines(lines, message)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 623, in MergeLines
self._ParseOrMerge(lines, message)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 638, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 763, in _MergeField
merger(tokenizer, message, field)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 837, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 763, in _MergeField
merger(tokenizer, message, field)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 837, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 763, in _MergeField
merger(tokenizer, message, field)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 871, in _MergeScalarField
value = _ConsumeInt32(tokenizer)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1362, in _ConsumeInt32
return _ConsumeInteger(tokenizer, is_signed=True, is_long=False)
File "/home/doe/.local/lib/python3.6/site-packages/google/protobuf/text_format.py", line 1451, in _ConsumeInteger
raise tokenizer.ParseError(str(e))
google.protobuf.text_format.ParseError: 9:18 : Couldn't parse integer: 03
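The `Couldn't parse integer: 03` failure comes from protobuf's text parser converting integer tokens with Python's `int(text, 0)`, where a bare leading zero is not a valid literal; the fix is to write the value in the .config without leading zeros (e.g. `3` instead of `03`). A quick stdlib-only illustration:

```python
# protobuf's text parser does int(token, 0); base 0 accepts "0x", "0o",
# and "0b" prefixes but rejects a bare leading zero like "03".
print(int("3", 0))   # -> 3: write the value without a leading zero
try:
    int("03", 0)
except ValueError as err:
    print(err)       # invalid literal for int() with base 0: '03'
```

Per the `9:18` position in the error, the offending value is on line 9 of the pipeline config.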

When I run eval.py I hit this problem:
google.protobuf.text_format.ParseError: 1:1 : Message type "object_detection.protos.TrainEvalPipelineConfig" has no field named "node".
Could you give me some advice?
