What is the top-level directory of the model you are using:
object_detection
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
Yes. I followed this tutorial
https://pythonprogramming.net/video-tensorflow-object-detection-api-tutorial/
to write a script that processes a video with the loaded pre-trained model.
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
Linux Ubuntu 14.04
TensorFlow installed from (source or binary):
binary
TensorFlow version (use command below):
1.4.1
Bazel version (if compiling from source):
N/A (installed from binary)
CUDA/cuDNN version:
CUDA: release 8.0, V8.0.61
cuDNN: CUDNN_MAJOR 7
GPU model and memory:
GeForce GTX 1050 Ti, 4 GB memory
Exact command to reproduce:
Steps:
1. tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: — load the frozen_inference_graph.pb file.
2. sess = tf.Session(graph=detection_graph) — initialize the session.
3. sess.run(...) — this is where I get the error messages.
Source code: please refer to the tutorial link above.
Error:
error.txt
Thank you.
Could you try with tensorflow 1.5? This looks like a version compatibility issue.
If that doesn't work, you may also want to re-export the frozen graph on your machine (models/research/object_detection/export_inference_graph.py) and test again.
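For reference, re-exporting with the Object Detection API's export script looks roughly like this. All paths here are placeholders, and the command should be run from models/research with the same TensorFlow version you will load the graph with:

```shell
# Placeholder paths -- substitute your own pipeline config, checkpoint
# prefix, and output directory. Run from models/research.
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported_graph
```

The re-exported path/to/exported_graph/frozen_inference_graph.pb will then only contain ops and attrs that the exporting TensorFlow version knows about.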
Hello @pkulzc , thank you for the quick response. I will upgrade my Ubuntu from 14.04 to 16.04 and try with tensorflow 1.5 ASAP.
Hello @pkulzc,
this issue is fixed in TensorFlow 1.8.0.
Thanks.
I'm pretty sure this is the same issue I'm having, but I'm a bit stuck. I exported the model on my local PC, which is running 1.8.0, and I am running tensorflow_model_server on a cloud Ubuntu machine in Docker, which is also running 1.8.0. My exact output when calling through my client script is:
root@docker-s-2vcpu-4gb-lon1-01:~# python client.py --server=172.17.0.2:3000 --image=/root/car.jpg
Traceback (most recent call last):
File "client.py", line 32, in <module>
result = stub.Predict(request, 10.0) # 10 secs timeout
File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 309, in __call__
self._request_serializer, self._response_deserializer)
File "/usr/local/lib/python2.7/dist-packages/grpc/beta/_client_adaptations.py", line 195, in _blocking_unary_unary
raise _abortion_error(rpc_error_call)
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="NodeDef mentions attr 'index_type' not in Op<name=Fill; signature=dims:int32, value:T -> output:T; attr=T:type>; NodeDef: GridAnchorGenerator/Meshgrid_3/ExpandedShape/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](SecondStagePostprocessor/Slice/begin, SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayUnstack_1/range/delta). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: GridAnchorGenerator/Meshgrid_3/ExpandedShape/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](SecondStagePostprocessor/Slice/begin, SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayUnstack_1/range/delta)]]")
Can anyone tell me whether this is the same issue or whether I should open a new one?
Thanks.
Luke
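For completeness: errors of the form "NodeDef mentions attr 'X' not in Op" mean the graph was serialized by a newer TensorFlow than the binary interpreting it. Besides matching versions (or re-exporting, as suggested above), one workaround is to delete the unknown attribute from every affected node in the GraphDef before serving. Here is a minimal sketch of the idea, modeling NodeDefs as plain dicts so it runs without TensorFlow installed; with the real proto you would iterate graph_def.node and use del node.attr['index_type'] instead:

```python
def strip_unknown_attr(nodes, op_name, attr_name):
    """Remove `attr_name` from every node whose op is `op_name`.

    `nodes` stands in for GraphDef.node: a list of dicts with
    'op' and 'attr' keys, mirroring the NodeDef proto fields.
    """
    for node in nodes:
        if node["op"] == op_name and attr_name in node["attr"]:
            del node["attr"][attr_name]
    return nodes

# A stand-in for the failing Fill node from the traceback above:
# a 1.8.0 export adds 'index_type', which an older Op registry rejects.
graph_nodes = [
    {"op": "Fill", "attr": {"T": "DT_INT32", "index_type": "DT_INT32"}},
    {"op": "Const", "attr": {"dtype": "DT_INT32"}},
]
strip_unknown_attr(graph_nodes, "Fill", "index_type")
```

This only helps when the extra attribute is optional for the older op; if the newer graph relies on genuinely new op behavior, re-exporting with a matching version is the safe fix.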
Hi,
I have taken a pretrained model from google/automl/efficientdet
and converted the checkpoint to a frozen_graph.pb.
On running this piece of code,
graph_pb = './savedmodeldir/efficientdet-d0_frozen.pb'
inp = ['image']
out = ['detections']
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_pb, inp, out, input_shapes={"image": [1, 512, 512, 3]})
I get this error:
batch_mean:U, batch_variance:U, reserve_space_1:U, reserve_space_2:U, reserve_space_3:U; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT]; attr=U:type,allowed=[DT_FLOAT]; attr=epsilon:float,default=0.0001; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=is_training:bool,default=true>; NodeDef: {{node efficientnet-b0/stem/tpu_batch_normalization/FusedBatchNormV3}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
Hi,
For those googling and receiving similar errors when using keras.load_model(model) or keras.load_model('model.h5'):
For me it was a versioning issue. The versions of Keras and TensorFlow that I was loading the models with were earlier than the versions I had used to train them. Uninstalling and reinstalling solved it for me.
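Since every report in this thread boils down to a producer/consumer version mismatch, a quick sanity check before loading a model is to compare the TensorFlow (or Keras) version that saved it against the one loading it. A small stdlib-only helper along these lines (the function names are made up for illustration):

```python
def parse_version(version):
    """Turn a version string like '1.8.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:3])

def loader_is_compatible(saved_with, loading_with):
    """A graph saved by a newer TF may use ops/attrs the loader lacks,
    so the loading version should be at least the saving version."""
    return parse_version(loading_with) >= parse_version(saved_with)

# e.g. a graph exported with 1.8.0 is risky to load under 1.4.1,
# while loading an older graph with a newer runtime is generally fine.
```

In practice you would pass tf.__version__ on each side; for Keras .h5 files, the saving version is also recorded in the file's 'keras_version' attribute.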