Running this code in inspect_model.ipynb:
_ = plt.imshow(activations["input_image"][0])
produces this traceback:
~/Tools/tensorflow_python3/lib/python3.5/site-packages/matplotlib/cm.py in to_rgba(self, x, alpha, bytes, norm)
255 if xx.dtype.kind == 'f':
256 if norm and xx.max() > 1 or xx.min() < 0:
--> 257 raise ValueError("Floating point image RGB values "
258 "must be in the 0..1 range.")
259 if bytes:
ValueError: Floating point image RGB values must be in the 0..1 range.
I checked the values and found:
input_image shape: (1, 1024, 1024, 3) min: -123.70000 max: 148.10001
So I modified the code:
input_image_a = activations["input_image"][0] + config.MEAN_PIXEL  # add back the subtracted mean pixel
input_image_a = input_image_a.astype(int)
input_image_a = input_image_a / 255  # scale to the 0..1 float range imshow expects
log("input_image_a:", input_image_a)
_ = plt.imshow(input_image_a)
Now the image displays, but it differs from the one shown in your inspect_model.ipynb.
Mine is just the original input image after resizing and padding,
which I don't think is what you intended to show.
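The un-normalization above can be sketched as a self-contained example. The mean-pixel values below are an assumption matching Mask R-CNN's usual default for config.MEAN_PIXEL, and a clipping step is added so rounding noise can't push values outside the 0..1 range that imshow requires for floats:

```python
import numpy as np

# Assumed default mean pixel from Mask R-CNN's config (RGB order).
MEAN_PIXEL = np.array([123.7, 116.8, 103.9])

def denormalize_for_imshow(molded_image, mean_pixel=MEAN_PIXEL):
    """Add the mean pixel back and scale to 0..1 floats for plt.imshow."""
    img = molded_image.astype(np.float32) + mean_pixel
    # Clip before scaling so float rounding can't leave the 0..1 range.
    return np.clip(img, 0.0, 255.0) / 255.0

# Example: a fake molded image with a value range like the one in the question.
molded = np.random.uniform(-123.7, 148.1, size=(1024, 1024, 3)).astype(np.float32)
displayable = denormalize_for_imshow(molded)
```

Passing `displayable` to `plt.imshow` then works on any matplotlib version, since all values are guaranteed to be valid float RGB.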
Which version of matplotlib are you using? Mine is 2.0.2 and it doesn't complain about requiring float values to be 0..1.
You can check with matplotlib.__version__.
Also, you can use unmold_image() instead; it converts the image back from the form the network expects to the original form (i.e. it reverses the effect of mold_image()).
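For reference, a minimal sketch of what the mold_image()/unmold_image() pair does. The subtraction of config.MEAN_PIXEL and the uint8 cast are assumptions about these helpers' behavior, not a copy of the repository's code:

```python
import numpy as np

# Assumed default config.MEAN_PIXEL (RGB order).
MEAN_PIXEL = np.array([123.7, 116.8, 103.9])

def mold_image(images, mean_pixel=MEAN_PIXEL):
    # Zero-center the input the way the network expects.
    return images.astype(np.float32) - mean_pixel

def unmold_image(normalized_images, mean_pixel=MEAN_PIXEL):
    # Reverse mold_image(): add the mean back and return a displayable uint8 image.
    return (normalized_images + mean_pixel).astype(np.uint8)

# Round trip: molding then unmolding recovers the original image
# (up to one count of float rounding before the uint8 cast).
original = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
recovered = unmold_image(mold_image(original))
```

Because the result is uint8 in 0..255, plt.imshow accepts it directly with no manual scaling.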
Thanks!
My matplotlib version is 2.1.0.
I uninstalled it and installed matplotlib 2.0.2, and the error is gone.
The unmold_image() function is useful.
Could I create a pull request that changes
activations["input_image"][0]
to
modellib.unmold_image(activations["input_image"][0],config)
?
Yes, please. Thank you.