Hello, sorry to bother you.
I want to use VGG16, but when I run
>>> from keras.applications.vgg16 import VGG16
and then the following command, it shows an error:
>>> model = VGG16(weights='imagenet', include_top=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ztgong/local/anaconda2/lib/python2.7/site-packages/keras/applications/vgg16.py", line 169, in VGG16
model.load_weights(weights_path)
File "/home/ztgong/local/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 2494, in load_weights
f = h5py.File(filepath, mode='r')
File "/home/ztgong/local/anaconda2/lib/python2.7/site-packages/h5py/_hl/files.py", line 271, in __init__
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/ztgong/local/anaconda2/lib/python2.7/site-packages/h5py/_hl/files.py", line 101, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2840)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-nCYoKW-build/h5py/_objects.c:2798)
File "h5py/h5f.pyx", line 78, in h5py.h5f.open (/tmp/pip-nCYoKW-build/h5py/h5f.c:2117)
IOError: Unable to open file (Truncated file: eof = 21463040, sblock->base_addr = 0, stored_eoa = 58889256)
How can I solve this problem?
Thank you in advance!
I ran into this problem because I had accidentally terminated the download of the keras model data file before it was done, making it unreadable by h5py. I went into .keras/models/
and deleted the data file so that keras would re-download it.
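In case it saves anyone some typing, here is a minimal sketch of that cleanup, assuming the default cache location ~/.keras/models and the standard VGG16 weight file names (adjust the path and pattern if your cache lives elsewhere):

```python
import glob
import os

# Default Keras weight cache; change this if your cache lives elsewhere.
models_dir = os.path.expanduser("~/.keras/models")

# Delete any cached VGG16 weight files; Keras re-downloads them on the next run.
for path in glob.glob(os.path.join(models_dir, "vgg16_weights_*.h5")):
    print("Deleting", path)
    os.remove(path)
```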
@adamheins my god. you are right! thank you very much!
Hello, I ran into the same problem, but I cannot find the models folder under .keras or where the broken model is located. I am using anaconda2 under Linux. Could anyone help?
@kli017 Go into .keras/models/.
@kli017
I finally fixed this problem: .keras/models is usually at ~/.keras/models.
see this code
I found it, thank you guys!
Going into .keras/models/ and deleting the data file so that Keras would re-download it worked for me. Thanks @adamheins
Hello, I could not find the /models/ folder under ~/.keras; there is only a /datasets/ folder. I am using Keras with the TensorFlow backend.
Could anyone help?
@toodlesliyy If you're on a Mac, try using the "Go to folder" from the "Go" menu in finder and just copy paste "~/.keras/models" and see if that works. It's typically a hidden folder at the root of your computer, where folders like "Documents" or "Downloads" live.
Thank you so much!!!!!!
On Ubuntu 16.04 there is NO models folder under .keras; there is only one file inside .keras, which is keras.json.
@Abduoit How do we solve it if we can't find "models" under ~/.keras/?
@Alvin1994 I remember the issue was still there even when I created a models folder, but try creating the folder manually and see.
This issue seems to still be causing some confusion, so I created this repo to try to help reproduce and debug the error.
I'm confused by the comments that say there isn't any models folder (at least on Ubuntu 16.04, which is what I'm currently using), because when I run the code to create a VGG16 model in Keras, it creates ~/.keras/models. I would suggest trying out the steps in the repository I linked and seeing how that goes, so we can at least be on the same page for further debugging.
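For reference, the repro I have in mind is roughly the following sketch (not the exact contents of the linked repo): load VGG16 once, then check that ~/.keras/models exists and contains the downloaded weights.

```python
import os
from keras.applications.vgg16 import VGG16

# The first call triggers the weight download into the Keras cache.
model = VGG16(weights='imagenet', include_top=False)

models_dir = os.path.expanduser("~/.keras/models")
print("Cache dir exists:", os.path.isdir(models_dir))
print("Cached files:", os.listdir(models_dir))
```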
@adamheins Big thanks for your help, everything works well now (Ubuntu 16.04). The reason I didn't have the "models" folder is simply that I had not downloaded anything yet.
PLEASE, I need your HELP:
Hello there,
I'm still having the same problem. Please, any help???
I have Windows 10.
Also, I don't have a models folder under the Keras folder.
There is an applications folder!
This is the whole message I got:
OSError Traceback (most recent call last)
----> 1 vgg16_model = keras.applications.vgg16.VGG16()
E:\Programs\Anaconda\envs\ztdl\lib\site-packages\keras\applications\vgg16.py in VGG16(include_top, weights, input_tensor, input_shape, pooling, classes)
167 WEIGHTS_PATH_NO_TOP,
168 cache_subdir='models')
--> 169 model.load_weights(weights_path)
170 if K.backend() == 'theano':
171 layer_utils.convert_all_kernels_in_model(model)
E:\Programs\Anaconda\envs\ztdl\lib\site-packages\keras\engine\topology.py in load_weights(self, filepath, by_name)
2530 if h5py is None:
2531 raise ImportError('`load_weights` requires h5py.')
-> 2532 f = h5py.File(filepath, mode='r')
2533 if 'layer_names' not in f.attrs and 'model_weights' in f:
2534 f = f['model_weights']
E:\Programs\Anaconda\envs\ztdl\lib\site-packages\h5py\_hl\files.py in __init__(self, name, mode, driver, libver, userblock_size, swmr, **kwds)
269
270 fapl = make_fapl(driver, libver, **kwds)
--> 271 fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
272
273 if swmr_support:
E:\Programs\Anaconda\envs\ztdl\lib\site-packages\h5py\_hl\files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
99 if swmr and swmr_support:
100 flags |= h5f.ACC_SWMR_READ
--> 101 fid = h5f.open(name, flags, fapl=fapl)
102 elif mode == 'r+':
103 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\_objects.pyx in h5py._objects.with_phil.wrapper()
h5py\h5f.pyx in h5py.h5f.open()
OSError: Unable to open file (Truncated file: eof = 238592000, sblock->base_addr = 0, stored_eof = 553467096)
I just solved my problem on Windows 10:
Go to this directory:
C:\Users\YOURNAME\.keras\models
Find the file vgg16_weights_tf_dim_ordering_tf_kernels.h5 and delete it, then rerun the Python code.
It works fine for me :)
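If you want to check whether the cached file is actually the problem before deleting it, here is a small sketch (the path is just an example; substitute your own user name): it tries to open the weights with h5py and removes the file only if that fails.

```python
import os
import h5py

# Example path on Windows; substitute your own user name.
weights = r"C:\Users\YOURNAME\.keras\models\vgg16_weights_tf_dim_ordering_tf_kernels.h5"

try:
    with h5py.File(weights, mode="r"):
        print("Weights file opens fine; the problem is elsewhere.")
except (IOError, OSError) as err:
    print("Corrupt or truncated download:", err)
    os.remove(weights)  # Keras re-downloads the weights on the next VGG16() call
```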
@Waleed911 From your response, I think the problem is that the ".h5" file wasn't downloaded properly.
I was running Mask_RCNN to train a model on Ubuntu 16.04 and got an error like this:
Using TensorFlow backend.
Configurations:
BACKBONE resnet101
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 1
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.7
DETECTION_NMS_THRESHOLD 0.3
GPU_COUNT 1
GRADIENT_CLIP_NORM 5.0
IMAGES_PER_GPU 1
IMAGE_MAX_DIM 4224
IMAGE_META_SIZE 15
IMAGE_MIN_DIM 3200
IMAGE_MIN_SCALE 0
IMAGE_RESIZE_MODE square
IMAGE_SHAPE [4224 4224 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.001
LOSS_WEIGHTS {'mrcnn_mask_loss': 1.0, 'rpn_class_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'rpn_bbox_loss': 1.0}
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 100
MEAN_PIXEL [123.7 116.8 103.9]
MINI_MASK_SHAPE (56, 56)
NAME shapes
NUM_CLASSES 3
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (48, 96, 192, 384, 768)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 100
TRAIN_BN False
TRAIN_ROIS_PER_IMAGE 32
USE_MINI_MASK True
USE_RPN_ROIS True
VALIDATION_STEPS 5
WEIGHT_DECAY 0.0001
Traceback (most recent call last):
File "/root/.pycharm_helpers/pydev/pydevd.py", line 1668, in
main()
File "/root/.pycharm_helpers/pydev/pydevd.py", line 1662, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/root/.pycharm_helpers/pydev/pydevd.py", line 1072, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/root/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/usr/mask-rcnn/Mask_RCNN/samples/oilpressreading/oilpress.py", line 218, in
"mrcnn_bbox", "mrcnn_mask"])
File "/usr/local/lib/python3.5/dist-packages/mask_rcnn-2.1-py3.5.egg/mrcnn/model.py", line 2085, in load_weights
File "/usr/local/lib/python3.5/dist-packages/h5py/_hl/files.py", line 269, in __init__
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/usr/local/lib/python3.5/dist-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
============
How can I solve it? Thanks!
I had this error when I downloaded a pre-trained network model. The problem ended up being that the file containing the model had a .hdf5.part extension, and it sufficed to simply rename it to .hdf5 (without the .part). I hope that helps.
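In case it helps someone hitting the same thing, a minimal sketch of that rename (the file name here is just an example):

```python
import os

# Example name: a finished download that kept the browser's ".part" suffix.
src = "vgg16_weights.hdf5.part"
dst = src[: -len(".part")]

# Only worth doing if the download actually finished; a truncated file
# will still fail in h5py ("file signature not found" / "truncated file").
os.rename(src, dst)
```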
python train.py
Using TensorFlow backend.
train_file model/vgg16_no_top.h5
model_weights train_pair.txt
0 input_1
1 block1_conv1
2 block1_conv2
3 block1_pool
4 dropout_1
5 block2_conv1
6 block2_conv2
7 block2_pool
8 dropout_2
9 block3_conv1
10 block3_conv2
11 block3_conv3
12 block3_pool
13 dropout_3
14 block4_conv1
15 block4_conv2
16 block4_conv3
17 block4_pool
18 dropout_4
19 block5_conv1
20 block5_conv2
21 block5_conv3
22 dropout_5
23 C4_cfe_cfe0
24 C4_cfe_cfe1_dilation
25 C4_cfe_cfe2_dilation
26 C4_cfe_cfe3_dilation
27 C5_cfe_cfe0
28 C5_cfe_cfe1_dilation
29 C5_cfe_cfe2_dilation
30 C5_cfe_cfe3_dilation
31 C3_cfe_cfe0
32 C3_cfe_cfe1_dilation
33 C3_cfe_cfe2_dilation
34 C3_cfe_cfe3_dilation
35 C4_cfeconcatcfe
36 C5_cfeconcatcfe
37 C3_cfeconcatcfe
38 C4_cfe_BN
39 C5_cfe_BN
40 C3_cfe_BN
41 C4_cfe_relu
42 C5_cfe_relu
43 C3_cfe_relu
44 C4_cfe_up2
45 C5_cfe_up4
46 C345_aspp_concat
47 C345_ChannelWiseAttention_withcpfe_GlobalAveragePooling2D
48 dense_1
49 dense_2
50 C345_ChannelWiseAttention_withcpfe_reshape
51 C345_ChannelWiseAttention_withcpfe_repeat
52 C345_ChannelWiseAttention_withcpfe_multiply
53 C345_conv
54 C345_BN
55 C345_relu
56 C345_up4
57 spatial_attention_1_conv1
58 spatial_attention_2_conv1
59 attention1_1_BN
60 attention2_1_BN
61 C2_conv
62 attention1_1_relu
63 attention2_1_relu
64 C1_conv
65 C2_BN_BN
66 spatial_attention_1_conv2
67 spatial_attention_2_conv2
68 C1_BN_BN
69 C2_BN_relu
70 attention1_2_BN
71 attention2_2_BN
72 C1_BN_relu
73 C2_up2
74 attention1_2_relu
75 attention2_2_relu
76 C12_concat
77 spatial_attention_add
78 C12_conv
79 activation_1
80 C12_BN
81 repeat_1
82 C12_relu
83 C12_atten_mutiply
84 fuse_concat
85 sa
Traceback (most recent call last):
File "train.py", line 72, in
model.load_weights(model_name,by_name=True)
File "/home/lthpc/anaconda3/envs/py27/lib/python2.7/site-packages/keras/engine/topology.py", line 2566, in load_weights
f = h5py.File(filepath, mode='r')
File "/home/lthpc/anaconda3/envs/py27/lib/python2.7/site-packages/h5py/_hl/files.py", line 271, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/lthpc/anaconda3/envs/py27/lib/python2.7/site-packages/h5py/_hl/files.py", line 101, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1490028130695/work/h5py/_objects.c:2846)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1490028130695/work/h5py/_objects.c:2804)
File "h5py/h5f.pyx", line 78, in h5py.h5f.open (/home/ilan/minonda/conda-bld/h5py_1490028130695/work/h5py/h5f.c:2123)
IOError: Unable to open file (File signature not found)
What did you do about it?
For Windows users: I found it in the SPB_DATA folder, which is the ~ path of Git Bash. Just run Git Bash and cd to .keras.
(raghu) dioxe@dioxe-Inspiron-3542:~/project/SmartFit-master$ ./run_smartfit.sh inputs/example_person.jpg inputs/example_clothing.jpg
Running human parsing...
WARNING:tensorflow:From evaluate_parsing_JPPNet-s2.py:116: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
Restored model parameters from model.ckpt-205632
[*] Load SUCCESS
step 0
./datasets/examples/images/example_person.jpg
Running pose estimation...
Using TensorFlow backend.
Traceback (most recent call last):
File "./extract_keypoints.py", line 231, in
model.load_weights(keras_weights_file)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/keras/engine/topology.py", line 2658, in load_weights
with h5py.File(filepath, mode='r') as f:
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/h5py/_hl/files.py", line 312, in __init__
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/dioxe/anaconda3/envs/raghu/lib/python3.6/site-packages/h5py/_hl/files.py", line 142, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (truncated file: eof = 126712795, sblock->base_addr = 0, stored_eof = 209602136)
cp: cannot stat 'pose_estimation/output/pose.pkl': No such file or directory
You may try:
Clear all files under ~/.cache/torch/transformers and try again. ~ means your home directory: for the root user that is /root, for other users it is /home/username.
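A minimal sketch of that cleanup, assuming the older default transformers cache location mentioned above:

```python
import os
import shutil

# Older default cache location for the transformers library; ~ is your home directory.
cache_dir = os.path.expanduser("~/.cache/torch/transformers")

# Remove the whole cache; the files are re-downloaded the next time they are needed.
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)
```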