Error on f.attrs['layer_names'] = [layer.name.encode('utf8') for layer in flattened_layers].
If the model has too many layers, the object header message exceeds HDF5's 64 KB limit. This is a limitation of HDF5 itself and is flagged as a known issue by the HDF5 Group, but a workaround would be nice, for example saving the weights to multiple files.
How to reproduce: build a model with a lot of layers and call model.save_weights (see the sketch below).
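A minimal sketch that should trigger the error (the layer count and sizes are illustrative, not from the original script; the exact threshold depends on how long the layer names are):
from keras.models import Sequential
from keras.layers import Dense

# Illustrative: stack enough layers that the 'layer_names' attribute
# written by save_weights exceeds HDF5's 64 KB object header limit.
model = Sequential()
model.add(Dense(4, input_shape=(4,)))
for _ in range(6000):
    model.add(Dense(4))

model.save_weights('many_layers.hdf5')  # raises the RuntimeError below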
Traceback (most recent call last):
File "train_model_pyramid.py", line 279, in <module>
model.save_weights('model_py_'+str(step)+'.hdf5')
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2652, in save_weights
self.save_weights_to_hdf5_group(f)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2663, in save_weights_to_hdf5_group
f.attrs['layer_names'] = [layer.name.encode('utf8') for layer in flattened_layers]
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip_build_root/h5py/h5py/_objects.c:2574)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip_build_root/h5py/h5py/_objects.c:2533)
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/attrs.py", line 87, in __setitem__
self.create(name, data=value, dtype=base.guess_dtype(value))
File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/attrs.py", line 177, in create
attr = h5a.create(self._id, self._e(tempname), htype, space)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip_build_root/h5py/h5py/_objects.c:2574)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip_build_root/h5py/h5py/_objects.c:2533)
File "h5py/h5a.pyx", line 47, in h5py.h5a.create (/tmp/pip_build_root/h5py/h5py/h5a.c:1809)
RuntimeError: Unable to create attribute (Object header message is too large)
Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on StackOverflow (http://stackoverflow.com/questions/tagged/keras) or join the Keras Slack channel (https://keras-slack-autojoin.herokuapp.com/) and ask there instead of filing a GitHub issue.
Thank you!
[x] Check that you are up-to-date with the master branch of Keras. You can update with:
pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
[x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here: https://www.tensorflow.org/get_started/os_setup
[ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
[x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
I suggest that you save your weights manually by retrieving the weight values via model.get_weights(). I have never seen this issue even though I have worked with very large models. How many layers do you have?
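For example, a minimal sketch of that manual route, assuming numpy is available (the .npz filename and format are arbitrary choices, not something Keras prescribes):
import numpy as np

# Save: dump the list of weight arrays, bypassing HDF5 attributes entirely.
weights = model.get_weights()
np.savez('weights.npz', *weights)

# Restore: rebuild the same architecture, then load the arrays back in order
# (np.savez names positional arrays arr_0, arr_1, ...).
data = np.load('weights.npz')
weights = [data['arr_%d' % i] for i in range(len(data.files))]
model.set_weights(weights)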
Thanks, good idea. I have 5.6k layers.
@fchollet model.get_weights() and model.set_weights() work like a charm. Thanks!
@fxia22 How do you use model.get_weights() and model.set_weights()? Could you give a simple example?
How do you save the model?
This issue is reproducible with DPN: https://github.com/titu1994/Keras-DualPathNetworks
@burgalon There are a lot of additional Lambda layers which I had to use to attempt grouped convolutions. Once TF and Keras add native support and those layers are removed, it should be fine, I think.
I've been able to reproduce this using NASNetLarge (no top) plus ~10 additional layers.
There is a PR for this; if you care about it, please take over and/or review it: https://github.com/keras-team/keras/pull/7508
https://drive.google.com/open?id=1xzrqP7ExTmJiZqVt0A_G6AT69EbIjEI9tUDLD1twqj8
import pickle

from keras.models import Sequential
from keras.layers import Conv2D, Activation, Flatten, Dense

def mymodel():
    input_shape = (28, 28, 3)
    model = Sequential()
    model.add(Conv2D(20, 5, padding="same", input_shape=input_shape))
    model.add(Activation('relu'))
    model.add(Flatten())
    model.add(Dense(500))
    model.add(Activation('relu'))
    model.add(Dense(2, activation="softmax"))
    return model

model = mymodel()
# model.compile(...) and model.fit(...) go here, with your own
# loss/optimizer and training parameters.

# Save the weight arrays with pickle instead of model.save_weights().
# Binary mode ('wb'/'rb') works on both Python 2 and Python 3.
weights = model.get_weights()
pklfile = "D:/modelweights.pkl"
with open(pklfile, 'wb') as fpkl:
    pickle.dump(weights, fpkl, protocol=pickle.HIGHEST_PROTOCOL)

# Restore: rebuild the same architecture, then load the weights back in.
with open(pklfile, 'rb') as f:
    weights = pickle.load(f)

restoredmodel = mymodel()
restoredmodel.set_weights(weights)
y_pred = restoredmodel.predict(X)  # X: your input data
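One note on this approach: pickle ties the file to Python's pickle protocol, so the numpy-based np.savez sketch earlier in the thread is a more portable way to store the same list of weight arrays.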
This Drive link has a cleaner copy of the code above:
https://drive.google.com/open?id=1xzrqP7ExTmJiZqVt0A_G6AT69EbIjEI9tUDLD1twqj8