Keras: Error creating shared node when using multiple Convolutional layers

Created on 29 Apr 2016 · 20 comments · Source: keras-team/keras

I am trying to run the following script:

import numpy as np
from keras.models import Model
from keras.layers import Input, Dense, merge
from keras.layers import Convolution2D, MaxPooling2D, Flatten

input1 = Input((64, 64, 3))
input2 = Input((64, 64, 3))

conv_1 = Convolution2D(32, 3, 3, activation='relu')
conv_2 = Convolution2D(64, 3, 3, activation='relu')
fl_3 = Flatten()
fc_4 = Dense(64, activation='relu')
fc_5 = Dense(32, activation='relu')

rep1 = fc_5(fc_4(fl_3(conv_2(conv_1(input1)))))
rep2 = fc_5(fc_4(fl_3(conv_2(conv_1(input2)))))
#rep1 = fc_5(fc_4(fl_3(conv_1(input1))))
#rep2 = fc_5(fc_4(fl_3(conv_1(input2))))

combined_vec = merge([rep1, rep2], mode='concat')

fc_6 = Dense(64)(combined_vec)
prediction = Dense(100, activation='softmax')(fc_6)

model = Model([input1, input2], prediction)
model.compile('sgd', 'categorical_crossentropy', metrics=['accuracy'])

I get the following error:

Using Theano backend.
Traceback (most recent call last):
  File "shared-example.py", line 15, in <module>
    rep1 = fc_5(fc_4(fl_3(conv_2(conv_1(input1)))))
  File "/usr/lib/python3.5/site-packages/keras/engine/topology.py", line 458, in __call__
    self.build(input_shapes[0])
  File "/usr/lib/python3.5/site-packages/keras/layers/core.py", line 589, in build
    name='{}_W'.format(self.name))
  File "/usr/lib/python3.5/site-packages/keras/initializations.py", line 59, in glorot_uniform
    return uniform(shape, s, name=name)
  File "/usr/lib/python3.5/site-packages/keras/initializations.py", line 30, in uniform
    return K.variable(np.random.uniform(low=-scale, high=scale, size=shape),
  File "mtrand.pyx", line 1565, in mtrand.RandomState.uniform (numpy/random/mtrand/mtrand.c:17303)
OverflowError: Range exceeds valid bounds

But when I remove one of the convolutional layers (as done in the commented-out code in the script above), I do not get the error. I am guessing this error arises from a non-integer value of scale being passed to np.random.uniform. How can I solve this issue?

Thanks.

PS: I am using Keras with the Theano backend on Arch Linux x64 with the latest versions of all libraries.

Most helpful comment

from keras import backend as K
K.set_image_dim_ordering('th')
Try this; it may work.

All 20 comments

By default, Keras expects (channels, height, width) dim ordering. Please change to:

input1 = Input((3, 64, 64))
input2 = Input((3, 64, 64))

or add the argument dim_ordering='tf' to the conv layers.
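(For context: with the default 'th' ordering, the (64, 64, 3) inputs are read as 64 channels of 64x3 images, so after two valid 3x3 convolutions the last axis shrinks to -1, the Dense layer gets a negative fan-in, and the glorot_uniform scale becomes invalid, hence the OverflowError.) A minimal sketch of the second suggestion, changing only these two lines of the original script:

conv_1 = Convolution2D(32, 3, 3, activation='relu', dim_ordering='tf')  # treat the last axis of (64, 64, 3) as channels
conv_2 = Convolution2D(64, 3, 3, activation='relu', dim_ordering='tf')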

I am having a similar issue. My convolutional neural net is giving me an error on the first Dense layer, saying "OverflowError: Range exceeds valid bounds". My code looks correct compared with the other examples I have consulted, but I'm not really sure.

from keras.models import Sequential
from keras.layers import Convolution2D, AveragePooling2D, Flatten, Dense, Activation

IMAGE_HEIGHT = 6
IMAGE_WIDTH = 200
NUM_PEOPLE = 18

def gen_model():
    """
    Generates the model to be used
    :return: the model, untrained
    """
    model = Sequential()

    model.add(Convolution2D(5, 5, 5, input_shape=(1, IMAGE_HEIGHT, IMAGE_WIDTH)))
    model.add(AveragePooling2D(pool_size=(2, 2)))
    model.add(Convolution2D(5, 5, 5))
    model.add(AveragePooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(output_dim=20))
    model.add(Activation('relu'))
    model.add(Dense(output_dim=18))
    model.add(Activation('softmax'))
    model.compile(optimizer="adagrad", loss="categorical_crossentropy", metrics=['accuracy'])
    return model

And here is the full traceback error:
Traceback (most recent call last):
  File "/home/chris/Desktop/KerasCNN/model.py", line 63, in <module>
    main()
  File "/home/chris/Desktop/KerasCNN/model.py", line 52, in main
    model = gen_model()
  File "/home/chris/Desktop/KerasCNN/model.py", line 31, in gen_model
    model.add(Dense(output_dim=20))
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 142, in add
    output_tensor = layer(self.outputs[0])
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 458, in __call__
    self.build(input_shapes[0])
  File "/usr/local/lib/python2.7/dist-packages/keras/layers/core.py", line 596, in build
    name='{}_W'.format(self.name))
  File "/usr/local/lib/python2.7/dist-packages/keras/initializations.py", line 59, in glorot_uniform
    return uniform(shape, s, name=name)
  File "/usr/local/lib/python2.7/dist-packages/keras/initializations.py", line 30, in uniform
    return K.variable(np.random.uniform(low=-scale, high=scale, size=shape),
  File "mtrand.pyx", line 1565, in mtrand.RandomState.uniform (numpy/random/mtrand/mtrand.c:16656)
OverflowError: Range exceeds valid bounds

As @joelthchao suggested, I made the change in dim ordering, and my code worked.

@harshhemani I believe my input shape is already in the ordering of (channel, height, width).

@ChrisHayduk I suggest you add border_mode='same' to your Convolution2D layers. Your height will become smaller than the filter size after the first pooling.

@joelthchao Thank you! That seems to have resolved my first issue. However, I am now receiving this error:

Exception: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 arrays but instead got the following list of 540 arrays

@ChrisHayduk Your input should be a single NumPy array whose first axis indexes the samples, not a list of NumPy arrays. Maybe try concat_inputs = np.concatenate(inputs, axis=0).

@joelthchao How am I supposed to input multiple images then? I have 540 images, each stored as a multidimensional matrix, that I need the neural net to train on.

@ChrisHayduk I'm not sure what your problem is; my answer is for multiple images too. You have to aggregate the images into a single 4-D array, not a list of 3-D arrays. I guess your task is classification; if so, you can follow the example.

@joelthchao But how should I set up my labels? I originally had 540 labels to correspond to each image, but now that I have concatenated all of the arrays, I am left with 107,911 samples and only 540 labels. Also, after concatenation, shouldn't the shape of the matrix be (numSamples, numChannels, height, width)? Because mine looks like this: (107911, 6).

@ChrisHayduk Try concat_inputs = np.array(inputs). It should give you the correct shape.
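A minimal sketch of the difference between the two calls, using a hypothetical list of 540 single-channel 6 x 200 images:

import numpy as np

# hypothetical stand-in for the real list of 3-D image arrays
images = [np.random.rand(1, 6, 200) for _ in range(540)]

batch = np.array(images)                 # shape (540, 1, 6, 200): (samples, channels, height, width)
merged = np.concatenate(images, axis=0)  # shape (540, 6, 200): the channel axis is absorbed, not what the model expects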

@joelthchao Thanks! I am now receiving the following error:

ValueError: ('Bad input argument to theano function with name "/usr/local/lib/python3.5/dist-packages/keras/backend/theano_backend.py:514" at index 0(0-based)', 'setting an array element with a sequence.')

Is my data just set up incorrectly?

@ChrisHayduk Paste your code; it's too hard to debug from the error message alone.

@ChrisHayduk
First, your image height is too small for this model; I recommend changing the filter size to 3. Also, you forgot to put an Activation after each Convolution2D.

model.add(Convolution2D(32, 5, 5, border_mode='same', input_shape=(1, IMAGE_HEIGHT, IMAGE_WIDTH)))
# Output: (None, 32, 6, 200)
model.add(MaxPooling2D(pool_size=(2, 2)))
# Output: (None, 32, 3, 100)
model.add(Convolution2D(32, 5, 5, border_mode='same'))
# Error: the input height is smaller than the filter size
model.add(MaxPooling2D(pool_size=(2, 2)))

Second, with my random data, this network should be able to train. You should examine your data shapes before feeding them into the network.

train_size = 540
test_size = 100
train_concat = np.random.rand(train_size, 1, IMAGE_HEIGHT, IMAGE_WIDTH) #np.array(train)
train_labels_binary = np.random.rand(train_size, NUM_PEOPLE) #to_categorical(train_labels)
test_concat = np.random.rand(test_size, 1, IMAGE_HEIGHT, IMAGE_WIDTH) #np.array(test)
test_labels_binary = np.random.rand(test_size, NUM_PEOPLE) #to_categorical(test_labels)
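As a hedged sketch of how the commented-out helpers would be applied to the real data, assuming train is a list of 540 single-channel image arrays and train_labels holds 540 integer person IDs in the range [0, NUM_PEOPLE):

import numpy as np
from keras.utils.np_utils import to_categorical

train_concat = np.array(train)                                  # (540, 1, IMAGE_HEIGHT, IMAGE_WIDTH)
train_labels_binary = to_categorical(train_labels, NUM_PEOPLE)  # (540, NUM_PEOPLE), one one-hot row per image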

@joelthchao Ok, thank you very much! I'll look into the data and see what could be causing these issues.

Same problem here! I transferred my scripts to a different machine (both Ubuntu 16.04) and the error popped up. On my old machine it was working flawlessly.

My code causing the problem:

from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Activation, Flatten, Dense, Dropout

model = Sequential()

model.add(Convolution2D(nb_conv_filters, conv_kernel_size, conv_kernel_size, border_mode='valid', input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(pool_kernel_size, pool_kernel_size)))

model.add(Flatten())

model.add(Dense(nb_dense_neurons)) #128
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(nb_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta',
                  metrics=['accuracy'])

The error is the following:

Traceback (most recent call last):
  File "sensitivityAnalysis.py", line 14, in <module>
    cnn.buildModelLargeKernels( [4])
  File "/home/user/CNN.py", line 341, in buildModelLargeKernels
    model.add(Dense(nb_dense_neurons)) #128
  File "/usr/local/lib/python3.5/dist-packages/keras/models.py", line 308, in add
    output_tensor = layer(self.outputs[0])
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 487, in __call__
    self.build(input_shapes[0])
  File "/usr/local/lib/python3.5/dist-packages/keras/layers/core.py", line 695, in build
    name='{}_W'.format(self.name))
  File "/usr/local/lib/python3.5/dist-packages/keras/initializations.py", line 59, in glorot_uniform
    return uniform(shape, s, name=name)
  File "/usr/local/lib/python3.5/dist-packages/keras/initializations.py", line 32, in uniform
    return K.random_uniform_variable(shape, -scale, scale, name=name)
  File "/usr/local/lib/python3.5/dist-packages/keras/backend/theano_backend.py", line 140, in random_uniform_variable
    return variable(np.random.uniform(low=low, high=high, size=shape),
  File "mtrand.pyx", line 1565, in mtrand.RandomState.uniform (numpy/random/mtrand/mtrand.c:17311)
OverflowError: Range exceeds valid bounds

Have you solved it?

from keras import backend as K
K.set_image_dim_ordering('th')
Try this; it may work.
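A minimal sketch of where that call goes, assuming it runs before any layers are constructed (model and variable names taken from the snippet above):

from keras import backend as K
K.set_image_dim_ordering('th')  # make conv layers expect (channels, height, width) inputs

model = Sequential()
model.add(Convolution2D(nb_conv_filters, conv_kernel_size, conv_kernel_size,
                        border_mode='valid', input_shape=(1, img_rows, img_cols)))
# ... rest of the model as before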

Yes, what you suggested solved it. I stated that in a comment above. Thanks.
