Keras: Graph disconnected: cannot obtain value for tensor Tensor

Created on 8 Sep 2017 · 7 comments · Source: keras-team/keras

Hi All,

I am trying to add the last layer's output of one model to the last layer's output of another model, as below.

from keras import backend as K
from keras.layers import Input, GlobalAveragePooling2D, Reshape, Dropout, Conv2D, Activation, add
from keras.models import Model
# _conv_block and _depthwise_conv_block are the MobileNet helper blocks
# (e.g. from keras.applications.mobilenet); NUM_CLASSES and depth_multiplier
# are assumed to be defined at module level.

class Student(object):

    def build(self, rgb, alpha, teacher):
        shape = (1, 1, int(1024 * alpha))
        # This looks dangerous. Not sure how the model would get affected
        # with the learning_phase variable set to True.
        K.set_learning_phase(True)
        img_input = Input(shape=(32, 32, 3))

        conv = _conv_block(img_input, 32, alpha, strides=(2, 2))
        conv = _depthwise_conv_block(conv, 64, alpha, depth_multiplier, block_id=1)
        conv = _depthwise_conv_block(conv, 128, alpha, depth_multiplier, strides=(2, 2), block_id=2)
        conv = _depthwise_conv_block(conv, 128, alpha, depth_multiplier, block_id=3)
        conv = _depthwise_conv_block(conv, 256, alpha, depth_multiplier, strides=(2, 2), block_id=4)
        conv = _depthwise_conv_block(conv, 256, alpha, depth_multiplier, block_id=5)
        conv = _depthwise_conv_block(conv, 512, alpha, depth_multiplier, strides=(2, 2), block_id=6)
        conv = _depthwise_conv_block(conv, 512, alpha, depth_multiplier, block_id=8)
        conv = _depthwise_conv_block(conv, 512, alpha, depth_multiplier, block_id=9)
        conv = _depthwise_conv_block(conv, 512, alpha, depth_multiplier, block_id=10)
        conv = _depthwise_conv_block(conv, 512, alpha, depth_multiplier, block_id=11)
        conv = _depthwise_conv_block(conv, 512, alpha, depth_multiplier, block_id=12)
        conv = _depthwise_conv_block(conv, 1024, alpha, depth_multiplier, strides=(2, 2), block_id=13)
        conv = _depthwise_conv_block(conv, 1024, alpha, depth_multiplier, block_id=14)

        conv = GlobalAveragePooling2D()(conv)
        conv = Reshape(shape, name='reshape_1')(conv)

        conv = Dropout(0.5, name='dropout')(conv)
        conv = Conv2D(NUM_CLASSES, (1, 1), padding='same', name='conv_preds')(conv)
        conv = Activation('softmax', name='act_softmax')(conv)
        conv = Reshape((NUM_CLASSES,), name='reshape_2')(conv)
        conv = add([conv, teacher.layers[87].output], name='add')
        model = Model(img_input, conv)
        return model

I am passing the teacher object to the student's build function so that I can access the teacher's last layer output.
I have used the add layer to element-wise add the tensors of the teacher and the student.

But I am seeing the error below. ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, 32, 32, 3), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
Does anyone know what the issue is?
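
For context, this error generally means that the tensor handed to Model as an output depends on an Input tensor that was never listed among the Model's inputs. A minimal sketch that reproduces the same error (the names here are hypothetical, not taken from the code above):

from keras.layers import Input, Dense, add
from keras.models import Model

# Two independent graphs, each with its own Input layer.
teacher_in = Input(shape=(16,))
teacher_out = Dense(10)(teacher_in)

student_in = Input(shape=(16,))
student_out = Dense(10)(student_in)

# The merged tensor depends on BOTH inputs.
merged = add([student_out, teacher_out])

# Raises "Graph disconnected": teacher_in is needed to compute `merged`,
# but it is not declared as an input of the model.
model = Model(student_in, merged)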

Most helpful comment

I had the same issue and I solved it by constructing my merged model as follows:

mergedModel = Model(inputs=[firstModel.input, secondModel.input], outputs=secondModel.layers[-1].output)

Hope it will help ;)

All 7 comments

I could solve this issue by giving an additional input to the model, because teacher.layers[87].output is not part of the graph; that value is only computed from the extra input I give to the model.

Hence my model instance now looks like this: new_model = Model(inputs=[student_model.input, teacher_model.input], outputs=x) instead of model = Model(img_input, conv)
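
A minimal sketch of that fix, with hypothetical names rather than the original student/teacher code: once both inputs are declared, every tensor in the output can be traced back to an Input and the graph is no longer disconnected.

from keras.layers import Input, Dense, add
from keras.models import Model

teacher_in = Input(shape=(16,))
teacher_out = Dense(10)(teacher_in)

student_in = Input(shape=(16,))
student_out = Dense(10)(student_in)

merged = add([student_out, teacher_out])

# Declaring both inputs makes the graph fully connected.
model = Model(inputs=[student_in, teacher_in], outputs=merged)
model.summary()

Note that fit (and predict) on such a model then expects a list of two arrays, one per declared input.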

How did you resolve it, @raginisharma14? I am facing the same problem.

I'm trying to merge a pretrained MobileNet with my model, and the same error appeared:

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("dense_1_input:0", shape=(?, 1024), dtype=float32) at layer "dense_1_input". The following previous layers were accessed without issue: []

@schliffen Did you solve the problem? I am trying to connect two pretrained models and getting the same graph-disconnected error.

I had the same issue and I solved it by constructing my merged model as follows:

mergedModel = Model(inputs=[firstModel.input, secondModel.input], outputs=secondModel.layers[-1].output)

Hope it will help ;)

I have the same problem, but I do not want to merge every layer of the two models, so will @B-Yassine's advice work for my case?
Will this:
mergedModel = Model(inputs=[firstModel.input, secondModel.input], outputs=secondModel.layers[-1].output)
merge every layer of the two models?

I could also try this approach to see if it works.
Just want to check whether we have the same concerns.
Thanks!

mergedModel = Model(inputs=[firstModel.input, secondModel.input], outputs=secondModel.layers[-1].output)

I did that, and now I have 2 of each layer in my merged model... has anyone solved this?
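
One way to check whether layers are genuinely duplicated, rather than simply listed once per sub-model, is to print the merged model's layer names (assuming the mergedModel from the snippet above):

# A merged model built from two sub-models lists the layers of both,
# which can look like duplication even though each layer object appears only once.
for layer in mergedModel.layers:
    print(layer.name, type(layer).__name__)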
