Keras: Output tensors to a Model must be Keras tensors. Found: Tensor

Created on 15 Apr 2017 · 11 comments · Source: keras-team/keras

Hi, I have been struggling with this problem. Similar problems have been solved before, but I have had little success applying those fixes here. I would appreciate any ideas on my particular problem. Thank you.

x = Dense(n_neurons, input_dim=n_feat, W_regularizer=l2(0.001), activation='relu')(input1)
y = Dense(n_neurons, input_dim=n_feat, activation='relu')(input2)

concat_x = K.concat([x, input2], 1)  # + 0.01 * tf.nn.l2_loss(W0)

x2 = Dense(n_neurons)(input2)

my_lamb = Lambda(my_input, output_shape=my_output)(x2)

merg_x = merge([x, y], mode='concat', concat_axis=-1)
pred_out = Dense(1, activation='relu')(merg_x)

f_ = flip_gradient(merg_x, 1)
x = Dense(n_feat, activation='relu')(f_)
x = Dense(n_feat, activation='relu')(x)
dom_out = Dense(2, activation='softmax')(x)
print(K.shape(pred_out))

mse2 = Dense(1, activation='relu')(mse)

# total_loss = add(mse, dom_out)  # do I need to reduce it or take it directly?
# model = Model(inputs=[input1, input2], outputs=[mse, dom_out])

model = Model(input=[input1, input2], output=[pred_out, dom_out])

TypeError: Output tensors to a Model must be Keras tensors. Found: Tensor("Softmax:0", shape=(?, 2), dtype=float32)



All 11 comments

If you directly compute a tensor, it won't work the way you want. Any tensor you feed into a layer should come from another layer. Make sure absolutely everything you do is wrapped in a Lambda and that particular error should go away. flip_gradient looks like the likely suspect. Your K.concat has the same problem: use keras.layers.Concatenate instead.
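For instance, a minimal sketch of both fixes (assuming Keras 2, and that flip_gradient is your custom function taking a tensor and a scale):

from keras.layers import Lambda, Concatenate

# Wrap the raw backend op in a Lambda so its output is a Keras tensor
f_ = Lambda(lambda t: flip_gradient(t, 1))(merg_x)

# Use the Concatenate layer instead of calling the backend directly
concat_x = Concatenate(axis=1)([x, input2])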

Also, are you on Keras 1 or 2? It helps to know the version.

Cheers

@bstriner Thanks a lot for the reply. What you said about flip_gradient is right; the problem was fixed by Joel using f_ = Lambda(lambda x: flip_gradient(x, 1))(merg_x). However, regarding the statement that "any tensor you are feeding into a layer should be coming from another layer": suppose y1 is a 1-dim numpy array of ground truth and I want to use mse = (y1 - pred_out). How can I feed that into the Model() function so as to minimize total_loss = mse + dom_pred?
Thanks

@amaall The Keras standard is for y1 to be a 2-dim array of shape (n, 1) so all of the dimension checking will work correctly. When you need a 1-d view of the 2-d array, you can always use y1[:, 0].

Anything you are passing into another layer needs to be a Keras tensor so it will have a known shape. Keras tensors are Theano/TF tensors with additional information included. You get Keras tensors from keras.layers.Input or any time you pass an input through a keras.layers.Layer. So if you are going to use a tensor as an input to another layer or as an output of a model, make sure to use Lambdas.

If you're just using the tensor in a loss calculation or something else, you don't have to wrap it in a Lambda.

Let's say your model has an output named pred_out that is (n, 1) and comes from some layer. Keras will expect a target of the same shape, and it will pass the output and the target to the loss function. Hypothetically, if myloss = lambda ytrue, ypred: T.mean(ytrue - ypred) is what you want, use that function as the loss function for pred_out.

Use a different loss function for whatever dom_pred is. You can use a dictionary of losses when you compile.

That is the most straightforward way to do things. The alternative you sometimes use in some situations is to make y1 an (n, 1) input, y1 = Input((1,)). You can then calculate the loss and add it to the model or to one of the layers in the model (after you make the model, before you compile it):
model.add_loss(y1 - pred_out)

With the first way, you pass y1 as a target when you train; with the second, you pass y1 as an input. If you're using y1 for something else as well, the second way is sometimes easier.
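A minimal sketch of the second approach (the mean-squared reduction is an assumption here, since the added term should reduce to a scalar):

y1 = Input((1,), name='y1')  # ground truth fed as an extra input
model = Model(input=[input1, input2, y1], output=[pred_out, dom_out])

# Add the custom term to the model's total loss; pred_out gets no target loss
# in compile because add_loss already covers it (Keras warns but allows this)
model.add_loss(K.mean(K.square(y1 - pred_out)))
model.compile(optimizer='RMSprop', loss={'dom_out': 'binary_crossentropy'})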

Cheers

@bstriner Many thanks for the explanation. I have followed the solution you provided, as below:

def my_loss(yb, pred_out):
    mse = lambda ys, pred_out: K.mean(yb - pred_out)
    return mse

def tot_out(my_loss, dom_out):
    total_loss = lambda my_loss, dom_out: my_loss + dom_out
    return total_loss

model = Model(input=[input1, input2], output=[my_loss, dom_out, tot_out])
model.compile(optimizer='RMSprop', loss={'my_loss': 'mean_squared_error', 'dom_out': 'binary_crossentropy', 'tot_out': 'binary_crossentropy'})

model.fit([Xb, tb], [yb, D_labels], nb_epoch=50, batch_size=128)

And I got the following error: TypeError: Output tensors to a Model must be Keras tensors. Found: <function my_loss at 0x7f41e2291bf8>

Could there be a way around it? I tried using mse = Lambda(lambda ys, pred_out: K.mean(yb - pred_out)) and I got the same error.
Thanks

Dude, fix your formatting. The loss is a function or a lambda, not one wrapped in the other. You give Keras a list of outputs, targets, and losses, and it adds the losses together; you don't need to total things yourself. You pass your custom losses into compile, not into the Model constructor.

Simple code for a custom loss follows. Just add more losses if you want and Keras will add them together.

If you have multiple outputs, you can use a dictionary or an array for the losses. For the loss dictionary to work, make sure your output layers have matching names. Otherwise, pass an array to loss and make sure the order is the same as your outputs.

Subtracting two values is not the MSE (mean squared error). Please call it something else or someone is going to get confused.

The code below uses a custom loss for pred_out and binary_crossentropy for dom_out. Keras adds the two losses together automatically to make the total loss. If you want to adjust how they are combined, use loss_weights.

# pred_out and dom_out are the outputs of two layers
# input1 and input2 are keras.layers.Input objects
model = Model(input=[input1, input2], output=[pred_out, dom_out])

# Something like this
def my_loss(ytrue, ypred):
    # ytrue is your target Y during training; ypred is the output, in this case pred_out
    return ytrue - ypred

model.compile('RMSprop', loss={'pred_out': my_loss, 'dom_out': 'binary_crossentropy'})

# or something like this
model.compile('RMSprop', loss=[lambda ytrue, ypred: ytrue - ypred, 'binary_crossentropy'])
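And, as noted above, if you want to weight the two terms differently, something like this (the 0.5 is just illustrative):

model.compile('RMSprop',
              loss={'pred_out': my_loss, 'dom_out': 'binary_crossentropy'},
              loss_weights={'pred_out': 1.0, 'dom_out': 0.5})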

Cheers

@bstriner You are a great tutor! I have learned a lot from you. Thanks.

I have used the same data setup in a feed function in TensorFlow, and my code works there. I wonder why the code below gives me the following error:
ValueError: All target arrays (y) should have the same number of samples.
Thanks.
def myModel2():
    input1 = Input(shape=(n_feat,), name='input1')
    input2 = Input(shape=(1,), name='input2')
    input3 = Input(shape=(1,), name='input3')
    dom_input = Input(shape=(2,), name='dom_input')
    S_batches = batch_generator([Xs, ys, ts], batch_size / 2)
    T_batches = batch_generator([Xt, yt, tt], batch_size / 2)
    X0, y0, t0 = next(S_batches)
    X1, y1, t1 = next(T_batches)
    Xb = np.vstack([X0, X1])
    tb = np.vstack([t0, t1])
    yb = y0

    # Create model
    x = Dense(n_neurons, input_dim=n_feat, W_regularizer=l2(0.001), activation='relu')(input1)
    merg_x = merge([x, input2], mode='concat', concat_axis=-1)
    pred_out = Dense(1, activation='relu', name='pred_out')(merg_x)

    f_ = Lambda(lambda x: flip_gradient(x, 1))(merg_x)
    x = Dense(n_feat, activation='relu')(f_)
    x = Dense(n_feat, activation='relu')(x)
    dom_out = Dense(2, activation='softmax', name='dom_out')(x)
    print(K.shape(pred_out))
    D_labels = np.hstack([np.zeros(int(batch_size / 2), dtype=np.int32),
                          np.ones(int(batch_size / 2), dtype=np.int32)])
    print(Xb.shape, tb.shape, yb.shape, D_labels.shape)
    model = Model(input=[input1, input2], output=[pred_out, dom_out])
    model.compile(optimizer='RMSprop', loss=[lambda yb, pred_out: yb - pred_out, 'sparse_categorical_crossentropy'])
    model.fit({'input1': Xb, 'input2': tb}, [yb, D_labels], nb_epoch=50, batch_size=128)
    y_pred3 = model.predict([Xt, t_test], batch_size=50, verbose=0)
    rms_ite = mean_squared_error(get_ITE(X_train, y_train, y_test), get_ITE(X_train, y_train, y_pred3))
    return rms_ite

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

I am trying to convolve two tensors, where the output of the convolution is the output of the model, but it gives an error that the output tensor should be a Keras tensor.
def ResNet50(input_shape, output_shape, data_a, data_b, labels, weights='imagenet'):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes
    Returns:
    model -- a Model() instance in Keras
    """

    # Define the input as a tensor with shape input_shape
    # X_input = tf.placeholder(tf.float32, shape=input_shape)
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    # Stage 3
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block='b')
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block='c')
    X = identity_block(X, 3, filters=[128, 128, 512], stage=3, block='d')

    # Stage 4
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block='b')
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block='c')
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block='d')
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block='e')
    X = identity_block(X, 3, filters=[256, 256, 1024], stage=4, block='f')

    # Stage 5
    X = convolutional_block(X, f=3, filters=[256, 256, 2048], stage=5, block='a', s=3)
    X = identity_block(X, 3, filters=[256, 256, 2048], stage=5, block='b')
    X = identity_block(X, 3, filters=[256, 256, 2048], stage=5, block='c')

    # AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
    X = AveragePooling2D((6, 6), name='avg_pool')(X)
    F = Conv2D(8, (1, 1), strides=(1, 1), name='Parameter', dilation_rate=(1, 1),
               kernel_initializer=glorot_uniform(seed=0))(X)

    F_ = tf.reshape(F, [3, 3, 1, 1])
    # print(F_)

    # F_ = tf.keras.backend.cast(F, dtype=tf.float32)
    # F_ = tf.to_float(F_, name='ToFloat')
    # Y_output = prediction(F_, Y_u)
    # Y_u = Input(y_train_u[0].shape)
    # Y_u = tf.convert_to_tensor(Y_u)
    # print(Y_u.shape)
    # Y = convolution2d(Y_u, F)
    # print(tf.shape(F))
    Y_u = Input(output_shape)
    ypred = tf.nn.conv2d(Y_u, F_, [1, 1, 1, 1], "SAME")

    # model = Model(inputs=X_input, outputs=ypred, name='ResNet50')
    model = Model(inputs=[X_input, Y_u], outputs=ypred, name='ResNet50')
    sgd = ke.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0)
    model.compile(sgd, loss='mean_squared_error', metrics=['accuracy'])
    n = model.fit([data_a, data_b], labels, batch_size=16, epochs=1)
    return n
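Following the earlier advice in this thread, a sketch of the usual fix (assuming the same names as above) is to wrap the raw TensorFlow ops in Lambda layers so their outputs are Keras tensors:

from keras.layers import Lambda

# Reshape the learned filter inside a Lambda so the result stays a Keras tensor
F_ = Lambda(lambda f: tf.reshape(f, [3, 3, 1, 1]))(F)

# Wrap the raw convolution too; Lambda accepts a list of inputs
ypred = Lambda(lambda t: tf.nn.conv2d(t[0], t[1], [1, 1, 1, 1], "SAME"))([Y_u, F_])

model = Model(inputs=[X_input, Y_u], outputs=ypred, name='ResNet50')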


I have the same problem:
ValueError: Output tensors to a Model must be the output of a Keras Layer (thus holding past layer metadata). Found: Tensor("concat:0", shape=(3000, 300), dtype=float32)

The problem appears when I use concatenate:

sentence_input = Input(shape=(1000,), dtype='int32')
embedded_sequences = embedding_layer(sentence_input)
print(embedded_sequences.shape)
l_lstm = Bidirectional(GRU(100, return_sequences=True))(embedded_sequences)
l_att = AttLayer(100)(l_lstm)

total_vec = np.load('total_vec.npy')
total_vec = np.array(total_vec)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
data_tensor = tf.convert_to_tensor(total_vec, dtype=tf.float32)
document_vec = K.concatenate([l_att, data_tensor], axis=1)

sentEncoder = Model(sentence_input, document_vec)
preds = Dense(3, activation='softmax')(document_vec)
print(type(preds))
print(preds.shape)

What should I do? Thank you!
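Per the earlier advice in this thread, a sketch of one fix is to move the backend concatenation into a Lambda layer so the result is a Keras tensor (note that data_tensor is a fixed (3000, 300) constant, so its first dimension still has to line up with the batch dimension):

from keras.layers import Lambda

# Wrapping the backend op in a Lambda makes document_vec a Keras tensor
document_vec = Lambda(lambda t: K.concatenate([t, data_tensor], axis=1))(l_att)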

Can you guys help me out? I am trying to deploy a Keras neural network on a Flask web service, but I get this error when I click the predict button:
ValueError: Tensor Tensor("dense_1/Identity:0", shape=(None, 2), dtype=float32) is not an element of this graph.

import base64
import numpy as np
import io
from PIL import Image
from tensorflow import keras
from tensorflow.keras import backend as k
from keras import backend as K
from tensorflow.keras.models import Sequential
from tensorflow.keras.models import load_model  # used for loading the model
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import img_to_array
from flask import Flask
from flask import request
from flask import jsonify
import tensorflow as tf
import json as json
from flask_cors import CORS, cross_origin

app = Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'

def get_model():
    global model
    model = load_model('Mobilenet_Cat_and_Dog.h5')
    # model._make_predict_function()
    print(" * Model Loaded!!!")
    global graph
    graph = tf.compat.v1.get_default_graph()
    # graph.append(tf.get_default_graph())

def preprocess_image(image, target_size):
    if image.mode != "RGB":
        image = image.convert("RGB")
    image = image.resize(target_size)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    return image

print(" * Loading keras model...")
get_model()

@app.route("/predict", methods=["POST", "GET"])
@cross_origin()
def predict():
    message = request.get_json(force=True)
    encoded = message['image']
    decoded = base64.b64decode(encoded)
    image = Image.open(io.BytesIO(decoded))
    processed_image = preprocess_image(image, target_size=(224, 224))

    with graph.as_default():
        prediction = model.predict(processed_image).tolist()
        response = {
            'prediction': {
                'dog': prediction[0][0],
                'cat': prediction[0][1]
            }
        }
        return jsonify(response)


<!DOCTYPE html>
<html>
<head>
    <title>ADNDYJAZZ_AI</title>
</head>
<body>
    <!-- minimal markup; the ids match those referenced in the script below -->
    <input id="image-selector" type="file">
    <img id="selected-image" src="">
    <button id="predict-button">Predict</button>

    <h3>Predictions</h3>
    <p>dog: <span id="dog-prediction"></span></p>
    <p>cat: <span id="cat-prediction"></span></p>


<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.5/jquery.min.js"></script>
<script>
    $(document).ready(function() {
        let base64Image;
        $("#image-selector").change(function() {
            let reader = new FileReader();
            reader.onload = function(e) {
                let dataURL = reader.result;
                $("#selected-image").attr("src", dataURL);
                base64Image = dataURL.replace("data:image/jpeg;base64,","");
                console.log(base64Image);

            }
            reader.readAsDataURL($("#image-selector")[0].files[0]);
            $("dog-prediction").text("");
            $("cat-prediction").text("");
        });

        $("#predict-button").click(function(event){
            let message = {
                image: base64Image
            }
            console.log(message);
            $.post("http://127.0.0.1:5000/predict", JSON.stringify(message), function(response){
                $("#dog-prediction").text(response.prediction.dog.toFixed(60));
                $("#cat-prediction").text(response.prediction.cat.toFixed(60));
                console.log(response);
            });
        });

    });

</script>
</body>
</html>


Please help, anyone.
