Keras: load_model() fails in Flask request context only

Created on 6 Sep 2017 · 7 Comments · Source: keras-team/keras

# predict.py

import os
import random
import re
import pickle
import utils
import shutil
import requests
import keras
from keras.models import load_model
from keras import backend as K


def load_classification_model(company_id):
    model_dir = os.path.realpath('./models/company_' + str(company_id))
    model_dir += '/' + sorted(os.listdir(model_dir))[-1]  # os.listdir order is arbitrary; sort to get the latest
    model_path = model_dir + '/model.h5'
    labels_path = model_dir + '/labels.pickle'
    print('Loading model ' + model_path + ' ...')
    model = load_model(model_path)
    graph = K.function([model.layers[0].input, K.learning_phase()], [model.layers[-1].output])
    class_names = pickle.load(open(labels_path, 'rb'))
    return graph, class_names


def predict_image(company_id, url=None, part=None, inspection=None):
    model_graph, class_names = load_classification_model(company_id)
    # ...load image, preprocess, predict...

Running `import predict` followed by `predict.predict_image(...)` works perfectly in the REPL, but it fails when called inside a Flask request context. The error traceback:

# curl 'ml:5000/predict?company=1&image_url=<image_url>'

[top of traceback omitted for brevity]
  File "/code/app.py", line 22, in predict_route
    result = predict_image(company_id, url=image_url)
  File "/code/predict.py", line 34, in predict_image
    model_graph, class_names = load_classification_model(company_id)
  File "/code/predict.py", line 21, in load_classification_model
    model = load_model(model_path)
  File "/usr/local/lib/python2.7/site-packages/keras/models.py", line 242, in load_model
    topology.load_weights_from_hdf5_group(f['model_weights'], model.layers)
  File "/usr/local/lib/python2.7/site-packages/keras/engine/topology.py", line 3095, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/usr/local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2193, in batch_set_value
    get_session().run(assign_ops, feed_dict=feed_dict)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1071, in _run
    + e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(2048, 64), dtype=float32) is not an element of this graph.

Seems like a TensorFlow session collision, maybe?

Possibly relevant: inside `utils` there is another `load_model()` call for a 'pre-processing' model (a frozen pre-trained model). The reason for this pre-model is that I am training many different (small) top classifier models to stack on top of the (large) pre-trained model. All of them share the same large base model, just with different small classifier models on top.

What I don't understand is why this only fails inside a Flask request context, and not when calling `predict.predict_image(...)` from the REPL.
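For what it's worth, the "works in the REPL, fails in Flask" behaviour is consistent with TF1-era TensorFlow tracking its default graph in thread-local state: the REPL does everything in one thread, while Flask runs handlers in worker threads. A rough, TensorFlow-free analogy using Python's `threading.local` (purely illustrative, not the actual Keras/TF internals):

```python
# Purely illustrative analogy: state registered in the main thread
# (like tensors added to the default graph by load_model) is not
# automatically visible inside a worker thread (like a Flask handler).
import threading

state = threading.local()

def setup():
    # Analogous to load_model() registering tensors in the default graph.
    state.graph = "main-thread graph"

def handler(results):
    # Analogous to a Flask request handler running in a worker thread.
    results.append(getattr(state, "graph", None))

setup()
results = []
worker = threading.Thread(target=handler, args=(results,))
worker.start()
worker.join()

assert state.graph == "main-thread graph"  # visible in the main thread
assert results == [None]                   # not visible in the worker
```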

Most helpful comment

I had the same issue. The following resolved it for me:

from keras import backend as K
K.clear_session()

All 7 comments

I just tried K.clear_session() after loading the base model, which fixes this load_model() error but obviously creates the new problem of not being able to run the base model. Fairly confident this is a TF session-handling problem now.

Appreciate the input. I'm not able to load the model so I can't get to the step where I'd save the graph. Maybe if I save the graph from the base model and then clear the session?

You shouldn't load models in request handlers (with Flask, concurrency is best handled outside of it :)

Thanks. I'll have to try a different strategy. I never did get to root cause of this error.

The reason I wanted to load each model on request is that I may (eventually) have tens or hundreds of models for which a prediction could be requested at any time, and I thought having them all loaded into memory would be a bad idea. I guess I'll cross that bridge when I get there and just load the couple models I need now.
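For the "tens or hundreds of models" worry above, a bounded LRU cache is one option: only the most recently used models stay resident, and the rest are evicted and reloaded on demand. A minimal sketch with a hypothetical name (`cached_model`) and a stub loader standing in for the real `load_model()` call:

```python
# Sketch: keep at most N models in memory, evicting least recently used.
from functools import lru_cache

@lru_cache(maxsize=8)  # at most 8 models resident at once
def cached_model(company_id):
    # Stub standing in for loading model.h5 + labels.pickle.
    return ("graph for company %d" % company_id, ["label_a", "label_b"])
```

Repeated calls for the same `company_id` return the cached object; once more than `maxsize` distinct models have been requested, the least recently used one is dropped and reloaded on its next request.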

I had the same issue. The following resolved it for me:

from keras import backend as K
K.clear_session()

This is biting me too. Exact same scenario. May have to ditch Flask in no time... pity, I was really getting to like it.

