Keras: 'The Session graph is empty' when run behind Flask API

Created on 3 Jul 2018 · 7 comments · Source: keras-team/keras

Hi,

Following conversations on Keras gitter, I was asked to create an issue.
I am experiencing an issue similar to https://github.com/keras-team/keras/issues/5331

Traceback (most recent call last):
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/flask/app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/flask/app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/Users/aliostad/github/hexagon-rl/api.py", line 42, in move
    mv = players[key].turn(s.ownedCells)
  File "/Users/aliostad/github/hexagon-rl/hexagon.py", line 324, in turn
    move, h, world = self.turnx(cells)
  File "/Users/aliostad/github/hexagon-rl/hexagon.py", line 315, in turnx
    world.uberCells)) == 0 or self.timeForBoost(world):
  File "/Users/aliostad/github/hexagon-rl/centaur.py", line 135, in timeForBoost
    self.target_train()  # iterates target model
  File "/Users/aliostad/github/hexagon-rl/centaur.py", line 89, in target_train
    weights = self.model.get_weights()
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/keras/models.py", line 699, in get_weights
    return self.model.get_weights()
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/keras/engine/topology.py", line 2015, in get_weights
    return K.batch_get_value(weights)
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2323, in batch_get_value
    return get_session().run(ops)
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
    run_metadata_ptr)
  File "/Users/aliostad/envs/py27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1053, in _run
    raise RuntimeError('The Session graph is empty.  Add operations to the '

The code is in a public GitHub repo, hexagon-rl. I am building an RL agent that can play the game, and the player is exposed as a Flask API. As a starting point I am using the simplistic code provided in this article, which employs two models.

The solution provided in the issue above, https://github.com/keras-team/keras/issues/5331#issuecomment-317255312, makes sense, but having two models at the same time means the chance of collision is very high, and it can be brittle.

When I run the Centaur module on its own, which does not involve multi-threading, it works fine, but the error happens when I use the API (the game itself is a separate Scala project).

I would appreciate it if you could point me to a solution.

Thanks


Environment

Python 2.7.10 on Mac
Keras 2.1.5
TensorFlow 1.3.0
Flask 1.0.2


All 7 comments

I have the same issue; it disappeared when I updated to tensorflow 2.0-alpha.
I suspect it is a threading problem: when a new thread is started, it does not copy the default graph, or something similar. I have not tracked down the exact code responsible for this.
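That suspicion can be illustrated without TensorFlow at all. The sketch below is a minimal pure-Python stand-in for a per-thread default-graph stack (the names `set_default_graph`/`get_default_graph` are illustrative, not the actual TensorFlow API): state set on the main thread via thread-local storage is simply invisible to a freshly started thread, which is roughly what a Flask worker thread experiences.

```python
import threading

# Stand-in for TF1's per-thread default-graph state; names are
# illustrative, not the real TensorFlow API.
_default = threading.local()

def set_default_graph(graph):
    _default.graph = graph

def get_default_graph():
    # Each thread starts with its own empty thread-local state.
    return getattr(_default, "graph", None)

set_default_graph("graph-built-at-load-time")  # done on the main thread

result = {}

def worker():
    # This thread never ran set_default_graph, so it sees None,
    # much like a server worker thread seeing no default graph/session.
    result["seen"] = get_default_graph()

t = threading.Thread(target=worker)
t.start()
t.join()

print(get_default_graph())  # graph-built-at-load-time
print(result["seen"])       # None
```

The main thread still sees the value it set, while the worker thread sees nothing, which matches the "Session graph is empty" symptom when model calls happen on request-handler threads.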

@eliadl Sorry, do you mean loading the same model file into a dedicated model object in each thread? I think this would help, but loading models can be quite time-consuming and might not be practical behind an API.

@aliostad Oh sorry, I haven't really dived deep into the details of your question, just saw the same error I got.
I didn't realize you had two models.
My scenario was different - loading one model and using it in multiple threads.

My problem was that the graph was empty on the other threads because they had a different default Session and Graph.
Loading a model appears to affect these 2 objects in a way that turns them into "dependencies" the model must have in order to function properly.
So if you keep these 2 objects for each of your models (4 objects in total), and activate them as needed, you may be able to prevent this "empty graph" error.

I hope this helps.
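The bookkeeping described above could be sketched roughly as follows. To keep the sketch runnable anywhere, it uses stand-in objects exposing an `as_default()` context manager; in real code you would instead capture the actual graph and session right after loading each model (e.g. via `tf.get_default_graph()` and `K.get_session()` in TF1-era Keras). Every name below is illustrative, not a prescribed API.

```python
import contextlib

class FakeContext:
    """Stand-in for a tf.Graph or tf.Session; real code would keep
    the actual objects captured at model-load time."""
    def __init__(self, name):
        self.name = name
        self.active = False

    @contextlib.contextmanager
    def as_default(self):
        self.active = True
        try:
            yield self
        finally:
            self.active = False

class ModelBundle:
    """Keep each model together with the graph and session it was
    loaded under, and re-enter both around every model call."""
    def __init__(self, model, graph, session):
        self.model = model
        self.graph = graph
        self.session = session

    def run(self, fn, *args, **kwargs):
        # Re-enter this model's own graph and session no matter
        # which thread the request handler runs on.
        with self.graph.as_default(), self.session.as_default():
            return fn(self.model, *args, **kwargs)

# Two models -> four bookkeeping objects, as suggested above.
actor = ModelBundle("actor-model", FakeContext("g1"), FakeContext("s1"))
critic = ModelBundle("critic-model", FakeContext("g2"), FakeContext("s2"))

out = actor.run(lambda m: (m, actor.graph.active, actor.session.active))
print(out)  # ('actor-model', True, True) while inside the contexts
```

The point of the pattern is only that each call is wrapped in the graph/session pair belonging to that specific model, so two models never collide on a shared default.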

Try this:
pip install Flask==0.12.2

This solved the same error in my environment:
Ubuntu16.04
Python 3.6.8
tensorflow==1.13.1
Keras==2.2.4
Flask==1.0.3

@eliadl Thanks Eliad. My problem was exactly the same: likewise a single model that could not be shared across different threads.

The problem is that Flask may use several threads (more than 2), I guess.

So, a possible solution is to run a single-threaded Flask application, using the threaded keyword argument passed to app.run(), like this:
app.run(host='0.0.0.0', port=7117, threaded=False)
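Why this helps: with a threaded server, each request is handled on a fresh worker thread whose default-graph state is empty, while a single-threaded server handles requests on one long-lived thread. The stdlib sketch below (using `http.server` instead of Flask, purely for illustration) shows which thread the handler actually runs on in each mode:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report which thread is handling this request.
        body = threading.current_thread().name.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet

def handler_thread_name(server_cls):
    server = server_cls(("127.0.0.1", 0), Handler)
    port = server.server_address[1]
    serving = threading.Thread(
        target=server.handle_request, name="serving-thread")
    serving.start()
    name = urllib.request.urlopen(
        "http://127.0.0.1:%d/" % port).read().decode()
    serving.join()
    server.server_close()
    return name

# Single-threaded (analogous to threaded=False):
print(handler_thread_name(HTTPServer))           # serving-thread
# Threaded (analogous to threaded=True): a brand-new worker thread,
# whose name varies by Python version, e.g. "Thread-2 (...)".
print(handler_thread_name(ThreadingHTTPServer))
```

In the single-threaded case the handler runs on the same thread that serves every request, so any thread-bound state set up at load time stays visible; in the threaded case each request lands on a new thread that has never seen that state.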

