I previously trained a model and saved it to h5 (weights) and json (architecture) files. I want to load it in multiple threads, with each thread loading the model and running predictions independently.
Keras then raises this error:
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 267, in __init__
fetch, allow_tensor=True, allow_operation=True))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2405, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2489, in _as_graph_element_locked
raise ValueError("Operation %s is not an element of this graph." % obj)
So can I load the same model across all threads?
Not really a Keras issue, but anyway:
you need to load each model in a different graph.
http://stackoverflow.com/questions/41990014/load-multiple-models-in-tensorflow
Hey, it's a model I trained in Keras, not raw TensorFlow. I know how to load models into different graphs in TensorFlow, but my problem is with Keras. Do you know any other solution?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
Hi @trangtv57,
any update on this problem? Any chance you already found a solution? Thanks.
I don't think there is another solution besides Dref360's answer: you need to load each model in a different graph.
http://stackoverflow.com/questions/41990014/load-multiple-models-in-tensorflow
I see. What about your solution, @trangtv57? I just want to compare which one is simpler.
I follow that solution: I just create a new graph for each thread. It's the only solution I know of right now.
OK, thanks @trangtv57. In the end, I also followed this solution.
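The per-thread-graph approach described above boils down to giving each thread its own isolated context instead of sharing one global graph. As a TensorFlow-free illustration of that pattern (all names here are invented for the sketch, not Keras API), `threading.local` gives each thread a private "model" instance:

```python
import threading

class PerThreadModel:
    """Stand-in for 'one graph + one model per thread'."""
    def __init__(self):
        self._local = threading.local()

    def _get_model(self):
        # Lazily build one private model instance per thread,
        # analogous to loading the Keras model inside a fresh Graph.
        if not hasattr(self._local, "model"):
            self._local.model = {"owner": threading.current_thread().name}
        return self._local.model

    def predict(self, x):
        model = self._get_model()
        return (model["owner"], x * 2)  # dummy "prediction"

registry = PerThreadModel()
results = []
lock = threading.Lock()

def worker(x):
    out = registry.predict(x)
    with lock:
        results.append(out)

threads = [threading.Thread(target=worker, args=(i,), name="t%d" % i)
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Each thread used its own instance, so there is no shared state to
# trigger a "not an element of this graph" style error.
```

With TensorFlow the same idea means: each thread creates its own `Graph` and `Session`, loads the model inside them, and wraps every `predict` call in that graph's context, as in the linked Stack Overflow answer.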
Assuming the Keras backend is set to TensorFlow, you can use the following code to load multiple models and invoke them in parallel.
import os
from tensorflow import Graph, Session
from keras.models import model_from_json
from keras import backend as K

def model1(dir_model):
    model = os.path.join(dir_model, 'model.json')
    dir_weights = os.path.join(dir_model, 'model.h5')
    graph1 = Graph()
    with graph1.as_default():
        session1 = Session(graph=graph1, config=config)  # config: your session config
        with session1.as_default():
            with open(model, 'r') as data:
                model_json = data.read()
            model_1 = model_from_json(model_json)
            model_1.load_weights(dir_weights)
    return model_1, session1, graph1
def model_2(dir_model):
    model = os.path.join(dir_model, 'model.json')
    dir_weights = os.path.join(dir_model, 'model.h5')
    graph2 = Graph()
    with graph2.as_default():
        session2 = Session(graph=graph2, config=config)
        with session2.as_default():
            with open(model, 'r') as data:
                model_json = data.read()
            model_2 = model_from_json(model_json)
            model_2.load_weights(dir_weights)
    return model_2, session2, graph2
To invoke a specific model, do the following.
For model 1, predict like this:
K.set_session(session1)
with graph1.as_default():
    img_pred[img_name] = patch_dict[np.argmax(np.squeeze(model_1.predict(img_invoke)))]
and similarly for model 2:
K.set_session(session2)
with graph2.as_default():
    img_pred[img_name] = patch_dict[np.argmax(np.squeeze(model_2.predict(img_invoke)))]
Hi, I followed your solution. However, if I load my model several times in a loop, I see a significant memory leak. Why? Thank you. @trangtv57 @jaiprasadreddy
@wangyexiang
Can you please elaborate on the issue? It looks like you are loading the same model several times in a loop; I'd like to understand the reason for that. Please post your code snippet here.