I am using Keras on a grid. There are many cores available, but I can't use all of them at the same time, so I would like to specify the number of cores Keras (TensorFlow) is allowed to use. I have been looking for a solution for quite some time. Is it possible?
Python's GIL locks it to a single core per process. I believe TF supports asynchronous computation; try putting train ops on separate threads.
Of course I found the solution after posting the question...
import tensorflow as tf
from keras import backend as K

# Limit TensorFlow to one CPU device and one thread per op pool
config = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1,
                        allow_soft_placement=True, device_count={'CPU': 1})
session = tf.Session(config=config)
K.set_session(session)
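If you are on TensorFlow 2.x rather than the 1.x API used above, the same thread limits can be set through tf.config.threading instead; this is a minimal sketch, assuming a TF 2.x install, and the calls must run before any ops execute:

import tensorflow as tf

# Assuming TF 2.x: cap both the intra-op and inter-op thread pools at one thread.
# These must be called before TensorFlow executes any operation.
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)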
So this tells Keras to use only 1 core, right? I would also like to know what would happen if you set device_count = {'GPU': 0}; would it then use all detected CPU cores?
Why is allow_soft_placement=True needed? And maybe device_count={'GPU': 0} needs to be set as well?
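For reference, a minimal sketch of the CPU-only variant these comments are asking about, on the same TF 1.x API; whether the extra 'GPU': 0 entry is actually required in addition to the thread settings is an assumption here, not something confirmed in the thread:

import tensorflow as tf
from keras import backend as K

# Assumption: hide all GPUs from TensorFlow via device_count to force CPU-only execution.
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1,
                        allow_soft_placement=True,
                        device_count={'CPU': 1, 'GPU': 0})  # 'GPU': 0 hides all GPUs
session = tf.Session(config=config)
K.set_session(session)

As for allow_soft_placement=True: it lets TensorFlow fall back to another device when an op cannot be placed on the requested one, which is presumably why the original answer includes it.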