How can I choose which GPU to use with Keras and the TensorFlow backend? I have an AWS g2.8xlarge with 4 GPUs, and I'd like to run totally separate experiments on each one. Does anyone know if this is possible?
You can use tf.device just as you would in regular TensorFlow code. See this tutorial for an example: http://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
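For reference, a minimal sketch of that approach with the old TF 1.x-style device strings; the model here is made up purely for illustration:

import tensorflow as tf
from keras.layers import Dense
from keras.models import Sequential

with tf.device('/gpu:2'):  # ops created inside this scope are pinned to GPU 2
    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=100))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop', loss='binary_crossentropy')

Note that only ops created inside the with block get pinned, so anything Keras builds outside that scope (e.g. lazily during fit) may still land on the default device.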
@tetmin
Use the environment variable CUDA_VISIBLE_DEVICES to control which GPUs a process can see:
http://www.acceleware.com/blog/cudavisibledevices-masking-gpus
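Along those lines, a hypothetical launcher sketch for the 4-GPU case: it starts one training process per GPU, each masked to a single device (the script name experiment.py is a placeholder):

import os
import subprocess

for gpu_id in range(4):
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)  # this process only sees one GPU
    subprocess.Popen(["python", "experiment.py"], env=env)

Inside each process the masked GPU shows up as /gpu:0, so the training script itself needs no device-placement code.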
@tetmin Another possibility is to run your experiments in separate docker containers and pass one GPU per container with the --device flag. See this.
@linxihui @denlecoeuche Good ideas, guys. I'm actually using a Docker container already, so that seems like a good way to do it. I tried using tf.device as suggested, but that doesn't seem to work: the network ended up using multiple GPUs in a strange way.
At the beginning of the script, set the following environment variables:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # order GPUs by PCI bus ID, matching nvidia-smi
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # use the id from $ nvidia-smi

Worked for me to choose the 2nd GPU on the cluster I'm using.
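A quick sanity check (assuming the TF 1.x backend): after setting the variables above, TensorFlow should only report the masked device, which appears as /gpu:0 inside the process:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())  # should list exactly one GPU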