I have a computer with 4 GPUs, and I want to train a few models at the same time on different GPUs. Is there a way to assign different GPUs when training different models?
ps: I am using TensorFlow as the backend.
You should look at #3333
Thanks @denlecoeuche. Setting the environment variable CUDA_VISIBLE_DEVICES seems to be the easiest solution for my problem.
In Python, add this before importing Keras/TensorFlow: os.environ["CUDA_VISIBLE_DEVICES"]="1"
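To expand on the one-liner above: the variable must be set before the TensorFlow backend initializes, because the visible-GPU list is fixed at that point. A minimal sketch (the Keras import is shown as a comment, since it only matters that it comes afterwards):

```python
import os

# Set this before importing TensorFlow/Keras; once the backend has
# initialized, changing the variable has no effect on GPU visibility.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # expose only GPU 1 to this process

# import keras  # imported afterwards, so it sees a single GPU (as device 0)
```

Note that inside the process the exposed GPU is renumbered, so physical GPU 1 appears as `/gpu:0`.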
Can two keras models run simultaneously with os.environ["CUDA_VISIBLE_DEVICES"]="0" and os.environ["CUDA_VISIBLE_DEVICES"]="1" settings on them?
> Can two keras models run simultaneously with os.environ["CUDA_VISIBLE_DEVICES"]="0" and os.environ["CUDA_VISIBLE_DEVICES"]="1" settings on them?
I wasn't sure whether this would work when I saw the comment above, since it was posed as a question, so for future visitors: yes, two Keras models can run simultaneously if each process sets a different os.environ["CUDA_VISIBLE_DEVICES"]. If you use tf.device instead, both instances will see all GPUs, which leads to errors.
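One way to run the two trainings at once is to launch each script in its own process with a different CUDA_VISIBLE_DEVICES in its environment. A sketch (the script names and GPU assignment are placeholders for your own):

```python
import os
import subprocess
import sys

# Hypothetical training scripts; substitute your own.
jobs = [
    ("train_model_a.py", "0"),
    ("train_model_b.py", "1"),
]

procs = []
for script, gpu in jobs:
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = gpu  # each child process sees only one GPU
    procs.append(subprocess.Popen([sys.executable, script], env=env))

# Both trainings now run concurrently; wait for them to finish.
for p in procs:
    p.wait()
```

The same effect can be had from a shell by prefixing each launch, e.g. `CUDA_VISIBLE_DEVICES=0 python train_model_a.py &`.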