First of all, TensorFlow offers tf.device to let us pin operations to the GPU we want. For example:
with tf.device('/gpu:0'):
    a = tf.constant(3.0)
This places the constant on GPU 0.
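A slightly fuller sketch, assuming the TF 1.x graph/session API (which the snippet above uses), shows where tf.device fits; allow_soft_placement and log_device_placement are optional flags I'm adding for illustration:

import tensorflow as tf

with tf.device('/gpu:0'):
    a = tf.constant(3.0)
    b = a * 2.0  # this op is also pinned to GPU 0

# allow_soft_placement falls back to CPU if the pinned device is missing;
# log_device_placement prints where each op actually ran
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))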
Instead of writing GPU-selection code in every code block, we can use CUDA's environment variable CUDA_VISIBLE_DEVICES to limit which GPUs are visible to TensorFlow/Keras.
Example:
export CUDA_VISIBLE_DEVICES=2
This makes only GPU 2 visible to TensorFlow/Keras.
Note that TensorFlow/Keras re-enumerates the visible devices starting from 0, so physical GPU 2 now appears as "/gpu:0" to TensorFlow.
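The same thing can be done from inside Python instead of the shell, as long as the variable is set before TensorFlow initializes the GPUs (ideally before importing it). A minimal sketch, using device_lib to check what ends up visible:

import os
# must be set before TensorFlow touches the GPUs
os.environ['CUDA_VISIBLE_DEVICES'] = '2'

import tensorflow as tf
from tensorflow.python.client import device_lib

# only physical GPU 2 is visible, and it shows up re-numbered as GPU 0
print([d.name for d in device_lib.list_local_devices()])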
This has been a life changer for us.