We see this quite often in many of the TensorFlow tutorials:
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True))
In addition to the comments in tensorflow/core/protobuf/config.proto for allow_soft_placement and log_device_placement, both options are explained in TF's Using GPUs tutorial.
To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True.
This is helpful for debugging: for each node of your graph, you will see the device it was assigned to.
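As a minimal sketch (TF 1.x API; the constants and their names are just for illustration), the following produces placement log lines like the ones shown further below:
import tensorflow as tf

a = tf.constant([1.0, 2.0], name='a')
b = tf.constant([3.0, 4.0], name='b')
c = tf.add(a, b, name='c')

# With log_device_placement=True, the assigned device of every node
# in the graph is logged when the session runs.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))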
If you would like TensorFlow to automatically choose an existing, supported device to run the operations when the specified one does not exist, you can set allow_soft_placement to True in the configuration option when creating the session.
This helps if you accidentally specify the wrong device, or a device that does not support a particular op. It is also useful when you write code that may run in environments you do not control: you can still provide sensible device defaults, with a graceful fallback in case of failure.
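For example, a minimal sketch of that fallback behavior (TF 1.x API; the matmul is just a stand-in for your real op):
import tensorflow as tf

# Request the GPU explicitly; on a CPU-only machine this device is missing.
with tf.device('/gpu:0'):
    m = tf.matmul(tf.ones([2, 2]), tf.ones([2, 2]))

# With allow_soft_placement=True the op silently falls back to the CPU;
# with False, running this on a CPU-only machine raises an error instead.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
print(sess.run(m))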
This option allows resilient device assignment, but it only has an effect if your TensorFlow build is not GPU-enabled. If your TensorFlow supports GPUs, operations always run on the GPU regardless of whether allow_soft_placement is set, even if you set the device to CPU. But if you set it to False and request a GPU device on a machine where no GPU can be found, an error is raised.
This config tells you which device each operation is allocated to while the graph is being built. TensorFlow can always find the highest-priority device with the best performance on your machine, so it may appear to simply ignore your settings.
If you look at the API of ConfigProto, on line 278, you will see this:
// Whether soft placement is allowed. If allow_soft_placement is true,
// an op will be placed on CPU if
// 1. there's no GPU implementation for the OP
// or
// 2. no GPU devices are known or registered
// or
// 3. need to co-locate with reftype input(s) which are from CPU.
bool allow_soft_placement = 7;
What this really means is that if you do something like the following without allow_soft_placement=True, TensorFlow will throw an error:
with tf.device('/gpu:0'):
    # some op that doesn't have a GPU implementation
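As a concrete example: tf.decode_csv had a CPU-only kernel in TF 1.x (assumption: the exact set of CPU-only ops varies by version), so pinning it to the GPU only works if soft placement can move it back to the CPU:
import tensorflow as tf

with tf.device('/gpu:0'):
    # decode_csv has no GPU kernel, so this placement cannot be honored.
    fields = tf.decode_csv(tf.constant(['1,2']), record_defaults=[[0], [0]])

# allow_soft_placement=True moves the op to the CPU instead of erroring.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
print(sess.run(fields))  # prints the two parsed fields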
Right below it, on line 281, you will see:
// Whether device placements should be logged.
bool log_device_placement = 8;
When log_device_placement=True, you will get verbose output, something like this:
2017-07-03 01:13:59.466748: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder_1: (Placeholder)/job:localhost/replica:0/task:0/cpu:0
Placeholder: (Placeholder): /job:localhost/replica:0/task:0/cpu:0
2017-07-03 01:13:59.466765: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder: (Placeholder)/job:localhost/replica:0/task:0/cpu:0
Variable/initial_value: (Const): /job:localhost/replica:0/task:0/cpu:0
2017-07-03 01:13:59.466783: I tensorflow/core/common_runtime/simple_placer.cc:841] Variable/initial_value: (Const)/job:localhost/replica:0/task:0/cpu:0
You can see where each operation is mapped. In this case they are all mapped to /cpu:0, but if you are in a distributed setting, there would be many more devices.
In simple words: allow_soft_placement lets TensorFlow fall back to another supported device when the requested one is unavailable or lacks an implementation for an op, and log_device_placement prints out device placement information for each node.