I'm quite confused about whether to use tf.nn.dropout or tf.layers.dropout.
Many MNIST CNN examples seem to use tf.nn.dropout, with keep_prob as one of the parameters.
The idea is the same, but the parameters differ slightly. In tf.nn.dropout, keep_prob is the probability that each element is kept. In tf.layers.dropout, rate is the probability that each element is dropped, so rate=0.1 would drop out 10% of input units. Hence keep_prob = 1 - rate. In addition, tf.layers.dropout accepts a training parameter, which lets you switch dropout off at inference time.
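The relationship between the two parameterizations can be sketched in plain NumPy (an illustrative sketch of the dropout idea, not TensorFlow's actual implementation):

```python
import numpy as np

def dropout(x, rate, training=True, seed=0):
    """Minimal dropout sketch using the tf.layers.dropout convention:
    `rate` is the probability each element is dropped, so
    keep_prob = 1 - rate (the tf.nn.dropout convention)."""
    if not training or rate == 0.0:
        return x  # like training=False: dropout is a no-op at inference
    keep_prob = 1.0 - rate
    rng = np.random.default_rng(seed)
    # Keep each element with probability keep_prob, and scale the
    # survivors by 1/keep_prob so the expected value is unchanged.
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

x = np.ones(1000)
y = dropout(x, rate=0.1)  # roughly 10% of elements become 0
```

So calling this with rate=0.1 corresponds to tf.nn.dropout with keep_prob=0.9; the surviving elements are scaled up by 1/0.9 to preserve the expected activation.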
In general, just read the documentation for the functions you care about carefully and you will see the differences.