TensorFlow - reproducing results when using dropout
Question: I am training a neural network with dropout regularization. I save the weights and biases the network is initialized with, so that I can repeat the experiment when I get good results. However, dropout introduces randomness into the training: since dropout drops units at random, each time I rerun the network a different set of units is dropped, even though I initialize the network with the exact same weights and biases (if I understand this correctly). Is there a way to make the dropout pattern reproducible, so the same units are dropped on every run?
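One common approach is to fix the random seeds that TensorFlow's dropout ops draw from. A minimal sketch, assuming TensorFlow 2.x (the layer sizes and seed value here are illustrative, not from the question):

```python
import tensorflow as tf

# Fixing the global seed makes op-level randomness, including the
# dropout masks, repeatable from run to run.
tf.random.set_seed(42)

# Passing an explicit seed to the Dropout layer additionally pins
# that layer's own mask sequence.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dropout(0.5, seed=42),
    tf.keras.layers.Dense(1),
])
```

In TensorFlow 1.x the equivalents are `tf.set_random_seed()` and the `seed` argument of `tf.nn.dropout()`. Note that seeding alone may not guarantee bit-identical results across runs: if you shuffle data with Python or NumPy you must seed those generators too, and some GPU kernels are nondeterministic unless deterministic ops are enabled.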