What's the difference between tf.placeholder and tf.Variable?

余生分开走 2020-12-07 06:33

I'm a newbie to TensorFlow. I'm confused about the difference between tf.placeholder and tf.Variable. In my view, tf.placeholder is

14 answers
  • 2020-12-07 07:13

    The most obvious difference between tf.Variable and tf.placeholder is that


    you use variables to hold and update parameters. Variables are in-memory buffers containing tensors. They must be explicitly initialized and can be saved to disk during and after training. You can later restore saved values to exercise or analyze the model.

    Initialization of variables is done with sess.run(tf.global_variables_initializer()). Also, when creating a variable, you need to pass a Tensor as its initial value to the Variable() constructor, so a variable's shape is always known when you create it.
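    A minimal, runnable sketch of variable creation and initialization (the import below assumes TF 2.x, where the TF1 API lives in tf.compat.v1):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` under TF 1.x
tf.disable_eager_execution()

# The initial value fixes the variable's dtype and shape at creation time.
w = tf.Variable(tf.zeros([2, 3]), name="w")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # explicit initialization
    w_val = sess.run(w)

print(w.shape.as_list())  # [2, 3] -- known from the initial value
print(w_val)
```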


    On the other hand, you can't update a placeholder. Placeholders do not need to be initialized, but because they are a promise to deliver a tensor, you must feed a value into them: sess.run(<op>, feed_dict={a: <some_val>}). Finally, unlike a variable, a placeholder's shape may be only partially known: you can specify some of the dimensions, or none at all.
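    For example, a placeholder with a partially unknown shape must be fed at run time (sketch assumes the TF1 API, via tf.compat.v1 when running under TF 2.x):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` under TF 1.x
tf.disable_eager_execution()

# No initial value; the first dimension is left unknown (None).
a = tf.placeholder(tf.float32, shape=(None, 2), name="a")
doubled = a * 2

with tf.Session() as sess:
    # Running `doubled` without feeding `a` would raise an error.
    out = sess.run(doubled, feed_dict={a: [[1.0, 2.0], [3.0, 4.0]]})

print(out)  # [[2. 4.] [6. 8.]]
```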


    There are other differences:

    • the values inside the variable can be updated during optimizations
    • variables can be shared, and can be non-trainable
    • the values inside the variable can be stored after training
    • when the variable is created, 3 ops are added to a graph (variable op, initializer op, ops for the initial value)
    • placeholder is a function, Variable is a class (hence an uppercase)
    • when you use TF in a distributed environment, variables are stored in a special place (parameter server) and are shared between the workers.

    Interestingly, placeholders are not the only things that can be fed: you can feed a value to a Variable and even to a constant.
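    A small sketch of feeding a constant through feed_dict (TF1 graph mode, reached via tf.compat.v1 under TF 2.x); per the point above, variables can be fed the same way:

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` under TF 1.x
tf.disable_eager_execution()

v = tf.Variable(10.0)
c = tf.constant(20.0)
total = v + c

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    base = sess.run(total)                            # uses the graph values
    overridden = sess.run(total, feed_dict={c: 2.0})  # feed overrides the constant

print(base)        # 30.0
print(overridden)  # 12.0
```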

  • 2020-12-07 07:13

    Variables

    A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program. Variables are manipulated via the tf.Variable class. Internally, a tf.Variable stores a persistent tensor. Specific operations allow you to read and modify the values of this tensor. These modifications are visible across multiple tf.Sessions, so multiple workers can see the same values for a tf.Variable. Variables must be initialized before use.

    Example:

    x = tf.Variable(3, name="x")
    y = tf.Variable(4, name="y")
    f = x*x*y + y + 2
    

    This creates a computation graph. The variables (x and y) can be initialized and the function (f) evaluated in a TensorFlow session as follows:

    with tf.Session() as sess:
        x.initializer.run()
        y.initializer.run()
        result = f.eval()
    print(result)
    42
    

    Placeholders

    A placeholder is a node (like a variable) whose value can be supplied in the future. These nodes output whatever value is fed to them at runtime. A placeholder node is created with the tf.placeholder() function, to which you can pass arguments such as the tensor's dtype and/or its shape. Placeholders are widely used to represent the training dataset in a machine learning model, since the training data keeps changing.

    Example:

    A = tf.placeholder(tf.float32, shape=(None, 3))
    B = A + 5
    

    Note: 'None' for a dimension means 'any size'.

    with tf.Session() as sess:
        B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
        B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})
    
    print(B_val_1)
    [[6. 7. 8.]]
    print(B_val_2)
    [[9. 10. 11.]
     [12. 13. 14.]]
    

    References:

    1. https://www.tensorflow.org/guide/variables
    2. https://www.tensorflow.org/api_docs/python/tf/placeholder
    3. O'Reilly: Hands-On Machine Learning with Scikit-Learn & TensorFlow
  • 2020-12-07 07:16

    The difference is that with tf.Variable you have to provide an initial value when you declare it. With tf.placeholder you don't provide an initial value; instead, you supply the value at run time via the feed_dict argument of Session.run.
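    That contrast in one short sketch (TF1 API; tf.compat.v1 under TF 2.x):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` under TF 1.x
tf.disable_eager_execution()

v = tf.Variable(5)            # initial value required at declaration
p = tf.placeholder(tf.int32)  # no initial value

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    v_val = sess.run(v)
    p_val = sess.run(p, feed_dict={p: 7})  # value supplied at run time

print(v_val)  # 5
print(p_val)  # 7
```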

  • 2020-12-07 07:21

    TL;DR

    Variables

    • For parameters to learn
    • Values can be derived from training
    • Initial values are required (often random)

    Placeholders

    • Allocated storage for data (such as for image pixel data during a feed)
    • Initial values are not required (but can be set, see tf.placeholder_with_default)
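    The tf.placeholder_with_default mentioned above can be sketched as follows (TF1 API; tf.compat.v1 under TF 2.x): the default is used unless a value is fed.

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` under TF 1.x
tf.disable_eager_execution()

# A placeholder with a default value; feeding overrides the default.
p = tf.placeholder_with_default(tf.constant([1.0, 2.0]), shape=[2])
doubled = p * 2

with tf.Session() as sess:
    d_default = sess.run(doubled)                          # default used
    d_fed = sess.run(doubled, feed_dict={p: [5.0, 6.0]})   # fed value used

print(d_default)  # [2. 4.]
print(d_fed)      # [10. 12.]
```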
  • 2020-12-07 07:21

    For TF V1:

    1. A constant has an initial value and cannot change during the computation.

    2. A variable has an initial value and can change during the computation (so it's good for parameters).

    3. A placeholder has no initial value and does not change during the computation (so it's good for inputs, such as prediction instances).

    For TF v2 the same holds, but placeholders are tucked away, since graph mode is no longer the preferred style.
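    The three behaviors side by side, as a runnable sketch (TF1 API; tf.compat.v1 under TF 2.x):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` under TF 1.x
tf.disable_eager_execution()

c = tf.constant(1.0)            # fixed once the graph is built
v = tf.Variable(1.0)            # can change, e.g. via assign ops or an optimizer
p = tf.placeholder(tf.float32)  # no value until fed; never changed by the graph

bump = v.assign_add(1.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(bump)
    v_val = sess.run(v)
    sum_val = sess.run(p + c, feed_dict={p: 4.0})

print(v_val)    # 2.0 -- the variable changed
print(sum_val)  # 5.0
```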

  • 2020-12-07 07:24

    Since TensorFlow computations are composed as graphs, it is easier to interpret the two in terms of graphs.

    Take for example the simple linear regression

    WX+B=Y
    

    where W and B stand for the weights and bias and X for the observations' inputs and Y for the observations' outputs.

    Obviously X and Y are of the same nature (manifest variables), which differs from that of W and B (latent variables). X and Y are values of the samples (observations) and hence need a place to be filled, while W and B are the weights and bias, Variables in the graph (whose previous values affect the later ones) that should be trained using different X and Y pairs. We place different samples into the Placeholders to train the Variables.

    We only need to save or restore the Variables (at checkpoints) to save or rebuild the graph with the code.

    Placeholders are mostly holders for the different datasets (for example, training data or test data), whereas Variables are trained during the training process for a specific task, i.e., to predict the outcome of the input or to map the inputs to the desired labels. They remain the same until you retrain or fine-tune the model, using different or the same samples to fill the Placeholders, often through the feed_dict. For instance:

     session.run(a_graph, feed_dict={a_placeholder_name: sample_values})
    

    Placeholders are also passed as parameters when configuring models.

    If you change the placeholders of a model (add, delete, or change the shape, etc.) in the middle of training, you can still reload the checkpoint without any other modifications. But if the variables of a saved model are changed, you should adjust the checkpoint accordingly before reloading it and continuing training (all variables defined in the graph should be available in the checkpoint).

    To sum up, if the values are from the samples (observations you already have), you can safely make a placeholder to hold them, while if you need a parameter to be trained, use a Variable (simply put, make Variables of the values you want TF to obtain automatically).

    In some interesting models, like a style-transfer model, the input pixels are the thing being optimized while the normally-called model variables are fixed; in that case we should make the input (usually initialized randomly) a variable.
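    A toy sketch of that idea, with a hypothetical zero target standing in for the real style/content objective (TF1 API; tf.compat.v1 under TF 2.x):

```python
import tensorflow.compat.v1 as tf  # plain `import tensorflow as tf` under TF 1.x
tf.disable_eager_execution()

# The *input image* is the trainable variable; in a real style-transfer model
# the network weights would be loaded as fixed (non-trainable) values.
image = tf.Variable(tf.random_normal([1, 8, 8, 3]), name="input_image")
target = tf.zeros([1, 8, 8, 3])  # hypothetical stand-in for the real objective
loss = tf.reduce_mean(tf.square(image - target))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)  # updates pixels

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    before = sess.run(loss)
    for _ in range(10):
        sess.run(train_op)
    after = sess.run(loss)

print(after < before)  # the input pixels themselves were optimized
```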

    For more information, please refer to this simple and illustrative doc.
