What's the difference between tf.placeholder and tf.Variable?

余生分开走 2020-12-07 06:33

I'm a newbie to TensorFlow. I'm confused about the difference between tf.placeholder and tf.Variable. In my view, tf.placeholder is

14 answers
  • 2020-12-07 07:32

    Placeholder :

    1. A placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph, without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders.

    2. Initial values are not required, but a placeholder can be given a default value with tf.placeholder_with_default.

    3. We have to provide a value at runtime, e.g.:

      a = tf.placeholder(tf.int16)  # define placeholders (no data yet)
      b = tf.placeholder(tf.int16)
      add = tf.add(a, b)            # build the op that will use them
      
      then run it in a session, feeding the values:
      
      sess.run(add, feed_dict={a: 2, b: 3})  # these values are supplied at runtime
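
    The build-first, feed-later workflow can be sketched in plain Python (an analogy only, not real TensorFlow): the computation is described first, and the data is supplied afterwards.

```python
# Plain-Python analogy (not TensorFlow): first describe the
# computation, then supply the data when it is actually run.
def build_graph():
    # "graph construction" step: no data is needed here
    return lambda a, b: a + b

add = build_graph()   # like defining the placeholders and the op
result = add(2, 3)    # like sess.run(add, feed_dict={a: 2, b: 3})
print(result)         # 5
```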
      

    Variable :

    1. A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program.
    2. Variables are manipulated via the tf.Variable class. A tf.Variable represents a tensor whose value can be changed by running ops on it.

    Example : tf.Variable("Welcome to tensorflow!!!")
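
    The contrast with a placeholder can be sketched in plain Python (a hypothetical analogy, not the TensorFlow API): a tf.Variable behaves like mutable state stored on an object, while a tf.placeholder behaves like an argument supplied at call time.

```python
# Plain-Python analogy (hypothetical, not TensorFlow API).
class Model:
    def __init__(self):
        self.w = 0.3            # like a tf.Variable: persists across calls

    def predict(self, x):       # x acts like a placeholder: fed per call
        return self.w * x

m = Model()
print(m.predict(2.0))  # 0.6
m.w = 0.5              # like running a tf.assign op on the Variable
print(m.predict(2.0))  # 1.0
```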

  • 2020-12-07 07:34

    Example snippet:

    import numpy as np
    import tensorflow as tf
    
    ### Model parameters ###
    W = tf.Variable([.3], dtype=tf.float32)   # pass dtype by keyword; the second positional argument is `trainable`
    b = tf.Variable([-.3], dtype=tf.float32)
    
    ### Model input and output ###
    x = tf.placeholder(tf.float32)
    linear_model = W * x + b
    y = tf.placeholder(tf.float32)
    
    ### loss ###
    loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
    
    ### optimizer ###
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = optimizer.minimize(loss)
    
    ### training data ###
    x_train = [1,2,3,4]
    y_train = [0,-1,-2,-3]
    
    ### training loop ###
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init) # initialize the Variables W and b to their starting values
    for i in range(1000):
      sess.run(train, {x:x_train, y:y_train})
    

    As the name says, a placeholder is a promise to provide a value later.

    Variables are simply the training parameters (W (matrix), b (bias)), the same as the normal variables you use in your day-to-day programming, which the trainer updates/modifies on each run/step.

    A placeholder, on the other hand, doesn't require any initial value: when you created x and y above, TF didn't allocate any memory for them. Later, when you feed the placeholders in sess.run() using feed_dict, TensorFlow allocates appropriately sized memory for x and y. This unconstrained-ness allows us to feed data of any size and shape.


    In a nutshell:

    Variable - a parameter you want the trainer (i.e. GradientDescentOptimizer) to update after each step.
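
    What the trainer actually does to the Variables can be sketched without TensorFlow at all. A minimal NumPy re-implementation of the training loop above (same data, same learning rate, gradients written out by hand) shows W and b being modified on every step while x and y are merely fed in:

```python
import numpy as np

W, b = np.array([0.3]), np.array([-0.3])   # the "Variables": mutable state
x = np.array([1., 2., 3., 4.])             # the "placeholders": fed-in data
y = np.array([0., -1., -2., -3.])

learning_rate = 0.01
for _ in range(1000):
    err = W * x + b - y                    # linear_model - y
    grad_W = np.sum(2 * err * x)           # d/dW of sum((W*x + b - y)^2)
    grad_b = np.sum(2 * err)               # d/db of the same loss
    W -= learning_rate * grad_W            # the update the trainer applies
    b -= learning_rate * grad_b

print(W, b)  # converges towards W ≈ [-1.], b ≈ [1.]
```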

    Placeholder demo -

    a = tf.placeholder(tf.float32)
    b = tf.placeholder(tf.float32)
    adder_node = a + b  # + provides a shortcut for tf.add(a, b)
    

    Execution:

    print(sess.run(adder_node, {a: 3, b:4.5}))
    print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))
    

    resulting in the output

    7.5
    [ 3.  7.]
    

    In the first case, 3 and 4.5 are passed to a and b respectively, and then to adder_node, outputting 7.5. In the second case a list is fed for each placeholder: the addition is element-wise, so 1 + 2 and 3 + 4 produce [ 3.  7.].


    Relevant reads:

    • tf.placeholder doc.
    • tf.Variable doc.
    • Variable VS placeholder.