Alternative to arg_scope when using tf.layers


Question


I'm rewriting tf.contrib.slim.nets.inception_v3 using tf.layers. Unfortunately the new tf.layers module does not work with arg_scope, as it does not have the necessary decorators. Is there a better mechanism I should use to set default parameters for layers? Or should I simply add the proper arguments to each layer and remove the arg_scope?

Here is an example that uses the arg_scope:

with variable_scope.variable_scope(scope, 'InceptionV3', [inputs]):
    with arg_scope(
        [layers.conv2d, layers_lib.max_pool2d, layers_lib.avg_pool2d],
        stride=1,
        padding='VALID'):
        # ... layers defined here inherit stride=1 and padding='VALID'

Answer 1:


There isn't another mechanism that lets you define default values in core TensorFlow, so you should specify the arguments for each layer.

For instance, this code:

with slim.arg_scope([slim.fully_connected], 
    activation_fn=tf.nn.relu, 
    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
    weights_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005)):
  x = slim.fully_connected(x, 800)
  x = slim.fully_connected(x, 1000)

would become:

x = tf.layers.dense(x, 800, activation=tf.nn.relu,
      kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
      kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))
x = tf.layers.dense(x, 1000, activation=tf.nn.relu,
      kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
      kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))

Alternatively:

with tf.variable_scope('fc', 
    initializer=tf.truncated_normal_initializer(stddev=0.01)):
  x = tf.layers.dense(x, 800, activation=tf.nn.relu,
      kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))
  x = tf.layers.dense(x, 1000, activation=tf.nn.relu,
      kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))

Make sure to read the documentation of the layer to see which initializers default to the variable scope initializer. For example, the dense layer's kernel_initializer uses the variable scope initializer, while the bias_initializer uses tf.zeros_initializer().
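If you want to avoid repeating the shared keyword arguments without arg_scope, a plain Python idiom also works (a minimal sketch, not part of the original answer): functools.partial bakes the shared defaults into a new callable, and individual calls can still override them.

import functools

import tensorflow as tf

# Hypothetical helper: tf.layers.dense with the shared defaults baked in.
dense = functools.partial(
    tf.layers.dense,
    activation=tf.nn.relu,
    kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
    kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=0.0005))

x = dense(x, 800)
x = dense(x, 1000)
# Per-call overrides still work, e.g. dense(x, 10, activation=None).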




Answer 2:


You can use add_arg_scope from tensorflow.contrib.framework, which adds the necessary decorators and makes a function usable with arg_scope. Create a wrapper around the tf.layers function you need and decorate it with @add_arg_scope.

Example:

import tensorflow as tf
from tensorflow.contrib.framework import arg_scope
from tensorflow.contrib.framework import add_arg_scope

@add_arg_scope
def conv2d(inputs, filters, kernel_size, padding='VALID', activation=tf.nn.sigmoid):
    # Log the effective arguments so the arg_scope overrides are visible.
    print(inputs)
    print(filters)
    print(kernel_size)
    print(padding)
    print(activation)
    return tf.layers.conv2d(
        inputs=inputs,
        filters=filters,
        kernel_size=kernel_size,
        padding=padding,
        activation=activation)

inp = tf.placeholder(tf.float32, [None, 224, 224, 3])

print('--------net1-------------')
with arg_scope([conv2d], padding='SAME', activation=tf.nn.relu):
    net = conv2d(inputs=inp, filters=64, kernel_size=[1, 1])
print('--------net2-------------')
net2 = conv2d(inputs=inp, filters=64, kernel_size=[1, 1])
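Running this prints the arguments twice: inside the arg_scope, conv2d receives padding='SAME' and activation=tf.nn.relu; outside it (net2), the wrapper's own defaults 'VALID' and tf.nn.sigmoid apply.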


Source: https://stackoverflow.com/questions/48173368/alternative-to-arg-scope-when-using-tf-layers
