How exactly to add L1 regularisation to a TensorFlow error function

Anonymous (unverified), submitted 2019-12-03 08:48:34

Question:

Hey, I am new to TensorFlow and, even after a lot of effort, could not add an L1 regularisation term to the error term:

import math
import tensorflow as tf

x = tf.placeholder("float", [None, n_input])

# Encoder: weights and biases to the first hidden layer
ae_Wh1 = tf.Variable(tf.random_uniform((n_input, n_hidden1),
                                       -1.0 / math.sqrt(n_input),
                                       1.0 / math.sqrt(n_input)))
ae_bh1 = tf.Variable(tf.zeros([n_hidden1]))
ae_h1 = tf.nn.tanh(tf.matmul(x, ae_Wh1) + ae_bh1)

# Second hidden layer
ae_Wh2 = tf.Variable(tf.random_uniform((n_hidden1, n_hidden2),
                                       -1.0 / math.sqrt(n_hidden1),
                                       1.0 / math.sqrt(n_hidden1)))
ae_bh2 = tf.Variable(tf.zeros([n_hidden2]))
ae_h2 = tf.nn.tanh(tf.matmul(ae_h1, ae_Wh2) + ae_bh2)

# Decoder: tied weights (transposes of the encoder weights)
ae_Wh3 = tf.transpose(ae_Wh2)
ae_bh3 = tf.Variable(tf.zeros([n_hidden1]))
ae_h1_O = tf.nn.tanh(tf.matmul(ae_h2, ae_Wh3) + ae_bh3)

ae_Wh4 = tf.transpose(ae_Wh1)
ae_bh4 = tf.Variable(tf.zeros([n_input]))
ae_y_pred = tf.nn.tanh(tf.matmul(ae_h1_O, ae_Wh4) + ae_bh4)

# Squared reconstruction loss and training op
ae_y_actual = tf.placeholder("float", [None, n_input])
meansq = tf.reduce_mean(tf.square(ae_y_actual - ae_y_pred))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(meansq)

After this I run the above graph using:

import numpy as np

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

n_rounds = 100
batch_size = min(500, n_samp)
for i in range(n_rounds):
    # Sample a random mini-batch
    sample = np.random.randint(n_samp, size=batch_size)
    batch_xs = input_data[sample][:]
    batch_ys = output_data_ae[sample][:]
    sess.run(train_step, feed_dict={x: batch_xs, ae_y_actual: batch_ys})

Above is the code for a 4-layer autoencoder; "meansq" is my squared loss function. How can I add L1 regularisation for the weight matrices (tensors) in the network?

Answer 1:

You can use TensorFlow's tf.contrib.layers.apply_regularization and tf.contrib.layers.l1_regularizer functions.

An example based on your question:

import tensorflow as tf

total_loss = meansq  # or any other loss calculation
l1_regularizer = tf.contrib.layers.l1_regularizer(scale=0.005, scope=None)
weights = tf.trainable_variables()  # all trainable variables in your graph
regularization_penalty = tf.contrib.layers.apply_regularization(l1_regularizer, weights)

regularized_loss = total_loss + regularization_penalty  # this loss needs to be minimized
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(regularized_loss)

Note: weights is a list where each entry is a tf.Variable.
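One caveat: tf.trainable_variables() also returns the bias vectors, which you usually do not want to penalise. Below is a minimal sketch that penalises only the two actual weight matrices from the question (ae_Wh3 and ae_Wh4 are transposed views of these, so they carry no separate parameters); the scale of 0.005 is just an illustrative value:

# Penalise only the weight matrices, not the biases.
# ae_Wh1 and ae_Wh2 are the only trainable weight tensors in the
# question's graph; ae_Wh3/ae_Wh4 are transposes of them.
weight_matrices = [ae_Wh1, ae_Wh2]
l1_penalty = tf.contrib.layers.apply_regularization(l1_regularizer, weight_matrices)

# Equivalent manual penalty without tf.contrib:
# l1_penalty = 0.005 * (tf.reduce_sum(tf.abs(ae_Wh1)) + tf.reduce_sum(tf.abs(ae_Wh2)))

regularized_loss = meansq + l1_penalty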



Answer 2:

You can also use l1_regularizer() from TF-Slim (tf.contrib.slim), which mirrors the tf.contrib.layers regularizers.
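For completeness, a minimal sketch of the slim variant, assuming you build the penalty by hand; slim's l1_regularizer returns a function that maps a tensor to its scaled L1 norm:

import tensorflow as tf

slim = tf.contrib.slim

l1_reg = slim.l1_regularizer(scale=0.005)
# Apply the regulariser to each trainable variable and sum the penalties
penalties = [l1_reg(w) for w in tf.trainable_variables()]
regularized_loss = meansq + tf.add_n(penalties)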


