A neural network is made up of many layers: the input layer receives the information, the middle (hidden) layers process that input, and the output layer is the computer's interpretation of the input.
https://www.jianshu.com/p/e112012a4b2d
Basic workflow for building a neural network
Define a function that adds a neural-network layer
1. Prepare the training data
2. Define placeholder nodes to receive the data
3. Define the network layers: a hidden layer and a prediction (output) layer
4. Define the loss expression
5. Choose an optimizer to minimize the loss
Then initialize all the variables and learn by running the optimizer with sess.run for 1000 iterations:
import tensorflow as tf
import numpy as np

# Build one fully connected layer: Wx + b, with an optional activation function
def add_layer(inputs, in_size, out_size, activation_function=None):
    Weight = tf.Variable(tf.random.normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    wx = tf.matmul(inputs, Weight) + biases
    if activation_function is None:
        output = wx
    else:
        output = activation_function(wx)
    return output

# 1. Training data: y = x^2 - 0.5 plus some noise
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

# 2. Placeholder nodes that will receive the data
xs = tf.placeholder(tf.float32, [None, 1])
ys = tf.placeholder(tf.float32, [None, 1])

# 3. Hidden layer (10 ReLU units) and prediction layer (1 linear unit)
hidden = add_layer(xs, 1, 10, activation_function=tf.nn.relu)
prediction = add_layer(hidden, 10, 1, activation_function=None)

# 4. Loss: mean squared error over the batch
loss = tf.reduce_mean(tf.reduce_sum(tf.square(prediction - ys), reduction_indices=[1]))

# 5. Optimizer: gradient descent with learning rate 0.2
train = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for i in range(1000):
        # train and loss are ops defined on placeholders, so feed the data in here
        sess.run(train, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))
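The code above targets the TensorFlow 1.x API (tf.placeholder, tf.Session); on TensorFlow 2.x it only runs through tf.compat.v1 with eager execution disabled. As a rough point of comparison, here is a minimal sketch of the same experiment written with the Keras API (this is my own assumption of an equivalent setup, not part of the original post); the layer sizes, learning rate, and iteration count mirror the example above:

import numpy as np
import tensorflow as tf

# Same synthetic data: y = x^2 - 0.5 plus noise
x_data = np.linspace(-1, 1, 300)[:, np.newaxis].astype(np.float32)
noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
y_data = np.square(x_data) - 0.5 + noise

# One hidden layer with 10 ReLU units, one linear output unit
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.2), loss="mse")

# Full-batch training for 1000 epochs, printing the loss every 50 epochs
model.fit(
    x_data, y_data, epochs=1000, batch_size=300, verbose=0,
    callbacks=[tf.keras.callbacks.LambdaCallback(
        on_epoch_end=lambda epoch, logs: print(logs["loss"]) if epoch % 50 == 0 else None)])

Because the output is one-dimensional, the "mse" loss here matches the reduce_sum-then-reduce_mean loss used in the TF 1.x code.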
Source: https://www.cnblogs.com/gaona666/p/12632897.html