Knowledge Summary

(1) Once again, pay attention to how the summary ops are used: tf.summary.scalar records a value, tf.summary.merge_all() bundles every summary into a single op, and a tf.summary.FileWriter writes the results to disk.

(2) Note what happens in x = rdm.rand(dataset_size, 2) followed by y_ = [[x1**2 + x2**2] for (x1, x2) in x]: iterating over a 2-D NumPy array yields its rows, so each row unpacks into (x1, x2), and every label is the sum of squares of the two features (see the first sketch after these notes).

(3) With mini-batch training, the entire forward pass for a batch uses a single snapshot of the weights W, and the regularization terms added to the loss at that step are computed from that same W. Backpropagation then updates W, and the next forward pass runs with the updated values (see the second sketch below).
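
To make point (2) concrete, here is a minimal standalone NumPy sketch (using a smaller dataset size than the main code) showing that iterating over a 2-D array yields its rows, each of which unpacks into (x1, x2):

import numpy as np
from numpy.random import RandomState

rdm = RandomState(1)
x = rdm.rand(4, 2)                        # 4 rows; each row is one sample (x1, x2)
# Iterating over a 2-D array yields rows, so each row unpacks into x1, x2.
y_ = [[x1**2 + x2**2] for (x1, x2) in x]
print(np.array(y_).shape)                 # (4, 1) -- matches the [None, 1] placeholder ys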

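And here is a minimal sketch of point (3), using a single weight variable rather than the full network below: the data loss and the L2 penalty at any given step are both evaluated from the same current values of w, and only the optimizer step changes them:

import tensorflow as tf

w = tf.get_variable("w", [2, 1], tf.float32,
                    initializer=tf.constant_initializer(1.0))
x = tf.constant([[1.0, 2.0]])
y = tf.constant([[1.0]])
mse = tf.reduce_mean(tf.square(y - tf.matmul(x, w)))
l2 = tf.contrib.layers.l2_regularizer(0.5)(w)   # penalty on the same w used above
loss = mse + l2
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Both terms come from the current snapshot of w; train_op then updates w,
    # so the next forward pass (and its penalty) uses the new values.
    print(sess.run([mse, l2]))
    sess.run(train_op)
    print(sess.run([mse, l2]))
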
The code is as follows.

import tensorflow as tf
import numpy as np
from numpy.random import RandomState

rdm = RandomState(1)
dataset_size = 128
x = rdm.rand(dataset_size, 2)                 # 128 samples, 2 features each
y_ = [[x1**2 + x2**2] for (x1, x2) in x]      # label: x1^2 + x2^2 for each row

def get_weight(shape, alpha, name):
    # Create a weight variable and register its L2 penalty in the "losses" collection.
    with tf.variable_scope("get_variable" + name):
        var = tf.get_variable(name, shape, tf.float32,
                              initializer=tf.truncated_normal_initializer(stddev=0.01))
        tf.add_to_collection("losses", tf.contrib.layers.l2_regularizer(alpha)(var))
        return var

with tf.name_scope("generate_value"):
    xs = tf.placeholder(tf.float32, [None, 2], name="x_input")
    ys = tf.placeholder(tf.float32, [None, 1], name="y_output")

batch_size = 8
layers_dimension = [2, 10, 10, 10, 1]         # input, three hidden layers, output
n_layers = len(layers_dimension)
in_dimension = layers_dimension[0]
cur_layer = xs

# Build the fully connected layers; every layer's weights carry an L2 penalty.
for i in range(1, n_layers):
    out_dimension = layers_dimension[i]
    with tf.variable_scope("layer%d" % i):
        weights = get_weight([in_dimension, out_dimension], 0.001, "layers")
        biases = tf.get_variable("biases", [out_dimension], tf.float32,
                                 tf.constant_initializer(0.0))
        cur_layer = tf.matmul(cur_layer, weights) + biases
        cur_layer = tf.nn.relu(cur_layer)
    in_dimension = layers_dimension[i]

with tf.name_scope("loss_op"):
    mse_loss = tf.reduce_mean(tf.square(ys - cur_layer))
    tf.add_to_collection("losses", mse_loss)
    # Total loss = MSE plus all the L2 terms collected inside get_weight.
    loss = tf.add_n(tf.get_collection("losses"))
    tf.summary.scalar("loss", loss)

train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
merged = tf.summary.merge_all()
init = tf.global_variables_initializer()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("path/", tf.get_default_graph())
    sess.run(init)
    for i in range(5000):
        start = i * batch_size % dataset_size
        end = min(start + batch_size, dataset_size)
        if i % 50 == 0:
            # Evaluate the merged summaries on the full dataset and log them.
            result = sess.run(merged, feed_dict={xs: x, ys: y_})
            writer.add_summary(result, global_step=i)
        if i % 500 == 0:
            total_loss = sess.run(loss, feed_dict={xs: x, ys: y_})
            print("After %d training steps, loss is %g" % (i, total_loss))
        # Train on the current mini-batch only.
        _ = sess.run(train_op, feed_dict={xs: x[start:end], ys: y_[start:end]})
    writer.close()
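
After training, the loss curve recorded by the summary ops can be viewed by pointing TensorBoard at the FileWriter directory, e.g. tensorboard --logdir=path/ (with path/ being whatever directory the code actually wrote to).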


