Overall idea
To keep the example easy to run, this post uses MNIST: recognizing handwritten digit images, each 28*28 pixels, with a network of two stacked LSTM hidden layers that predicts the probability of each of the 10 digit classes. The overall network structure is as follows:
(1) Input layer [each time step is a vector of length 28, and 28 consecutive time steps are fed per sample, so each input is 28*28]
(2) First LSTM layer [64 memory cells; at every time step each cell reads the 28-dim input together with the state passed on from the previous time step, and the layer produces 64 outputs]
(3) First dropout layer [randomly drops half of the LSTM layer's outputs; dropout here acts on the connections between layers]
(4) Second LSTM layer [64 memory cells; reads the previous layer's output and produces 64 outputs]
(5) Second dropout layer [again randomly drops half of the LSTM outputs; since the next layer is fully connected, this can be viewed as randomly dropping among the 64*10 connections]
(6) Fully connected layer [reads the previous layer's outputs, computes this layer's 10 outputs via w*x+b, and applies softmax to get a probability for each of the 10 classes]
With the network structure defined, feeding in data yields a probability for each class; the cross-entropy loss then gives the average loss over the batch, and an optimizer is used to train the network.
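Before the code, a minimal numpy-only sketch (the names image and steps are illustrative, not part of the script below) of how one flattened 784-pixel image becomes the 28 time steps of 28 features described above:

import numpy as np

image = np.arange(784, dtype=np.float32)   # one flattened 28*28 image
steps = image.reshape(28, 28)              # 28 time steps, each a 28-dim vector
print(steps[0].shape)                      # (28,) -- what the LSTM sees at step 0
# through the network: LSTM1 -> (28, 64), dropout, LSTM2 -> (28, 64),
# last step's 64-dim output -> w*x+b -> softmax -> 10 class probabilities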
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
Data loading
tf.reset_default_graph()
# Hyper Parameters
learning_rate = 0.01 # learning rate
n_steps = 28 # number of LSTM unroll steps (sequence length)
n_inputs = 28 # input size per step
n_hiddens = 64 # number of hidden units
n_layers = 2 # number of LSTM layers
n_classes = 10 # number of output classes
# data
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
test_x = mnist.test.images
test_y = mnist.test.labels
Defining the network parameters
# tensor placeholder
with tf.name_scope('inputs'):
    x = tf.placeholder(tf.float32, [None, n_steps * n_inputs], name='x_input')  # input
    y = tf.placeholder(tf.float32, [None, n_classes], name='y_input')  # labels
    keep_prob = tf.placeholder(tf.float32, name='keep_prob_input')  # fraction kept (not dropped) by dropout
    batch_size = tf.placeholder(tf.int32, [], name='batch_size_input')  # batch size
# weights and biases
with tf.name_scope('weights'):
    Weights = tf.Variable(tf.truncated_normal([n_hiddens, n_classes], stddev=0.1),
                          dtype=tf.float32, name='W')
    tf.summary.histogram('output_layer_weights', Weights)
with tf.name_scope('biases'):
    biases = tf.Variable(tf.random_normal([n_classes]), name='b')
    tf.summary.histogram('output_layer_biases', biases)
Defining the network structure
1. An RNN made of two LSTM blocks is defined; each block has two parts: a hidden layer of 64 memory cells built with BasicLSTMCell, and a Dropout layer that randomly drops half of its outputs.
2. MultiRNNCell stacks the two LSTM blocks collected in enc_cells, which is how an RNN with multiple LSTM hidden layers is built.
3. dynamic_rnn builds the dynamic RNN: it feeds the input x through the stacked layers and returns the last layer's output at every memory block. "Dynamic" here means that, unlike a static RNN, the number of input time steps may vary, since the unrolling is done internally with a while loop (see the sketch after this list).
Overall, going from the whole down to the parts: dynamic_rnn (the full network with data flowing through it: takes the input, yields the 64-dim outputs) -> MultiRNNCell (the definition of the whole network, connecting the LSTM blocks) -> attn_cell (our own LSTM block: a BasicLSTMCell plus a DropoutWrapper operation).
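As a standalone sketch of point 3, to be run on its own (the toy shapes and the sequence_length placeholder are illustrative assumptions, not part of the script below):

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, None, 28])  # (batch, steps, features); steps may vary
seq_len = tf.placeholder(tf.int32, [None])             # true length of each sequence in the batch
stacked = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.BasicLSTMCell(64) for _ in range(2)])
outputs, states = tf.nn.dynamic_rnn(stacked, inputs,
                                    sequence_length=seq_len, dtype=tf.float32)
print(outputs.get_shape().as_list())  # [None, None, 64]: last layer's output at every step
# states is a tuple of two LSTMStateTuple(c, h), one per layer;
# states[-1].h is each sequence's output at its last valid step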
# RNN structure
def RNN_LSTM(x, Weights, biases):
    # reshape the input for the RNN: (batch, n_steps, n_inputs)
    x = tf.reshape(x, [-1, n_steps, n_inputs])
    # define the LSTM cell
    # dropout applied inside the cell
    def attn_cell():
        lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hiddens)
        with tf.name_scope('lstm_dropout'):
            return tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)
    # attn_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)
    # build the multi-layer LSTM
    # equivalently: enc_cells = [attn_cell() for _ in range(n_layers)]
    enc_cells = []
    for i in range(0, n_layers):
        enc_cells.append(attn_cell())
    with tf.name_scope('lstm_cells_layers'):
        mlstm_cell = tf.contrib.rnn.MultiRNNCell(enc_cells, state_is_tuple=True)
    # all-zero initial state
    _init_state = mlstm_cell.zero_state(batch_size, dtype=tf.float32)
    # run the network with dynamic_rnn
    outputs, states = tf.nn.dynamic_rnn(mlstm_cell, x, initial_state=_init_state, dtype=tf.float32, time_major=False)
    # output: the last time step's output through the fully connected layer, then softmax
    # return tf.matmul(outputs[:,-1,:], Weights) + biases   # raw-logits variant
    return tf.nn.softmax(tf.matmul(outputs[:,-1,:], Weights) + biases)

with tf.name_scope('output_layer'):
    pred = RNN_LSTM(x, Weights, biases)
    tf.summary.histogram('outputs', pred)
Defining the loss function and optimizer
# cost
with tf.name_scope('loss'):
    # pred is already softmaxed, so compute the cross entropy manually;
    # the line below would instead require pred to be raw logits:
    # cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
    cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(tf.clip_by_value(pred, 1e-10, 1.0)), reduction_indices=[1]))  # clip to avoid log(0)
    tf.summary.scalar('loss', cost)
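The commented-out line is the numerically more stable route: it fuses softmax and cross entropy in one op, but it needs raw logits, i.e. RNN_LSTM would have to use its own commented-out return line instead of applying softmax. A minimal standalone sketch of that variant (the placeholder names logits and targets are illustrative):

import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 10])   # raw, un-softmaxed scores
targets = tf.placeholder(tf.float32, [None, 10])  # one-hot labels
stable_cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=targets))
probs = tf.nn.softmax(logits)  # apply softmax separately when probabilities are needed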
# optimizer
with tf.name_scope('train'):
    train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
# per-batch (non-streaming) alternative:
# correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
with tf.name_scope('accuracy'):
    # tf.metrics.accuracy is a streaming metric: index [1] is its update op, so
    # each run folds the current batch into a running average over all batches seen
    accuracy = tf.metrics.accuracy(labels=tf.argmax(y, axis=1), predictions=tf.argmax(pred, axis=1))[1]
    tf.summary.scalar('accuracy', accuracy)
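Because the metric is streaming, train and test evaluations fold into one running average. A minimal standalone sketch (the scope name acc_metric and the placeholder names are illustrative assumptions) of isolating and resetting its internal counters between phases:

import tensorflow as tf

lbl = tf.placeholder(tf.int64, [None])
prd = tf.placeholder(tf.int64, [None])
with tf.variable_scope('acc_metric'):
    acc, acc_update = tf.metrics.accuracy(labels=lbl, predictions=prd)
# the metric's total/count counters are local variables created under 'acc_metric'
reset_op = tf.variables_initializer(
    tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope='acc_metric'))
# usage: sess.run(reset_op) before a test pass, sess.run(acc_update, ...) per batch,
# then sess.run(acc) for the accuracy over just that pass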
Merging summaries and initialization
merged = tf.summary.merge_all()
init = tf.group(tf.global_variables_initializer(),
                tf.local_variables_initializer())  # local variables hold the tf.metrics counters
Starting training
with tf.Session() as sess:
    sess.run(init)
    train_writer = tf.summary.FileWriter("D://logs//train", sess.graph)
    test_writer = tf.summary.FileWriter("D://logs//test", sess.graph)
    # training
    step = 1
    for i in range(2000):
        _batch_size = 128
        batch_x, batch_y = mnist.train.next_batch(_batch_size)
        sess.run(train_op, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5, batch_size: _batch_size})
        if (i + 1) % 100 == 0:
            # loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0, batch_size: _batch_size})
            # acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0, batch_size: _batch_size})
            # print('Iter: %d' % ((i+1) * _batch_size), '| train loss: %.6f' % loss, '| train accuracy: %.6f' % acc)
            train_result = sess.run(merged, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0, batch_size: _batch_size})
            test_result = sess.run(merged, feed_dict={x: test_x, y: test_y, keep_prob: 1.0, batch_size: test_x.shape[0]})
            train_writer.add_summary(train_result, i + 1)
            test_writer.add_summary(test_result, i + 1)
    print("Optimization Finished!")
    # prediction
    print("Testing Accuracy:", sess.run(accuracy, feed_dict={x: test_x, y: test_y, keep_prob: 1.0, batch_size: test_x.shape[0]}))
Results
For how to render the TensorBoard computation graph and the training curves yourself, see the previous post.