Deep Neural Networks
  • Increasing a network's depth and width within reason raises its representational capacity and can yield a better-performing model.
Batch Normalization (BN)
  • A trick for training deep neural networks. Previously we normalized only the input layer; as the network gets deeper, the distribution of each intermediate layer's activations drifts after repeated matrix multiplications and nonlinearities. Batch normalization therefore normalizes the output of every layer in the network, which improves training (a minimal sketch of the transform follows this list).
  • It also speeds up training and can improve model accuracy.
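For reference, here is a minimal NumPy sketch of what the batch-normalization transform computes over one batch at training time (batch_norm, gamma, beta are illustrative names; the real Keras layer learns γ and β per feature and also tracks moving statistics for inference, which this sketch omits):

import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # x: activations of one layer, shape (batch_size, features)
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # standardize each feature
    return gamma * x_hat + beta              # learnable scale and shift

x = np.random.randn(32, 100) * 5 + 3         # a batch whose features are shifted and scaled
y = batch_norm(x)
print(y.mean(axis=0)[:3], y.std(axis=0)[:3]) # each feature is now ~0 mean, ~1 std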
Dropout
  • With Dropout, each training step does not use every neuron in every layer; instead a random subset of neurons is dropped for that step. The weights on those paths are not deleted, so the dropped neurons can be used again in later steps.
  • Dropout is an effective way to prevent overfitting. With a dropout rate of 50%, each training step randomly discards 50% of the hidden units, so the model cannot rely on all feature detectors acting together and repeatedly amplifying or suppressing particular features. With few training samples such co-adaptation overfits easily and generalizes poorly, and dropout avoids these problems during training.
  • Note: Dropout is applied only during training; at test time the full network is used (see the inverted-dropout sketch below).
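To make the train/test asymmetry concrete, here is a minimal sketch of inverted dropout, the variant that keras.layers.Dropout implements: kept units are rescaled by 1/(1-rate) during training so that activations need no rescaling at test time (the dropout helper below is an illustrative name, not a library function).

import numpy as np

def dropout(x, rate=0.5, training=True):
    if not training:
        return x                             # test time: every unit is kept
    mask = np.random.rand(*x.shape) >= rate  # drop roughly `rate` of the units
    return x * mask / (1.0 - rate)           # inverted dropout: rescale the kept units

x = np.ones((4, 8))
print(dropout(x, training=True))   # about half the entries are 0, the rest are 2.0
print(dropout(x, training=False))  # unchanged at test time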
Modify the model-construction part of the earlier code
  • 1. Add batch normalization to the model
from tensorflow import keras

# Build the model with tf.keras.models.Sequential()
# Build a deep neural network

model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(20):
    model.add(keras.layers.Dense(100, activation="relu"))
    # Add batch normalization (here placed after the activation)
    model.add(keras.layers.BatchNormalization())

    """
    To place batch normalization before the activation instead:
    model.add(keras.layers.Dense(100))
    model.add(keras.layers.BatchNormalization())
    model.add(keras.layers.Activation("relu"))
    """

model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
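Both placements are found in practice: the original batch-normalization paper applies BN before the nonlinearity, while placing it after the activation (as in the loop above) often works comparably well, so the choice can be treated as a hyperparameter.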
  • 2. Use the self-normalizing selu activation and add Dropout
# Build the model with tf.keras.models.Sequential()
# Build a deep neural network

model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(20):
    # selu: an activation function with built-in (self-)normalization
    model.add(keras.layers.Dense(100, activation="selu"))
# Add dropout only near the end of the network to prevent overfitting;
# rate is the fraction of units to drop
# Advantages of AlphaDropout over plain Dropout:
# 1. it keeps the mean and variance of the activations unchanged
# 2. so the self-normalizing property is preserved
model.add(keras.layers.AlphaDropout(rate=0.5))
# model.add(keras.layers.Dropout(rate=0.5))
model.add(keras.layers.Dense(10, activation="softmax"))

model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 100)               78500     
_________________________________________________________________
dense_1 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_2 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_3 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_4 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_5 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_6 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_7 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_8 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_9 (Dense)              (None, 100)               10100     
_________________________________________________________________
dense_10 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_11 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_12 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_13 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_14 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_15 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_16 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_17 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_18 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_19 (Dense)             (None, 100)               10100     
_________________________________________________________________
alpha_dropout (AlphaDropout) (None, 100)               0         
_________________________________________________________________
dense_20 (Dense)             (None, 10)                1010      
=================================================================
Total params: 271,410
Trainable params: 271,410
Non-trainable params: 0
_________________________________________________________________
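As a sanity check, the parameter counts are weights plus biases: the first Dense layer maps 784 inputs to 100 units (784 × 100 + 100 = 78,500), each later hidden layer maps 100 units to 100 (100 × 100 + 100 = 10,100), and the output layer has 100 × 10 + 10 = 1,010; AlphaDropout adds no parameters, which gives the 271,410 total.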
  • Set up callbacks and train the model
import os

# Start training
# epochs: go over the training set 10 times
# validation_data: the validation set is evaluated at the end of each epoch
# If loss and accuracy stop improving later in training, plain sgd may be
# stuck near a local minimum; switching the optimizer to adam can help

# callbacks: functions invoked automatically during training, e.g. to check
# whether the loss has reached a target value
# They are therefore passed to the training call, i.e. to fit()
# Here we use the TensorBoard, EarlyStopping and ModelCheckpoint callbacks

# TensorBoard needs a directory and ModelCheckpoint needs a file name,
# so create both first

logdir = "dnn-selu-dropout-callbacks"
if not os.path.exists(logdir):
    os.mkdir(logdir)
# Create the file path inside the log directory: os.path.join(a, b) gives a/b
output_model_file = os.path.join(logdir, "fashion_mnist_model.h5")


callbacks = [
    keras.callbacks.TensorBoard(log_dir=logdir),
    keras.callbacks.ModelCheckpoint(output_model_file,
                                    save_best_only=True),
    keras.callbacks.EarlyStopping(patience=5, min_delta=1e-3),
]
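# Note: EarlyStopping monitors val_loss by default and stops training once it
# fails to improve by at least min_delta (1e-3) for patience (5) consecutive
# epochs; ModelCheckpoint with save_best_only=True keeps only the checkpoint
# with the best val_loss so far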
history = model.fit(x_train_scaled, y_train, epochs=10,
                    validation_data=(x_valid_scaled, y_valid),
                    callbacks=callbacks)
# To view TensorBoard:
# 1. in the same environment, cd to the directory containing the log folder
# 2. run: tensorboard --logdir=dnn-selu-dropout-callbacks
# 3. open localhost:(port) in a browser
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
   32/55000 [..............................] - ETA: 1:49:26 - loss: 3.3157 - accuracy: 0.0625WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (0.277549). Check your callbacks.
55000/55000 [==============================] - 22s 397us/sample - loss: 0.6879 - accuracy: 0.7660 - val_loss: 0.6217 - val_accuracy: 0.8492
Epoch 2/10
55000/55000 [==============================] - 17s 306us/sample - loss: 0.4545 - accuracy: 0.8446 - val_loss: 0.5691 - val_accuracy: 0.8592
Epoch 3/10
55000/55000 [==============================] - 16s 298us/sample - loss: 0.4020 - accuracy: 0.8597 - val_loss: 0.5428 - val_accuracy: 0.8700
Epoch 4/10
55000/55000 [==============================] - 15s 282us/sample - loss: 0.3756 - accuracy: 0.8674 - val_loss: 0.5546 - val_accuracy: 0.8700
Epoch 5/10
55000/55000 [==============================] - 18s 325us/sample - loss: 0.3520 - accuracy: 0.8747 - val_loss: 0.5279 - val_accuracy: 0.8738
Epoch 6/10
55000/55000 [==============================] - 19s 338us/sample - loss: 0.3369 - accuracy: 0.8810 - val_loss: 0.5400 - val_accuracy: 0.8814
Epoch 7/10
55000/55000 [==============================] - 18s 332us/sample - loss: 0.3234 - accuracy: 0.8847 - val_loss: 0.5884 - val_accuracy: 0.8754
Epoch 8/10
55000/55000 [==============================] - 16s 284us/sample - loss: 0.3107 - accuracy: 0.8887 - val_loss: 0.5305 - val_accuracy: 0.8790
Epoch 9/10
55000/55000 [==============================] - 17s 312us/sample - loss: 0.2961 - accuracy: 0.8943 - val_loss: 0.4887 - val_accuracy: 0.8856
Epoch 10/10
55000/55000 [==============================] - 18s 323us/sample - loss: 0.2901 - accuracy: 0.8951 - val_loss: 0.6365 - val_accuracy: 0.8734
  • Training results
import pandas as pd
import matplotlib.pyplot as plt

def plot_learning_curves(history):
    # Convert history.history to a DataFrame and plot it
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    # gca: get current axes; gcf: get current figure
    plt.gca().set_ylim(0, 3)
    plt.show()
plot_learning_curves(history)

# Why the loss barely changes early in training:
# 1. there are many parameters and training is not yet sufficient
# 2. vanishing gradients

[Figure: learning curves — training/validation loss and accuracy per epoch]

  • Test results
model.evaluate(x_test_scaled, y_test, verbose=2)
# verbose in evaluate controls the logging output:
# verbose = 0: no log output to stdout
# verbose = 1: progress bar (the default)
# verbose = 2: a single summary line per call, as used here
10000/1 - 1s - loss: 0.3931 - accuracy: 0.8636

[0.691246773815155, 0.8636]
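
Since ModelCheckpoint saved the best-validation-loss weights above, that checkpoint can be reloaded and evaluated instead of the final-epoch model. A minimal sketch, assuming output_model_file and the test tensors from the code above:

from tensorflow import keras

# Reload the best checkpoint written by ModelCheckpoint (save_best_only=True)
best_model = keras.models.load_model(output_model_file)
best_model.evaluate(x_test_scaled, y_test, verbose=2)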