0. Overview
A model trained with TensorFlow can be saved, either for your own later use or to share with others. If training is time-consuming, saving also lets you resume from a checkpoint instead of starting over. There are two ways to share a model: share the source code, in which case the recipient must train from scratch; or share the trained, saved model (containing the weights, hyperparameters, and so on), which the recipient can use directly or after a little extra training.
TensorFlow offers several ways to save a model. This walkthrough uses tf.keras.
1. Import the required libraries
import tensorflow as tf
import os
print(tf.__name__, ": ", tf.__version__, sep="")
Output:
tensorflow: 2.2.0
2. Load the dataset
This walkthrough uses the MNIST dataset as its running example, loaded from the copy bundled with tf.keras.
(trainImages, trainLabels),(testImages, testLabels) = tf.keras.datasets.mnist.load_data()
for i in [trainImages, trainLabels, testImages, testLabels]:
    print(i.shape)
trainImages = trainImages.reshape(-1,28*28)/255.0
testImages = testImages.reshape(-1,28*28)/255.0
for i in [trainImages, trainLabels, testImages, testLabels]:
    print(i.shape)
Output:
(60000, 28, 28)
(60000,)
(10000, 28, 28)
(10000,)
(60000, 784)
(60000,)
(10000, 784)
(10000,)
3. Build the model
# Define the model architecture
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
model.summary()
Output:
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_6 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_3 (Dropout) (None, 512) 0
_________________________________________________________________
dense_7 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
# Compile the model
model.compile(optimizer="adam",
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"])
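Because the last Dense layer has no softmax activation, the model outputs raw logits, which is why the loss is constructed with from_logits=True. A minimal numeric sketch (with made-up logit values) of what that loss computes:

```python
import numpy as np

# Made-up logits for one sample in a 3-class problem; true class index 0.
logits = np.array([2.0, 1.0, 0.1])
label = 0

# With from_logits=True, the loss applies softmax itself ...
probs = np.exp(logits) / np.exp(logits).sum()
# ... then takes -log of the probability assigned to the true class.
loss = -np.log(probs[label])
print(loss)  # roughly 0.417
```

Feeding the same logits and label to tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) gives the same value.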
4. Train the model and save checkpoints
Save a checkpoint during training:
checkpointPath = "training_1/cp.ckpt"
checkpointDir = os.path.dirname(checkpointPath)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpointPath,
save_weights_only=True,
verbose=1)
model.fit(trainImages, trainLabels, epochs=10,
validation_data=(testImages, testLabels),
callbacks=[cp_callback])
Output:
Epoch 1/10
1871/1875 [============================>.] - ETA: 0s - loss: 0.2221 - accuracy: 0.9347
Epoch 00001: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2219 - accuracy: 0.9347 - val_loss: 0.1122 - val_accuracy: 0.9638
Epoch 2/10
1865/1875 [============================>.] - ETA: 0s - loss: 0.0947 - accuracy: 0.9702
Epoch 00002: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0947 - accuracy: 0.9702 - val_loss: 0.0743 - val_accuracy: 0.9776
Epoch 3/10
1854/1875 [============================>.] - ETA: 0s - loss: 0.0668 - accuracy: 0.9793
Epoch 00003: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0666 - accuracy: 0.9793 - val_loss: 0.0740 - val_accuracy: 0.9765
Epoch 4/10
1859/1875 [============================>.] - ETA: 0s - loss: 0.0545 - accuracy: 0.9826
Epoch 00004: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0543 - accuracy: 0.9826 - val_loss: 0.0665 - val_accuracy: 0.9808
Epoch 5/10
1869/1875 [============================>.] - ETA: 0s - loss: 0.0435 - accuracy: 0.9864
Epoch 00005: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0435 - accuracy: 0.9864 - val_loss: 0.0603 - val_accuracy: 0.9835
Epoch 6/10
1866/1875 [============================>.] - ETA: 0s - loss: 0.0365 - accuracy: 0.9880
Epoch 00006: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0365 - accuracy: 0.9880 - val_loss: 0.0598 - val_accuracy: 0.9822
Epoch 7/10
1870/1875 [============================>.] - ETA: 0s - loss: 0.0318 - accuracy: 0.9894
Epoch 00007: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0318 - accuracy: 0.9894 - val_loss: 0.0630 - val_accuracy: 0.9813
Epoch 8/10
1874/1875 [============================>.] - ETA: 0s - loss: 0.0278 - accuracy: 0.9908
Epoch 00008: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0278 - accuracy: 0.9908 - val_loss: 0.0700 - val_accuracy: 0.9821
Epoch 9/10
1860/1875 [============================>.] - ETA: 0s - loss: 0.0229 - accuracy: 0.9924
Epoch 00009: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0227 - accuracy: 0.9925 - val_loss: 0.0824 - val_accuracy: 0.9798
Epoch 10/10
1875/1875 [==============================] - ETA: 0s - loss: 0.0224 - accuracy: 0.9925
Epoch 00010: saving model to training_1/cp.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0224 - accuracy: 0.9925 - val_loss: 0.0665 - val_accuracy: 0.9824
ls {checkpointDir}
Output:
2020-06-19 16:20 71 checkpoint
2020-06-19 16:20 4,886,685 cp.ckpt.data-00000-of-00001
2020-06-19 16:20 1,222 cp.ckpt.index
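To see what a checkpoint actually stores, tf.train.list_variables can list the variable names and shapes inside it. A self-contained sketch; the tiny model and the demo_ckpt path are made up for illustration:

```python
import tensorflow as tf

# A tiny throwaway model; the path "demo_ckpt/cp.ckpt" is hypothetical.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(4, input_shape=(3,)),
])
model.save_weights("demo_ckpt/cp.ckpt")

# Each entry is a (variable name, shape) pair stored in the checkpoint.
for name, shape in tf.train.list_variables("demo_ckpt/cp.ckpt"):
    print(name, shape)
```

For the real checkpoint above you would pass checkpointPath instead; the listing shows only weights, not the architecture.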
5. Create a new model and load the saved weights
5.1 Create a new model
To restore the saved weights, you first need to build a model with exactly the same architecture.
new_model = tf.keras.models.Sequential([
tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
new_model.summary()
# Compile the model
new_model.compile(optimizer="adam",
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"])
Output:
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_12 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_6 (Dropout) (None, 512) 0
_________________________________________________________________
dense_13 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
5.2 Evaluate the new model
loss, acc = new_model.evaluate(testImages, testLabels, verbose=2)
print("Untrained model accuracy: {:5.2f}%".format(100*acc))
Output:
313/313 - 0s - loss: 2.3328 - accuracy: 0.0862
Untrained model accuracy:  8.62%
As the output shows, the new model's weights are randomly initialized, so its accuracy is only 8.62%. For a 10-class task, random guessing hits the right class about one time in ten, i.e. 10%.
5.3 Load the saved weights
new_model.load_weights(checkpointPath)
loss, acc = new_model.evaluate(testImages, testLabels, verbose=2)
print("Restored model accuracy: {:5.2f}%".format(100*acc))
Output:
313/313 - 0s - loss: 0.0665 - accuracy: 0.9824
Restored model accuracy: 98.24%
After loading the saved weights and evaluating again, accuracy reaches 98.24%.
6. Checkpoint-saving options
checkpointPath = "training_2/cp-{epoch:04d}.ckpt"
checkpointDir = os.path.dirname(checkpointPath)
# Create a callback that saves the model weights every 5 epochs
# (note: in newer TF versions `period` is deprecated in favor of `save_freq`)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpointPath,
verbose=1,
save_weights_only=True,
period=5)
# Build the model
new_model = tf.keras.models.Sequential([
tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
new_model.summary()
# Compile the model
new_model.compile(optimizer="adam",
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"])
# Save the initial weights, then train the model
new_model.save_weights(checkpointPath.format(epoch=0))
new_model.fit(trainImages, trainLabels,epochs=50,callbacks=[cp_callback],
validation_data=(testImages, testLabels),
verbose=1)
Output:
Model: "sequential_8"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_16 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_8 (Dropout) (None, 512) 0
_________________________________________________________________
dense_17 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
Epoch 1/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2222 - accuracy: 0.9338 - val_loss: 0.1010 - val_accuracy: 0.9687
Epoch 2/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0982 - accuracy: 0.9698 - val_loss: 0.0802 - val_accuracy: 0.9750
Epoch 3/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0700 - accuracy: 0.9781 - val_loss: 0.0696 - val_accuracy: 0.9800
Epoch 4/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0533 - accuracy: 0.9826 - val_loss: 0.0741 - val_accuracy: 0.9780
Epoch 5/50
1862/1875 [============================>.] - ETA: 0s - loss: 0.0454 - accuracy: 0.9849
Epoch 00005: saving model to training_2/cp-0005.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0454 - accuracy: 0.9850 - val_loss: 0.0701 - val_accuracy: 0.9786
Epoch 6/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0377 - accuracy: 0.9879 - val_loss: 0.0629 - val_accuracy: 0.9805
Epoch 7/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0309 - accuracy: 0.9893 - val_loss: 0.0701 - val_accuracy: 0.9808
Epoch 8/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0273 - accuracy: 0.9908 - val_loss: 0.0659 - val_accuracy: 0.9808
Epoch 9/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0254 - accuracy: 0.9914 - val_loss: 0.0736 - val_accuracy: 0.9800
Epoch 10/50
1864/1875 [============================>.] - ETA: 0s - loss: 0.0244 - accuracy: 0.9916
Epoch 00010: saving model to training_2/cp-0010.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0243 - accuracy: 0.9916 - val_loss: 0.0710 - val_accuracy: 0.9818
Epoch 11/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0193 - accuracy: 0.9934 - val_loss: 0.0785 - val_accuracy: 0.9812
Epoch 12/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0187 - accuracy: 0.9933 - val_loss: 0.0884 - val_accuracy: 0.9779
Epoch 13/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0199 - accuracy: 0.9934 - val_loss: 0.0668 - val_accuracy: 0.9833
Epoch 14/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0148 - accuracy: 0.9951 - val_loss: 0.0878 - val_accuracy: 0.9812
Epoch 15/50
1860/1875 [============================>.] - ETA: 0s - loss: 0.0159 - accuracy: 0.9947
Epoch 00015: saving model to training_2/cp-0015.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0159 - accuracy: 0.9947 - val_loss: 0.0875 - val_accuracy: 0.9820
Epoch 16/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0143 - accuracy: 0.9952 - val_loss: 0.0884 - val_accuracy: 0.9812
Epoch 17/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0169 - accuracy: 0.9948 - val_loss: 0.0834 - val_accuracy: 0.9830
Epoch 18/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0133 - accuracy: 0.9956 - val_loss: 0.0943 - val_accuracy: 0.9809
Epoch 19/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0142 - accuracy: 0.9951 - val_loss: 0.0905 - val_accuracy: 0.9827
Epoch 20/50
1867/1875 [============================>.] - ETA: 0s - loss: 0.0137 - accuracy: 0.9957
Epoch 00020: saving model to training_2/cp-0020.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0137 - accuracy: 0.9958 - val_loss: 0.0849 - val_accuracy: 0.9844
Epoch 21/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0121 - accuracy: 0.9957 - val_loss: 0.0989 - val_accuracy: 0.9823
Epoch 22/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0134 - accuracy: 0.9958 - val_loss: 0.0869 - val_accuracy: 0.9831
Epoch 23/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0132 - accuracy: 0.9959 - val_loss: 0.0996 - val_accuracy: 0.9822
Epoch 24/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0107 - accuracy: 0.9967 - val_loss: 0.1022 - val_accuracy: 0.9844
Epoch 25/50
1874/1875 [============================>.] - ETA: 0s - loss: 0.0133 - accuracy: 0.9958
Epoch 00025: saving model to training_2/cp-0025.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0133 - accuracy: 0.9958 - val_loss: 0.1029 - val_accuracy: 0.9825
Epoch 26/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0112 - accuracy: 0.9962 - val_loss: 0.1101 - val_accuracy: 0.9824
Epoch 27/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0111 - accuracy: 0.9965 - val_loss: 0.1029 - val_accuracy: 0.9826
Epoch 28/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0105 - accuracy: 0.9965 - val_loss: 0.0998 - val_accuracy: 0.9839
Epoch 29/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0104 - accuracy: 0.9965 - val_loss: 0.1135 - val_accuracy: 0.9827
Epoch 30/50
1866/1875 [============================>.] - ETA: 0s - loss: 0.0103 - accuracy: 0.9969
Epoch 00030: saving model to training_2/cp-0030.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0103 - accuracy: 0.9969 - val_loss: 0.1044 - val_accuracy: 0.9845
Epoch 31/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0114 - accuracy: 0.9968 - val_loss: 0.1160 - val_accuracy: 0.9825
Epoch 32/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0105 - accuracy: 0.9969 - val_loss: 0.1220 - val_accuracy: 0.9827
Epoch 33/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0099 - accuracy: 0.9969 - val_loss: 0.1256 - val_accuracy: 0.9823
Epoch 34/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0101 - accuracy: 0.9969 - val_loss: 0.1270 - val_accuracy: 0.9819
Epoch 35/50
1857/1875 [============================>.] - ETA: 0s - loss: 0.0114 - accuracy: 0.9970
Epoch 00035: saving model to training_2/cp-0035.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0113 - accuracy: 0.9970 - val_loss: 0.1189 - val_accuracy: 0.9835
Epoch 36/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0091 - accuracy: 0.9974 - val_loss: 0.1168 - val_accuracy: 0.9827
Epoch 37/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0106 - accuracy: 0.9970 - val_loss: 0.1390 - val_accuracy: 0.9823
Epoch 38/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0096 - accuracy: 0.9970 - val_loss: 0.1324 - val_accuracy: 0.9841
Epoch 39/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0088 - accuracy: 0.9975 - val_loss: 0.1402 - val_accuracy: 0.9826
Epoch 40/50
1873/1875 [============================>.] - ETA: 0s - loss: 0.0124 - accuracy: 0.9966
Epoch 00040: saving model to training_2/cp-0040.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0124 - accuracy: 0.9966 - val_loss: 0.1290 - val_accuracy: 0.9843
Epoch 41/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0096 - accuracy: 0.9974 - val_loss: 0.1353 - val_accuracy: 0.9829
Epoch 42/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0080 - accuracy: 0.9976 - val_loss: 0.1411 - val_accuracy: 0.9833
Epoch 43/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0096 - accuracy: 0.9972 - val_loss: 0.1459 - val_accuracy: 0.9839
Epoch 44/50
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0105 - accuracy: 0.9972 - val_loss: 0.1257 - val_accuracy: 0.9842
Epoch 45/50
1865/1875 [============================>.] - ETA: 0s - loss: 0.0104 - accuracy: 0.9971
Epoch 00045: saving model to training_2/cp-0045.ckpt
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0103 - accuracy: 0.9971 - val_loss: 0.1407 - val_accuracy: 0.9832
Epoch 46/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0098 - accuracy: 0.9974 - val_loss: 0.1356 - val_accuracy: 0.9841
Epoch 47/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0079 - accuracy: 0.9979 - val_loss: 0.1445 - val_accuracy: 0.9837
Epoch 48/50
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0097 - accuracy: 0.9974 - val_loss: 0.1407 - val_accuracy: 0.9831
Epoch 49/50
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0107 - accuracy: 0.9972 - val_loss: 0.1434 - val_accuracy: 0.9832
Epoch 50/50
1867/1875 [============================>.] - ETA: 0s - loss: 0.0082 - accuracy: 0.9978
Epoch 00050: saving model to training_2/cp-0050.ckpt
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0082 - accuracy: 0.9979 - val_loss: 0.1601 - val_accuracy: 0.9833
ls {checkpointDir}
Output:
2020-06-19 16:46 81 checkpoint
2020-06-19 16:42 1,628,730 cp-0000.ckpt.data-00000-of-00001
2020-06-19 16:42 402 cp-0000.ckpt.index
2020-06-19 16:43 4,886,697 cp-0005.ckpt.data-00000-of-00001
2020-06-19 16:43 1,222 cp-0005.ckpt.index
2020-06-19 16:43 4,886,697 cp-0010.ckpt.data-00000-of-00001
2020-06-19 16:43 1,222 cp-0010.ckpt.index
2020-06-19 16:44 4,886,697 cp-0015.ckpt.data-00000-of-00001
2020-06-19 16:44 1,222 cp-0015.ckpt.index
2020-06-19 16:44 4,886,697 cp-0020.ckpt.data-00000-of-00001
2020-06-19 16:44 1,222 cp-0020.ckpt.index
2020-06-19 16:45 4,886,697 cp-0025.ckpt.data-00000-of-00001
2020-06-19 16:45 1,222 cp-0025.ckpt.index
2020-06-19 16:45 4,886,697 cp-0030.ckpt.data-00000-of-00001
2020-06-19 16:45 1,222 cp-0030.ckpt.index
2020-06-19 16:45 4,886,697 cp-0035.ckpt.data-00000-of-00001
2020-06-19 16:45 1,222 cp-0035.ckpt.index
2020-06-19 16:46 4,886,697 cp-0040.ckpt.data-00000-of-00001
2020-06-19 16:46 1,222 cp-0040.ckpt.index
2020-06-19 16:46 4,886,697 cp-0045.ckpt.data-00000-of-00001
2020-06-19 16:46 1,222 cp-0045.ckpt.index
2020-06-19 16:46 4,886,697 cp-0050.ckpt.data-00000-of-00001
2020-06-19 16:46 1,222 cp-0050.ckpt.index
latest = tf.train.latest_checkpoint(checkpoint_dir=checkpointDir)
latest
Output:
'training_2\\cp-0050.ckpt'
Note: by default, the TensorFlow checkpoint format keeps only the 5 most recent checkpoints; older ones are overwritten.
The files saved above are binary checkpoint-format files that contain only the trained weights.
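The per-epoch checkpoints above combine with tf.train.latest_checkpoint to resume from the newest save. A self-contained sketch with a toy model; the demo_ckpts paths are made up for illustration:

```python
import numpy as np
import tensorflow as tf

def build_model():
    # Toy stand-in for the real architecture; restoring requires rebuilding
    # the same structure before calling load_weights.
    return tf.keras.models.Sequential([
        tf.keras.layers.Dense(4, input_shape=(3,)),
    ])

model = build_model()
model.save_weights("demo_ckpts/cp-0001.ckpt")
model.save_weights("demo_ckpts/cp-0002.ckpt")

# latest_checkpoint reads the "checkpoint" bookkeeping file and returns
# the prefix of the most recently saved checkpoint.
latest = tf.train.latest_checkpoint("demo_ckpts")

restored = build_model()
restored.load_weights(latest)
```

With the real training run above, `new_model.load_weights(latest)` restores the epoch-50 weights the same way.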
7. Save the entire model
Calling model.save() stores the model architecture, weights, and training configuration in a single artifact. A model saved this way needs no source code and no freshly built model to load it: the whole model can be loaded directly, on the same platform or a different one. For example, a model trained and saved on Windows can be loaded on Android.
A whole model can be saved in two formats: SavedModel and HDF5. SavedModel is the default in TensorFlow 2.
7.1 SavedModel format
# Build the model
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
model.summary()
# Compile the model
model.compile(optimizer="adam",
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"])
# Train the model
model.fit(trainImages, trainLabels, epochs=5)
!mkdir -p saved_model
model.save("saved_model/my_model")
Output:
Model: "sequential_9"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_18 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_9 (Dropout) (None, 512) 0
_________________________________________________________________
dense_19 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.2163 - accuracy: 0.9355
Epoch 2/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0975 - accuracy: 0.9703
Epoch 3/5
1875/1875 [==============================] - 4s 2ms/step - loss: 0.0687 - accuracy: 0.9788
Epoch 4/5
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0537 - accuracy: 0.9824
Epoch 5/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0424 - accuracy: 0.9867
INFO:tensorflow:Assets written to: saved_model/my_model\assets
ls saved_model/my_model
Output:
2020-06-19 17:48 <DIR> assets
2020-06-19 17:48 80,844 saved_model.pb
2020-06-19 17:48 <DIR> variables
# Load the saved model
new_model = tf.keras.models.load_model("saved_model/my_model")
new_model.summary()
Output:
Model: "sequential_9"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_18 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_9 (Dropout) (None, 512) 0
_________________________________________________________________
dense_19 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
# Evaluate the loaded model
loss, acc = new_model.evaluate(testImages,testLabels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100*acc))
print(new_model.predict(testImages).shape)
Output:
313/313 - 0s - loss: 0.0782 - accuracy: 0.9780
Restored model, accuracy: 97.80%
(10000, 10)
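Since the network's last layer returns logits rather than probabilities, predict() gives a (10000, 10) array of raw scores. A small numpy sketch (with made-up logits) of turning such scores into class labels:

```python
import numpy as np

# Two made-up rows of logits for a 3-class problem.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 3.0, 0.4]])

# Softmax per row turns logits into probabilities ...
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
# ... and argmax picks the most likely class for each sample.
preds = probs.argmax(axis=1)
print(preds)  # [0 1]
```

Because softmax is monotonic, taking argmax of the raw logits directly yields the same labels; the softmax step matters only when you need calibrated probabilities.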
7.2 HDF5 format
# Build the model
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
model.summary()
# Compile the model
model.compile(optimizer="adam",
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"])
# Train the model
model.fit(trainImages, trainLabels, epochs=5)
# Save the model in HDF5 format
model.save("my_model.h5")
Output:
Model: "sequential_10"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_20 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_10 (Dropout) (None, 512) 0
_________________________________________________________________
dense_21 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.2164 - accuracy: 0.9366
Epoch 2/5
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0980 - accuracy: 0.9697
Epoch 3/5
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0709 - accuracy: 0.9775
Epoch 4/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0540 - accuracy: 0.9827
Epoch 5/5
1875/1875 [==============================] - 5s 2ms/step - loss: 0.0422 - accuracy: 0.9862
# Load the saved HDF5 model
new_model = tf.keras.models.load_model('my_model.h5')
new_model.summary()
Output:
Model: "sequential_10"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_20 (Dense) (None, 512) 401920
_________________________________________________________________
dropout_10 (Dropout) (None, 512) 0
_________________________________________________________________
dense_21 (Dense) (None, 10) 5130
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
# Evaluate the model
loss, acc = new_model.evaluate(testImages, testLabels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100*acc))
Output:
313/313 - 0s - loss: 0.0671 - accuracy: 0.9791
Restored model, accuracy: 97.91%
What Keras saves along with the model:
- the weight values
- the model architecture
- the training configuration (the arguments passed to compile)
- the optimizer and its state (so training can resume exactly where it left off)
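One way to check the list above is to round-trip a model through model.save() and confirm the restored copy reproduces the original's outputs without being recompiled. A self-contained sketch with a toy model; the demo_full_model path is made up for illustration:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(4, input_shape=(3,)),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(2, 3).astype("float32")
model.save("demo_full_model")  # SavedModel format (a directory)

# load_model restores architecture, weights, and the compile configuration.
restored = tf.keras.models.load_model("demo_full_model")
print(np.allclose(model.predict(x), restored.predict(x)))  # True
```

The restored model can also continue training with fit() immediately, since the optimizer state travels with the save.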