1D CNN+2D CNN+3D CNN

How 1D, 2D, and 3D CNNs differ
In 1D convolution, the kernel slides along one direction; the input and output of a 1D CNN are 2-dimensional. It is mainly used for time-series data.
In 2D convolution, the kernel slides along two directions; the input and output of a 2D CNN are 3-dimensional. It is mainly used for image data.
In 3D convolution, the kernel slides along three directions; the input and output of a 3D CNN are 4-dimensional. It is mainly used for volumetric image data (MRI, CT scans). A shape sketch of all three cases is given below.
Reference: convolutional neural networks
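
To make these dimensionalities concrete, here is a minimal sketch (assuming tf.keras; the input shapes are made up) that prints the output shape of a Conv1D, a Conv2D, and a Conv3D layer:

# Sketch only: kernel size 3, default 'valid' padding, hypothetical input shapes.
import numpy as np
import tensorflow as tf

x1 = np.random.rand(1, 100, 8).astype("float32")         # (batch, steps, channels)
print(tf.keras.layers.Conv1D(16, 3)(x1).shape)            # (1, 98, 16): kernel moves along 1 axis

x2 = np.random.rand(1, 28, 28, 3).astype("float32")       # (batch, height, width, channels)
print(tf.keras.layers.Conv2D(16, 3)(x2).shape)            # (1, 26, 26, 16): kernel moves along 2 axes

x3 = np.random.rand(1, 16, 16, 16, 3).astype("float32")   # (batch, depth, height, width, channels)
print(tf.keras.layers.Conv3D(16, 3)(x3).shape)            # (1, 14, 14, 14, 16): kernel moves along 3 axes
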
1. padding
In convolution, the filter (kernel) size is usually odd, e.g. 3x3 or 5x5. This has two benefits:

  • The feature map (in the 2D case) then has a single center pixel, which is convenient for referring to the filter's position.
  • Without padding, convolution shrinks the data. Taking 2D convolution as an example, with an n×n input and an f×f filter, the output after convolution is (n−f+1)×(n−f+1).
  • To avoid this shrinkage, padding of length p can be applied: in the 2D case, p rows/columns are "added" on the top, bottom, left, and right, giving a new input of size (n+2p)×(n+2p) and a convolution output of (n+2p−f+1)×(n+2p−f+1).
  • For the convolution not to shrink the data, p should be (f−1)/2, where f is the filter size. When f is odd, the padding is symmetric; otherwise you would end up padding, say, 1 row on top and 2 on the bottom, 1 column on the left and 2 on the right, which breaks the structure of the original data (a small numeric check is sketched below).
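
As that check, here is a sketch with made-up sizes (n=6, f=3) showing that symmetric padding of p=(f−1)/2 keeps the convolution output at the original size:

# Sketch: pad an n x n array by p = (f - 1) // 2 on every side and check the output size.
import numpy as np

n, f = 6, 3
p = (f - 1) // 2                     # f is odd, so the same amount is padded on each side
x_padded = np.pad(np.zeros((n, n)), pad_width=p)
print(x_padded.shape)                # (8, 8), i.e. (n + 2p) x (n + 2p)
print(x_padded.shape[0] - f + 1)     # 6: n + 2p - f + 1 equals the original n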

2. stride
The stride s of a convolution is how far the filter moves over the input at each step in the horizontal/vertical direction. Building on the padding formula above, the final output size of the convolution is:

⌊(n+2p−f)/s + 1⌋ × ⌊(n+2p−f)/s + 1⌋

where ⌊·⌋ denotes rounding down, i.e. the floor operation (Python's // floor division); a small helper implementing it is sketched below.
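
The helper (a sketch; conv_output_size is a made-up name) can be used to check output sizes before building a network:

# Sketch: output size per spatial axis, floor((n + 2p - f) / s) + 1.
def conv_output_size(n, f, p=0, s=1):
    return (n + 2 * p - f) // s + 1       # // is Python's floor division

print(conv_output_size(6, 3))             # 4  (n=6, f=3, p=0, s=1)
print(conv_output_size(7, 3, p=1, s=2))   # 4  = floor((7 + 2 - 3) / 2) + 1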

3. channel

  • The channel is usually the last dimension of the data (the third, for 2D image data); in computer vision, RGB corresponds to 3 channels.
  • Example: given an image of size 6×6×3 and a filter of size 3×3×nc, nc is the channel size of the filter and must equal the channel size of the input, i.e. nc=3.
  • With k filters of size 3×3×nc, the output after convolution is 4×4×k (here p=0 and s=1), where k is the channel size of the output. In general, k stands for the k features extracted by k filters; e.g. k=128 means 128 filters of size 3×3 extract 128 features, and the convolution output is 4×4×128 (this example is reproduced in the sketch below).
  • In a multi-layer convolutional network, taking computer vision as an example, the height and width of the image usually shrink layer by layer while the number of channels grows.
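
The 6×6×3 example can be reproduced directly (a sketch assuming tf.keras, with k=128 as in the text):

# Sketch: k filters of size 3 x 3 x nc applied to a 6 x 6 x 3 input, with p=0 and s=1.
import numpy as np
import tensorflow as tf

x = np.random.rand(1, 6, 6, 3).astype("float32")               # one 6x6 RGB image
conv = tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3))
print(conv(x).shape)                                            # (1, 4, 4, 128): output channels = k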

4. pooling

  • Besides convolutional layers, convolutional networks use pooling layers to shrink the data, speed up computation, and make the extracted features more robust. Pooling has no parameters to learn; it is a static property of the network.
  • In a pooling layer, the dimensions change in the same way as for convolution. The number of channels after pooling equals the number of input channels, because max pooling is applied to each channel independently.
  • With f=2 and s=2, the spatial dimensions are halved, where f is the pooling filter size and s the pooling stride (see the sketch below).
    Reference: CNN explained in detail
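
A minimal pooling sketch (assuming tf.keras and a hypothetical input shape) showing the f=2, s=2 halving and the unchanged channel count:

# Sketch: max pooling with pool size 2 and stride 2 halves height and width, per channel.
import numpy as np
import tensorflow as tf

x = np.random.rand(1, 8, 8, 16).astype("float32")
pool = tf.keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2))
print(pool(x).shape)   # (1, 4, 4, 16): spatial dims halved, 16 channels preserved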

Comparing a 2D CNN and a 3D CNN on an example
Dataset: 3D MNIST
Environment: Python 3.7, TensorFlow 2.1, Keras 2.3.1
2D CNN

# Imports
from __future__ import division, print_function, absolute_import

from keras.models import Sequential, model_from_json
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization
from keras.optimizers import RMSprop
from keras.preprocessing.image import ImageDataGenerator
from keras.utils.np_utils import to_categorical
from keras.callbacks import ReduceLROnPlateau, TensorBoard

import h5py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')

from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split

# Hyperparameters
batch_size = 64
epochs = 20
# Load the local 3D MNIST dataset
with h5py.File("full_dataset_vectors.h5","r") as h5:
    X_train, y_train = h5["X_train"][:], h5["y_train"][:]
    X_test, y_test = h5["X_test"][:], h5["y_test"][:]
# Convert the training labels to one-hot vectors
y_train = to_categorical(y_train, num_classes=10)

# A 2D CNN needs 3-D samples: treat each 16x16x16 volume as a 16x16 image with 16 channels
X_train = X_train.reshape(-1, 16, 16, 16)
X_test = X_test.reshape(-1, 16, 16, 16)
X_train,X_val,y_train,y_val = train_test_split(X_train, y_train,
                                              test_size=0.25,
                                              random_state=42)
# Helper that builds a Conv2D layer with 'same' padding
def Conv(filters=16, kernel_size=(3,3), activation='relu', input_shape=None):
    if input_shape:
        return Conv2D(filters=filters, kernel_size = kernel_size, padding='Same'
                      , activation=activation, input_shape=input_shape)
    else:
        return Conv2D(filters=filters, kernel_size = kernel_size, padding='Same'
                      , activation=activation)
# Define the model architecture
def CNN(input_dim, num_classes):
    model = Sequential()

    model.add((Conv(8, (3, 3), input_shape=input_dim)))
    model.add((Conv(16, (3, 3))))
    # model.add(BatchNormalization())
    model.add(MaxPool2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    model.add(Conv(32, (3, 3)))
    model.add(Conv(64, (3, 3)))
    model.add(BatchNormalization())
    model.add(MaxPool2D())
    model.add(Dropout(0.25))

    model.add(Flatten())

    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(num_classes, activation='softmax'))

    return model
# Training, evaluation, model saving and loading

def train(optimizer, scheduler, gen):
    global model
    tensorboard = TensorBoard()
    print("Training...Please wait")
    # Note: the optimizer argument is not used here; the model is compiled with 'adam'.
    model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=["accuracy"])

    model.fit_generator(gen.flow(X_train, y_train, batch_size=batch_size),
                        epochs=epochs, validation_data=(X_val, y_val),
                        verbose=2, steps_per_epoch=X_train.shape[0] // batch_size,
                        callbacks=[scheduler, tensorboard])


def evaluate():
    global model

    pred = model.predict(X_test)
    pred = np.argmax(pred, axis=1)

    print(accuracy_score(pred, y_test))

    # Heat map

    array = confusion_matrix(y_test, pred)
    cm = pd.DataFrame(array, index=range(10), columns=range(10))
    plt.figure(figsize=(20, 20))
    sns.heatmap(cm, annot=True)
    plt.show()


def save_model():
    global model

    model_json = model.to_json()
    with open('model_2D.json', 'w') as f:
        f.write(model_json)

    model.save_weights('model_2D.h5')

    print("Model Saved")


def load_model():
    f = open("model_2D.json", "r")
    model_json = f.read()
    f.close()

    loaded_model = model_from_json(model_json)
    loaded_model.load_weights('model_2D.h5')

    print("Model Loaded.")

    return loaded_model


if __name__ == '__main__':
    optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
    # With this TF/Keras version the metric is reported as 'val_accuracy', which is why the
    # "Reduce LR on plateau conditioned on metric `val_acc`" warnings appear in the log below.
    scheduler = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.5, min_lr=1e-5)

    model = CNN((16, 16, 16), 10)

    gen = ImageDataGenerator(rotation_range=10, zoom_range=0.1, width_shift_range=0.1, height_shift_range=0.1)
    gen.fit(X_train)

    train(optimizer, scheduler, gen)
    evaluate()
    save_model()

Results

Training...Please wait
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 16, 16, 8)         1160      
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 16, 16, 16)        1168      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 8, 8, 16)          0         
_________________________________________________________________
dropout (Dropout)            (None, 8, 8, 16)          0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 8, 8, 32)          4640      
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 8, 8, 64)          18496     
_________________________________________________________________
batch_normalization (BatchNo (None, 8, 8, 64)          256       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 4, 4, 64)          0         
_________________________________________________________________
flatten (Flatten)            (None, 1024)              0         
_________________________________________________________________
dense (Dense)                (None, 4096)              4198400   
_________________________________________________________________
dropout_2 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 1024)              4195328   
_________________________________________________________________
dropout_3 (Dropout)          (None, 1024)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                10250     
=================================================================
Total params: 8,429,698
Trainable params: 8,429,570
Non-trainable params: 128
_________________________________________________________________
Train for 117 steps, validate on 2500 samples
Epoch 1/20
117/117 - 9s - loss: 1.9939 - accuracy: 0.3085 - val_loss: 2.2722 - val_accuracy: 0.1252
Epoch 2/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.4384 - accuracy: 0.4944 - val_loss: 1.9317 - val_accuracy: 0.3584
Epoch 3/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.3030 - accuracy: 0.5395 - val_loss: 1.6192 - val_accuracy: 0.4788
Epoch 4/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.2372 - accuracy: 0.5594 - val_loss: 1.3695 - val_accuracy: 0.5808
Epoch 5/20
117/117 - 5s - loss: 1.1820 - accuracy: 0.5783 - val_loss: 1.1514 - val_accuracy: 0.6184
Epoch 6/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.1672 - accuracy: 0.5901 - val_loss: 1.0569 - val_accuracy: 0.6252
Epoch 7/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.1454 - accuracy: 0.5925 - val_loss: 1.0906 - val_accuracy: 0.6112
Epoch 8/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.1074 - accuracy: 0.6065 - val_loss: 0.9975 - val_accuracy: 0.6516
Epoch 9/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.0965 - accuracy: 0.6093 - val_loss: 0.9653 - val_accuracy: 0.6644
Epoch 10/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 4s - loss: 1.0863 - accuracy: 0.6153 - val_loss: 1.0170 - val_accuracy: 0.6396
Epoch 11/20
117/117 - 4s - loss: 1.0773 - accuracy: 0.6182 - val_loss: 0.9661 - val_accuracy: 0.6580
Epoch 12/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 4s - loss: 1.0577 - accuracy: 0.6263 - val_loss: 1.0404 - val_accuracy: 0.6388
Epoch 13/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 4s - loss: 1.0354 - accuracy: 0.6299 - val_loss: 0.9637 - val_accuracy: 0.6656
Epoch 14/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.0324 - accuracy: 0.6260 - val_loss: 0.9640 - val_accuracy: 0.6608
Epoch 15/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.0233 - accuracy: 0.6352 - val_loss: 0.9413 - val_accuracy: 0.6680
Epoch 16/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 1.0041 - accuracy: 0.6435 - val_loss: 0.9782 - val_accuracy: 0.6504
Epoch 17/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 0.9987 - accuracy: 0.6505 - val_loss: 0.9292 - val_accuracy: 0.6696
Epoch 18/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 0.9927 - accuracy: 0.6487 - val_loss: 0.9566 - val_accuracy: 0.6584
Epoch 19/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 5s - loss: 0.9940 - accuracy: 0.6501 - val_loss: 0.9418 - val_accuracy: 0.6664
Epoch 20/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
117/117 - 4s - loss: 0.9892 - accuracy: 0.6526 - val_loss: 0.9247 - val_accuracy: 0.6752
0.677
Model Saved

Process finished with exit code 0

Confusion matrix:

[confusion matrix heatmap]

3D CNN

from __future__ import division, print_function, absolute_import

from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv3D, MaxPool3D, BatchNormalization, Input
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from keras.utils.np_utils import to_categorical  # standalone Keras utility; tf.keras.utils.to_categorical would also work
from tensorflow.keras.callbacks import ReduceLROnPlateau, TensorBoard

import h5py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')

from sklearn.metrics import confusion_matrix, accuracy_score

# Hyper Parameter
batch_size = 86
epochs = 20
# Set up TensorBoard
tensorboard = TensorBoard(batch_size=batch_size)

with h5py.File("full_dataset_vectors.h5", 'r') as h5:
    X_train, y_train = h5["X_train"][:], h5["y_train"][:]
    X_test, y_test = h5["X_test"][:], h5["y_test"][:]

# Map the grayscale voxel intensities to RGB colours, adding a channel dimension
def array_to_color(array, cmap="Oranges"):
    s_m = plt.cm.ScalarMappable(cmap=cmap)
    return s_m.to_rgba(array)[:,:-1]

def translate(x):
    xx = np.ndarray((x.shape[0], 4096, 3))
    for i in range(x.shape[0]):
        xx[i] = array_to_color(x[i])
        if i % 1000 == 0:
            print(i)
    # Free Memory
    del x

    return xx

# Convert the training labels to one-hot vectors
y_train = to_categorical(y_train, num_classes=10)
# y_test = to_categorical(y_test, num_classes=10)

X_train = translate(X_train).reshape(-1, 16, 16, 16, 3)
X_test  = translate(X_test).reshape(-1, 16, 16, 16, 3)


# Conv3D layer
def Conv(filters=16, kernel_size=(3,3,3), activation='relu', input_shape=None):
    if input_shape:
        return Conv3D(filters=filters, kernel_size=kernel_size, padding='Same', activation=activation, input_shape=input_shape)
    else:
        return Conv3D(filters=filters, kernel_size=kernel_size, padding='Same', activation=activation)

# Define Model
def CNN(input_dim, num_classes):
    model = Sequential()

    model.add(Conv(8, (3,3,3), input_shape=input_dim))
    model.add(Conv(16, (3,3,3)))
    # model.add(BatchNormalization())
    model.add(MaxPool3D())
    # model.add(Dropout(0.25))

    model.add(Conv(32, (3,3,3)))
    model.add(Conv(64, (3,3,3)))
    model.add(BatchNormalization())
    model.add(MaxPool3D())
    model.add(Dropout(0.25))

    model.add(Flatten())

    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(num_classes, activation='softmax'))

    return model

# Train Model
def train(optimizer, scheduler):
    global model

    print("Training...")
    model.compile(optimizer = 'adam' , loss = "categorical_crossentropy", metrics=["accuracy"])
    model.summary()
    model.fit(x=X_train, y=y_train, batch_size=batch_size, epochs=epochs, validation_split=0.15,
                    verbose=2, callbacks=[scheduler, tensorboard])

def evaluate():
    global model

    pred = model.predict(X_test)
    pred = np.argmax(pred, axis=1)

    print(accuracy_score(pred,y_test))
    # Heat Map
    array = confusion_matrix(y_test, pred)
    cm = pd.DataFrame(array, index = range(10), columns = range(10))
    plt.figure(figsize=(20,20))
    sns.heatmap(cm, annot=True)
    plt.show()

def save_model():
    global model

    model_json = model.to_json()
    with open('model/model_3D.json', 'w') as f:
        f.write(model_json)

    model.save_weights('model/model_3D.h5')

    print('Model Saved.')

def load_model():
    f = open('model/model_3D.json', 'r')
    model_json = f.read()
    f.close()

    loaded_model = model_from_json(model_json)
    loaded_model.load_weights('model/model_3D.h5')

    print("Model Loaded.")
    return loaded_model

if __name__ == '__main__':

    optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
    scheduler = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.5, min_lr=1e-5)

    model = CNN((16,16,16,3), 10)

    train(optimizer, scheduler)
    evaluate()
    save_model()

Results

0
1000
2000
3000
4000
5000
6000
7000
8000
9000
0
1000

Training...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv3d (Conv3D)              (None, 16, 16, 16, 8)     656       
_________________________________________________________________
conv3d_1 (Conv3D)            (None, 16, 16, 16, 16)    3472      
_________________________________________________________________
max_pooling3d (MaxPooling3D) (None, 8, 8, 8, 16)       0         
_________________________________________________________________
conv3d_2 (Conv3D)            (None, 8, 8, 8, 32)       13856     
_________________________________________________________________
conv3d_3 (Conv3D)            (None, 8, 8, 8, 64)       55360     
_________________________________________________________________
batch_normalization (BatchNo (None, 8, 8, 8, 64)       256       
_________________________________________________________________
max_pooling3d_1 (MaxPooling3 (None, 4, 4, 4, 64)       0         
_________________________________________________________________
dropout (Dropout)            (None, 4, 4, 4, 64)       0         
_________________________________________________________________
flatten (Flatten)            (None, 4096)              0         
_________________________________________________________________
dense (Dense)                (None, 4096)              16781312  
_________________________________________________________________
dropout_1 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 1024)              4195328   
_________________________________________________________________
dropout_2 (Dropout)          (None, 1024)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                10250     
=================================================================
Total params: 21,060,490
Trainable params: 21,060,362
Non-trainable params: 128
_________________________________________________________________
Train on 8500 samples, validate on 1500 samples
Epoch 1/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 9s - loss: 2.6471 - accuracy: 0.1839 - val_loss: 2.2708 - val_accuracy: 0.2000
Epoch 2/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 1.6287 - accuracy: 0.4285 - val_loss: 2.8031 - val_accuracy: 0.1033
Epoch 3/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 1.2695 - accuracy: 0.5579 - val_loss: 3.1281 - val_accuracy: 0.1900
Epoch 4/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 1.1059 - accuracy: 0.6122 - val_loss: 3.4772 - val_accuracy: 0.2380
Epoch 5/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 1.0447 - accuracy: 0.6307 - val_loss: 1.3234 - val_accuracy: 0.5460
Epoch 6/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.9735 - accuracy: 0.6654 - val_loss: 1.3245 - val_accuracy: 0.5960
Epoch 7/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.9088 - accuracy: 0.6832 - val_loss: 0.9973 - val_accuracy: 0.6500
Epoch 8/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.8433 - accuracy: 0.7093 - val_loss: 1.1331 - val_accuracy: 0.6413
Epoch 9/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.7784 - accuracy: 0.7293 - val_loss: 0.9897 - val_accuracy: 0.6687
Epoch 10/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.7293 - accuracy: 0.7451 - val_loss: 0.9537 - val_accuracy: 0.6693
Epoch 11/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.6554 - accuracy: 0.7719 - val_loss: 0.9934 - val_accuracy: 0.6653
Epoch 12/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.6129 - accuracy: 0.7887 - val_loss: 0.8710 - val_accuracy: 0.6987
Epoch 13/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.5138 - accuracy: 0.8218 - val_loss: 0.8410 - val_accuracy: 0.7220
Epoch 14/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.4538 - accuracy: 0.8418 - val_loss: 0.8636 - val_accuracy: 0.7200
Epoch 15/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.4126 - accuracy: 0.8579 - val_loss: 1.7215 - val_accuracy: 0.6053
Epoch 16/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.3595 - accuracy: 0.8766 - val_loss: 0.9869 - val_accuracy: 0.7327
Epoch 17/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.3179 - accuracy: 0.8892 - val_loss: 1.0798 - val_accuracy: 0.7173
Epoch 18/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.3042 - accuracy: 0.8953 - val_loss: 1.0762 - val_accuracy: 0.6927
Epoch 19/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.2573 - accuracy: 0.9146 - val_loss: 1.0316 - val_accuracy: 0.7207
Epoch 20/20
WARNING:tensorflow:Reduce LR on plateau conditioned on metric `val_acc` which is not available. Available metrics are: loss,accuracy,val_loss,val_accuracy,lr
8500/8500 - 6s - loss: 0.2203 - accuracy: 0.9236 - val_loss: 0.9373 - val_accuracy: 0.7267
0.7325
Model Saved.

Process finished with exit code 0

Confusion matrix:

[confusion matrix heatmap]