《深度学习的数学》 (Mathematics of Deep Learning) was a great inspiration to me: the author's presentation of the ideas and mathematical foundations of neural networks taught me a lot. However, the book uses Excel to demonstrate its networks, which is painful for someone like me who is not fluent in Excel, so I decided to implement the book's simple convolutional neural network in Python — the model that recognizes the digits 1, 2 and 3.
This network is extremely simple, simpler even than the handwritten-digit recognition model often called the "Hello World" of machine learning, and it has essentially no practical value. I still hold it in high regard, precisely because it strips away the complexity of neural networks and shows what is most basic and essential about them.
The model has four layers: an input layer, a convolutional layer, a max-pooling layer, and an output layer. The input is a set of 96 single-channel, two-valued (binary) images of 6×6 pixels.
The convolutional layer's filters contain 3×9 + 3 = 30 parameters and the output layer contains 12×3 + 3 = 39 parameters, so the model has 69 parameters in total.
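As a quick check of that arithmetic, a throwaway snippet of my own (not from the book):

conv_params = 3 * (3 * 3) + 3        # 3 filters of 3x3 weights plus 3 biases = 30
output_params = 3 * (3 * 2 * 2) + 3  # 3 output units, each with 3x2x2 weights, plus 3 biases = 39
print(conv_params, output_params, conv_params + output_params)   # 30 39 69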
The principle is the same as in the previous article; only the mathematics involved changes slightly. The details are as follows.

First, the weighted inputs and outputs of the convolutional-layer units (filters $k = 1, 2, 3$; output positions $i, j = 1, \dots, 4$):

$$z^{F_k}_{ij} = \sum_{p=1}^{3}\sum_{q=1}^{3} w^{F_k}_{pq}\, x_{i+p-1,\,j+q-1} + b^{F_k}, \qquad a^{F_k}_{ij} = a\!\left(z^{F_k}_{ij}\right)$$

where $x_{ij}$ is the pixel in row $i$, column $j$ of the input image, $w^{F_k}_{pq}$ are the weights of filter $k$, $b^{F_k}$ is its bias, and $a$ is the sigmoid activation function.
Next, the weighted inputs and outputs of the pooling-layer units ($i, j = 1, 2$); for max pooling the unit output simply equals its weighted input:

$$z^{P_k}_{ij} = \max\!\left(a^{F_k}_{2i-1,\,2j-1},\; a^{F_k}_{2i-1,\,2j},\; a^{F_k}_{2i,\,2j-1},\; a^{F_k}_{2i,\,2j}\right), \qquad a^{P_k}_{ij} = z^{P_k}_{ij}$$
Then the weighted inputs and outputs of the output-layer units ($n = 1, 2, 3$):

$$z^{O}_{n} = \sum_{k=1}^{3}\sum_{i=1}^{2}\sum_{j=1}^{2} w^{O_n}_{kij}\, a^{P_k}_{ij} + b^{O}_{n}, \qquad a^{O}_{n} = a\!\left(z^{O}_{n}\right)$$
The correct-answer (teacher) variables $t_1, t_2, t_3$ and the output values we want the trained network to produce are summarized below:

| Variable | Meaning | Image is 1 | Image is 2 | Image is 3 |
| --- | --- | --- | --- | --- |
| $t_1$ | correct-answer variable for 1 | 1 | 0 | 0 |
| $t_2$ | correct-answer variable for 2 | 0 | 1 | 0 |
| $t_3$ | correct-answer variable for 3 | 0 | 0 | 1 |

| Output unit | Image is 1 | Image is 2 | Image is 3 |
| --- | --- | --- | --- |
| $a^{O}_{1}$ | close to 1 | close to 0 | close to 0 |
| $a^{O}_{2}$ | close to 0 | close to 1 | close to 0 |
| $a^{O}_{3}$ | close to 0 | close to 0 | close to 1 |
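These correct-answer vectors are exactly the rows of the labels array in the program below; as a small aside of my own, the same three vectors are just the rows of a 3×3 identity matrix:

import numpy as np
print(np.eye(3, dtype=int))   # each row is the correct-answer vector for 1, 2 and 3 respectively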

Next, the unit errors $\delta$ of each layer used by the error back-propagation method.

For the output layer:

$$\delta^{O}_{n} = \left(a^{O}_{n} - t_{n}\right) a'\!\left(z^{O}_{n}\right), \qquad n = 1, 2, 3$$
The back-recursion relation that gives the convolutional-layer unit errors from the output-layer errors:

$$\delta^{F_k}_{ij} = \left(\sum_{n=1}^{3} \delta^{O}_{n}\, w^{O_n}_{k\,i'j'}\right) \cdot \mathbf{1}\!\left[a^{F_k}_{ij} = z^{P_k}_{i'j'}\right] \cdot a'\!\left(z^{F_k}_{ij}\right)$$

where $(i', j')$ is the pooling unit that convolutional unit $(i, j)$ feeds into, and the indicator $\mathbf{1}[\cdot]$ is 1 if that unit produced the maximum of its pooling block and 0 otherwise.
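The index pair $(i', j')$ is just the pooling block containing convolutional position $(i, j)$; the small sketch below (my own, using 0-based indices) shows that mapping, which is the same rule the Pij helper in the program applies:

def pool_position(i, j):
    # conv rows/columns 0-1 feed pooling index 0, rows/columns 2-3 feed pooling index 1
    return (i // 2, j // 2)

print(pool_position(0, 3))   # (0, 1)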
Below are the partial derivatives of the squared error with respect to the filter parameters (the relations for a 6×6 image and a 3×3 filter).

Convolutional layer, with respect to the weights:

$$\frac{\partial C}{\partial w^{F_k}_{pq}} = \sum_{i=1}^{4}\sum_{j=1}^{4} \delta^{F_k}_{ij}\, x_{i+p-1,\,j+q-1}$$

and with respect to the biases:

$$\frac{\partial C}{\partial b^{F_k}} = \sum_{i=1}^{4}\sum_{j=1}^{4} \delta^{F_k}_{ij}$$

Output layer:

$$\frac{\partial C}{\partial w^{O_n}_{kij}} = \delta^{O}_{n}\, a^{P_k}_{ij}, \qquad \frac{\partial C}{\partial b^{O_n}} = \delta^{O}_{n}$$

The cost (loss) function is the sum of the squared errors over all 96 training images:

$$C_T = \sum_{m=1}^{96} C_m, \qquad C_m = \frac{1}{2}\sum_{n=1}^{3}\left(t_n[m] - a^{O}_{n}[m]\right)^{2}$$

The basic formula of gradient descent:

$$(\Delta w,\, \Delta b) = -\eta\, \nabla C_T$$

and the new position after one step:

$$(w_{\text{new}},\, b_{\text{new}}) = (w,\, b) + (\Delta w,\, \Delta b)$$
Based on the mathematics above, I wrote the Python program below.
Note: the initial values and the learning rate have a large influence on the final result. A random draw of initial values is not always suitable; if training fails to converge, re-initialize and run again.
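One simple way to make a run reproducible, so that an unlucky draw of initial values can be re-rolled deliberately, is to fix NumPy's random seed before drawing them. This is my own suggestion, not from the book, and the seed value is arbitrary:

import numpy as np
np.random.seed(0)              # arbitrary seed; try a different one if training gets stuck
F = np.random.randn(3, 3, 3)   # the conv-layer filters are then identical on every run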

import numpy as np
import math
# _inputs holds the 96 training images, shape (96, 6, 6)
_inputs = np.array([[[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
    [[0,0,0,1,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
    [[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
    [[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,1,0,0]],
    [[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,1,1,0]],
    [[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,1,1,0]],
    [[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,1,1,1,0,0]],
    [[0,1,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0]],
    #10
    [[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,1,1,1,0,0]],
    [[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,1,0]],
    [[0,0,0,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0]],
    [[0,0,1,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,1,0,0]],
    [[0,0,0,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
    [[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,0,0,0,0]],
    [[0,0,1,0,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0]],
    [[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0]],
    [[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
    [[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,1,0,0]],
    #20
    [[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
    [[0,0,1,0,0,0],[0,1,1,0,0,0],[0,1,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
    [[0,0,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,1,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0]],
    [[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
    [[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
    [[0,1,0,0,0,0],[0,1,0,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0]],
    [[0,1,0,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,0,1,0,0],[0,0,0,0,0,0]],
    [[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,1,0,0,0,0],[0,0,0,0,0,0]],
    [[0,0,0,0,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0],[0,0,1,0,0,0]],
    [[0,0,0,0,0,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,0,0,0]],
    #30
    [[0,0,0,0,0,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,0,0,0,0],[0,1,0,0,0,0],[0,0,0,0,0,0]],
    [[0,0,0,0,0,0],[0,0,1,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,1,0,0],[0,0,0,0,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
    [[0,1,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    #40
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,1,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,1,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[1,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,1]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[1,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[1,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[1,0,0,0,1,0],[0,0,0,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
    #50
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[1,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,1]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,1]],
    [[0,0,1,1,0,0],[0,1,0,0,1,1],[0,0,0,0,1,1],[0,0,0,1,1,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,1,1,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,1,0,1,0],[0,0,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,1,0,1,0],[0,1,0,0,1,0],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,1,1,0,0],[0,1,1,0,0,0],[1,1,1,1,1,0]],
    [[0,0,1,1,1,0],[0,1,0,0,1,1],[0,0,0,0,1,1],[0,0,0,1,0,0],[0,0,1,0,0,0],[0,1,1,1,1,1]],
    #60
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[0,1,1,1,1,0]],
    [[0,1,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,0,1,1,0,0],[1,1,1,1,1,0]],
    [[0,1,1,1,0,0],[0,1,0,1,0,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[0,1,1,0,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,1,1,0],[0,0,0,1,1,0],[0,0,0,1,0,0],[0,0,1,1,0,0],[0,1,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,1],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,1,0]],
    #70
    [[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,0,1],[0,0,0,1,1,1],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,1],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,0,1],[0,0,0,1,1,1],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,1,0],[0,1,0,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,1],[0,1,0,0,0,1],[0,0,1,1,1,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,0,0,0,1,0],[0,1,1,1,0,0]],
    [[0,1,1,1,0,0],[1,0,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    #80
    [[0,1,1,1,0,0],[1,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,0,0]],
    [[0,1,1,1,0,0],[1,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,1],[0,0,0,1,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,1,0],[0,1,0,0,0,1],[0,0,1,1,1,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[1,1,0,0,1,0],[0,0,1,1,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,0,0]],
    [[0,1,1,1,0,0],[1,0,0,0,1,0],[0,0,1,1,1,0],[0,0,1,1,1,0],[0,0,0,0,1,0],[1,1,1,1,0,0]],
    [[1,1,1,1,0,0],[0,0,0,0,1,0],[0,0,1,1,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,1],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,1],[0,1,1,1,1,0]],
    #90
    [[0,1,1,1,0,0],[0,1,1,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[0,1,0,0,1,0],[0,1,1,1,0,0]],
    [[0,1,1,1,0,0],[0,1,0,0,1,0],[0,0,0,1,1,0],[0,0,0,0,1,0],[0,1,1,0,1,0],[0,0,1,1,0,0]],
    [[0,0,1,1,0,0],[0,1,0,0,1,1],[0,0,0,0,1,1],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,1,0]],
    [[1,1,1,1,1,0],[0,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[0,1,1,1,0,0]],
    [[1,1,1,1,0,0],[1,1,0,0,1,0],[0,0,0,0,1,0],[0,0,0,1,1,0],[1,1,0,0,1,0],[1,1,1,1,0,0]],
    [[0,0,1,1,1,0],[0,1,0,0,1,1],[0,0,0,1,1,0],[0,0,0,0,1,0],[0,1,0,0,1,1],[0,0,1,1,1,0]],])
# The corresponding one-hot labels
# shape = (96, 3)
labels = np.array([[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],
                   [1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],
                   [1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],[1,0,0],
                   [1,0,0],[1,0,0],
                   [0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],
                   [0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],
                   [0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],[0,1,0],
                   [0,1,0],[0,1,0],
                   [0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],
                   [0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],
                   [0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],[0,0,1],
                   [0,0,1],[0,0,1]])
# Activation function (sigmoid)
def a(x):
    return 1.0 / (1 + math.exp(-x))

# Derivative of the activation function
def aa(x):
    return a(x) * (1 - a(x))

# Learning rate n (the eta in the gradient-descent formula above)
n = 0.2

# Initialize the weights and biases; the network has 69 parameters in total
# Initial values are drawn from a standard normal distribution
# Convolutional layer: 3 filters of 3x3
F = np.random.randn(3,3,3)
# Output layer: 3 units, each connected to the 3x2x2 pooling outputs
O = np.random.randn(3, 3, 2, 2)
# The 6 biases (3 for the conv layer, 3 for the output layer)
FB = np.random.randn(3)
OB = np.random.randn(3)


# Return the position of the pooling-layer unit connected to the conv-layer unit at row i, column j
def Pij(i,j):
    x = 0 if i <= 1 else 1
    y = 0 if j <= 1 else 1
    return (x,y)

# Process one training image: forward pass, then accumulate the back-propagated gradients
def proprecessing(_inputs,t):
    # F, O are the conv-layer and output-layer weights and FB, OB their biases;
    # Wf, Wo, Bf, Bo accumulate the partial derivatives of the cost over all 96 images, and C accumulates the total squared error
    global F,O,FB,OB,C,Wf,Wo,Bf,Bo

    # Flatten the input into its 36 pixel components
    x = _inputs.flatten()

    # The 48 weighted inputs and 48 unit outputs of the convolutional layer (3 x 4 x 4)
    Z = np.zeros((3,4,4))
    Fa = np.zeros((3,4,4))
    for k in range(3):
        w = F[k].flatten()
        for i in range(4):
            for j in range(4):
               Z[k][i][j] = np.sum(w * _inputs[i:i + 3,j:j + 3].flatten()) + FB[k]
               Fa[k][i][j] = a(Z[k][i][j])
               
    # The 12 weighted inputs and 12 unit outputs of the pooling layer (3 x 2 x 2)
    # For max pooling the unit output equals its weighted input
    Zp = np.zeros((3,2,2))
    for k in range(3):
        for i in range(2):
            for j in range(2):
               Zp[k][i][j] = np.max([Fa[k][2 * i][2 * j],Fa[k][2 * i][2 * j + 1],Fa[k][2 * i + 1][2 * j],Fa[k][2 * i + 1][2 * j + 1]])
               
    # The 3 weighted inputs and 3 unit outputs of the output layer
    Zo = np.zeros((3,1))
    Oa = np.zeros((3,1))
    for k in range(3):
        w = O[k].flatten()
        Zo[k] = np.sum(w * Zp.flatten()) + OB[k]
        Oa[k] = a(Zo[k])
        
    # Add this image's squared error to the total cost
    C += 1.0 / 2 * ((t[0] - Oa[0]) ** 2 + (t[1] - Oa[1]) ** 2 + (t[2] - Oa[2]) ** 2)
    
    # The 3 output-layer unit errors
    Do = np.zeros((3,1))
    for k in range(3):
        Do[k] = (Oa[k] - t[k]) * aa(Zo[k])
        
    # The 48 conv-layer unit errors
    Df = np.zeros((3,4,4))
    for k in range(3):
        for i in range(4):
            for j in range(4):
                l = Pij(i,j)
                zeroOrone = 1 if Fa[k][i][j] == Zp[k][l[0]][l[1]] else 0
                Df[k][i][j] = np.sum(Do.flatten() * O[:,k,l[0],l[1]].flatten()) * zeroOrone * aa(Z[k][i][j])
                
    # The 27 partial derivatives of the squared error w.r.t. the filter weights
    for k in range(3):
        for i in range(3):
            for j in range(3):
                Wf[k][i][j] += np.sum(Df[k].flatten() * _inputs[i:i + 4,j:j + 4].flatten())
    # ...and the 3 conv-layer biases
    for k in range(3):
        Bf[k] += np.sum(Df[k].flatten())
        
    # Partial derivatives of the squared error w.r.t. the output-layer weights
    # (the loop variable n below is local to this function and shadows the global learning rate only here)
    for n in range(3):
        for k in range(3):
            for i in range(2):
                for j in range(2):
                    Wo[n][k][i][j] += Do[n] * Zp[k][i][j]
    # ...and the 3 output-layer biases
    Bo += Do

# Training loop: 50 iterations of gradient descent over the whole data set
for k in range(50):
    C = 0.0
    Wf = np.zeros((3,3,3))
    Wo = np.zeros((3,3,2,2))
    Bf = np.zeros((3,1))
    Bo = np.zeros((3,1))   # separate buffer; `Bo = Bf` would alias the two gradient accumulators
    for i in range(96):
        proprecessing(_inputs[i],labels[i])
    # Update the weights and biases (one gradient-descent step)
    F+=(-1 * n * Wf)
    O+=(-1 * n * Wo)
    FB+=(-1 * n * Bf.flatten())
    OB+=(-1 * n * Bo.flatten())
    print('Iteration {0}: total squared error of the network: {1}'.format(k + 1,C))
print(F.tolist())
print(O.tolist())
print(FB.tolist())
print(OB.tolist())
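As a side note of my own (not part of the original workflow): instead of copying the printed values by hand into the test script, the trained arrays could be written to disk right after training, while F, O, FB and OB are still in scope, and loaded back later. The file names below are arbitrary:

np.save('F.npy', F)
np.save('O.npy', O)
np.save('FB.npy', FB)
np.save('OB.npy', OB)
# In the test script, load them back with e.g. F = np.load('F.npy').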

I won't paste the resulting weights here. The test program:

import numpy as np
import math

# The preset test image is a 3
_inputs = np.array([[0,1,1,1,1,0],[0,0,0,0,1,0],[0,0,1,1,0,0],[0,0,0,0,1,0],[0,0,0,0,1,0],[0,1,1,1,0,0]])
# Activation function (sigmoid)
def a(x):
    return 1.0 / (1 + math.exp(-x))

# Derivative of the activation function
def aa(x):
    return a(x) * (1 - a(x))


# The 69 trained weights and biases obtained from the training program above
# Convolutional layer
F = np.array([])   # placeholder: paste the trained filter values printed by the training script
# Output layer
O = np.array([])   # placeholder: paste the trained output-layer weights
# The 6 biases
FB = np.array([])  # placeholder: paste the trained conv-layer biases
OB = np.array([])  # placeholder: paste the trained output-layer biases
# Return the position of the pooling-layer unit connected to the conv-layer unit at row i, column j
def Pij(i,j):
    x = 0 if i <= 1 else 1
    y = 0 if j <= 1 else 1
    return (x,y)

# Forward pass for a single image using the trained parameters
def getResult(_inputs):
    # F, O are the trained conv-layer and output-layer weights, FB, OB the trained biases
    global F,O,FB,OB

    # Flatten the input into its 36 pixel components
    x = _inputs.flatten()

    # The 48 weighted inputs and 48 unit outputs of the convolutional layer (3 x 4 x 4)
    Z = np.zeros((3,4,4))
    Fa = np.zeros((3,4,4))
    for k in range(3):
        w = F[k].flatten()
        for i in range(4):
            for j in range(4):
               Z[k][i][j] = np.sum(w * _inputs[i:i + 3,j:j + 3].flatten()) + FB[k]
               Fa[k][i][j] = a(Z[k][i][j])
               
    # The 12 weighted inputs and 12 unit outputs of the pooling layer (3 x 2 x 2)
    # For max pooling the unit output equals its weighted input
    Zp = np.zeros((3,2,2))
    for k in range(3):
        for i in range(2):
            for j in range(2):
               Zp[k][i][j] = np.max([Fa[k][2 * i][2 * j],Fa[k][2 * i][2 * j + 1],Fa[k][2 * i + 1][2 * j],Fa[k][2 * i + 1][2 * j + 1]])
    # The 3 weighted inputs and 3 unit outputs of the output layer
    Zo = np.zeros((3,1))
    Oa = np.zeros((3,1))
    for k in range(3):
        w = O[k].flatten()
        Zo[k] = np.sum(w * Zp.flatten()) + OB[k]
        Oa[k] = a(Zo[k])
    print('Probability the image is 1: {0}, is 2: {1}, is 3: {2}'.format(Oa[0],Oa[1],Oa[2]))
getResult(_inputs)
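Finally, a small addition of mine: if getResult is changed to end with return Oa, the three outputs can be collapsed into a single predicted digit by picking the most active output unit:

# assumes getResult() has been changed to end with `return Oa`
Oa = getResult(_inputs)
predicted = int(np.argmax(Oa)) + 1   # output units 0, 1, 2 correspond to the digits 1, 2, 3
print('Predicted digit:', predicted)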

And that's it.