Building on the activation functions from note 03, the neurons are wired together into a simple three-layer neural network. The code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2020/2/15 12:05
# @Author : LZQ
# @Software: PyCharm
import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

def identity_function(x):
    return x

def init_network():
    network={}  # store the parameters (weights and biases) in a dict
    network['W1']=np.array([[0.1,0.3,0.5],[0.2,0.4,0.6]])
    network['b1']=np.array([0.1,0.2,0.3])
    network['W2']=np.array([[0.1,0.4],[0.2,0.5],[0.3,0.6]])
    network['b2']=np.array([0.1,0.2])
    network['W3']=np.array([[0.1,0.3],[0.2,0.4]])
    network['b3']=np.array([0.1,0.2])
    return network
'''
The term "forward" appears here: it means propagating the signal from the
input toward the output. Later, when training the network, we will cover
the backward pass (from the output back toward the input).
'''
def forward(network,x):
    W1,W2,W3=network['W1'],network['W2'],network['W3']
    b1,b2,b3=network['b1'], network['b2'], network['b3']

    a1=np.dot(x,W1)+b1
    z1=sigmoid(a1)

    a2=np.dot(z1,W2)+b2
    z2=sigmoid(a2)

    a3=np.dot(z2,W3)+b3
    y=identity_function(a3)

    return y

network=init_network()
x=np.array([1.0,0.5])
y=forward(network,x)
print(y)  #[0.31682708 0.69627909]
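
As a quick sanity check (a worked example I added, not from the book), the first layer can be computed by hand. With x of shape (2,), W1 of shape (2, 3) and b1 of shape (3,), np.dot(x, W1) + b1 gives a length-3 vector:

# Hand-checking layer 1 with the same values as init_network() above
x = np.array([1.0, 0.5])
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
b1 = np.array([0.1, 0.2, 0.3])
a1 = np.dot(x, W1) + b1  # [1.0*0.1+0.5*0.2+0.1, 1.0*0.3+0.5*0.4+0.2, 1.0*0.5+0.5*0.6+0.3]
z1 = sigmoid(a1)
print(a1)  # [0.3 0.7 1.1]
print(z1)  # [0.57444252 0.66818777 0.75026011]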
