Hello everyone, I'm 微学AI. Today I'd like to introduce Keras. So what exactly is Keras?

A brief introduction to the Keras framework:

Keras is a highly encapsulated open-source library for building and training deep learning models. Its main advantages are:

1. It is a highly abstracted framework, which saves learners time and lets them get started and build models quickly.
2. Keras is Python-based, which makes it easier to learn.
3. Keras offers a simple API for building and training deep learning models without writing complex code (see the short sketch below).
4. Keras supports multiple backends, making it easier to deploy models on different hardware.
5. Keras also works alongside many existing deep learning frameworks, which makes it easier to migrate to another framework later.
6. For newcomers, Keras provides a friendly, easy-to-understand framework that lowers the barrier to entry for AI beginners.
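To make point 3 concrete, here is a minimal sketch of defining and compiling a model in Keras. The layer sizes and the input dimension of 4 are arbitrary choices for illustration, not part of the worked example later in this article:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A tiny fully connected network: 4 input features -> 8 hidden units -> 1 output
toy_model = Sequential()
toy_model.add(Dense(units=8, activation='relu', input_dim=4))
toy_model.add(Dense(units=1, activation='sigmoid'))

# One line to specify the loss, optimizer and metrics
toy_model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
toy_model.summary()  # prints the layer structure and parameter counts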

Keras is written in pure Python and runs on top of the TensorFlow, Theano, and CNTK backends. It was built for fast experimentation: it lets you turn an idea into a result quickly. If you are a beginner, Keras is a good first framework for getting to know deep neural networks. Keras is an open-source framework that lets users build and experiment with deep learning models rapidly. It provides a modular architecture in which layers are composed together to build complex neural networks. Keras also supports multiple backends, so different computing resources such as GPUs and CPUs can be used to accelerate training. In short, Keras offers a faster and more convenient way to build deep learning models, so users can develop and debug them more quickly.
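Since the example below uses the TensorFlow backend, a quick way to check the installed version and whether a GPU is visible is shown here (a small sketch, assuming TensorFlow 2.x is installed):

import tensorflow as tf

print(tf.__version__)                          # TensorFlow version in use
print(tf.config.list_physical_devices('GPU'))  # an empty list means training will run on the CPU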

Example: two input features determine one function value, for example a function

z = f(x, y)

where x and y are the independent variables and z is related to x and y through some mapping f. What we will do next is randomly generate a number of points that follow some functional relationship unknown to us in advance, and then use a neural network framework to train a predicted function f(x, y) whose outputs are close to the true values.

The code is as follows:

1. Import the modules

import numpy as np
from tensorflow.keras.models import Sequential  # Keras Sequential model
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout     # imported here but not used in this example
from tensorflow.keras.optimizers import SGD
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D         # registers the 3D projection on older Matplotlib versions


# Generate the sample data: 'counts' random points in [0, 2] x [0, 2],
# labelled 1 if they fall inside a circle of radius sqrt(0.5) centred at (1, 0.3)
def get_beans4(counts):
    xs = np.random.rand(counts, 2) * 2
    ys = np.zeros(counts)
    for i in range(counts):
        x = xs[i]
        if (np.power(x[0] - 1, 2) + np.power(x[1] - 0.3, 2)) < 0.5:
            ys[i] = 1

    return xs, ys
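As a quick sanity check (not in the original code), the shapes of the generated arrays can be inspected like this; a label is 1 only for points whose squared distance to (1, 0.3) is below 0.5:

xs_demo, ys_demo = get_beans4(5)
print(xs_demo.shape)  # (5, 2): five points with two features each
print(ys_demo)        # e.g. [0. 1. 0. 0. 1.], depending on the random draw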


# Plot the data as a scatter plot: 3D for two-feature inputs, otherwise plain 2D
def show_scatter(X, Y):
    if X.ndim > 1:
        show_3d_scatter(X, Y)
    else:
        plt.scatter(X, Y)
        plt.show()


# Plot a 3D scatter of the two features against the labels
def show_3d_scatter(X, Y):
    x = X[:, 0]
    z = X[:, 1]
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')  # Axes3D(fig) no longer attaches itself in recent Matplotlib
    ax.scatter(x, z, Y)
    plt.show()


# Plot the sample points together with the surface predicted by the model
def show_scatter_surface_with_model(X, Y, model):
    x = X[:, 0]
    z = X[:, 1]
    y = Y

    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.scatter(x, z, y)

    # Build a regular grid over the range of the two features
    x = np.arange(np.min(x), np.max(x), 0.1)
    z = np.arange(np.min(z), np.max(z), 0.1)
    x, z = np.meshgrid(x, z)

    # Assemble the grid points into an (N, 2) feature matrix, one row per point
    X = np.column_stack((x[0], z[0]))
    for j in range(z.shape[0]):
        if j == 0:
            continue
        X = np.vstack((X, np.column_stack((x[0], z[j]))))

    # Predict on the grid and reshape back to the grid shape for plotting
    y = model.predict(X)
    y = np.array([y])
    y = y.reshape(x.shape[0], z.shape[1])
    ax.plot_surface(x, z, y, cmap='rainbow')
    plt.show()
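As an aside, the grid-building loop above can be written more compactly with NumPy's ravel. The helper below is an equivalent sketch (the name predict_on_grid is mine, not part of the original code):

def predict_on_grid(model, grid_x, grid_z):
    # Flatten the meshgrid into an (N, 2) feature matrix, run the model,
    # and reshape the predictions back to the grid shape for plot_surface.
    points = np.column_stack((grid_x.ravel(), grid_z.ravel()))
    return model.predict(points).reshape(grid_x.shape)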

2. Display the 3D scatter plot

(Figure: 3D scatter plot of the generated data points)

3. Build and train the neural network model

m = 100  # number of samples
X, Y = get_beans4(m)
show_scatter(X, Y)
print(X)
print(X.shape)
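Here X is a 100 x 2 matrix (one row per point, one column per feature), so print(X.shape) should output (100, 2), and Y is the corresponding vector of 100 labels, each 0 or 1.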

Build the network model:
model = Sequential()
model.add(Dense(units=10, activation='sigmoid', input_dim=2))
# units: number of neurons; activation: activation function; input_dim: input feature dimension
model.add(Dense(units=1, activation='sigmoid'))  # output layer
# Compile the network
model.compile(loss='mean_squared_error', optimizer=SGD(learning_rate=0.3), metrics=['accuracy'])
# mean_squared_error: mean squared error loss; SGD: stochastic gradient descent; accuracy: classification accuracy

# epochs: number of training passes over the data; batch_size: number of samples used per update
model.fit(X, Y, epochs=8000, batch_size=64)
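Two optional additions, not in the original code: model.summary() prints the layer structure and parameter counts before training, and for a 0/1 target like this one, binary_crossentropy is the more conventional loss than mean squared error (both lines below are standard Keras usage):

model.summary()  # inspect the layers and the number of trainable parameters

# Alternative compile step with a cross-entropy loss (a common choice for 0/1 labels)
# model.compile(loss='binary_crossentropy', optimizer=SGD(learning_rate=0.3), metrics=['accuracy'])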

Training in progress...

Epoch 1/8000  2/2 [==============================] - 0s 2ms/step - loss: 0.2580 - accuracy: 0.3900
Epoch 2/8000  2/2 [==============================] - 0s 2ms/step - loss: 0.2305 - accuracy: 0.7300
Epoch 3/8000  2/2 [==============================] - 0s 1ms/step - loss: 0.2135 - accuracy: 0.7300
Epoch 4/8000  2/2 [==============================] - 0s 998us/step - loss: 0.2050 - accuracy: 0.7300
Epoch 5/8000  2/2 [==============================] - 0s 998us/step - loss: 0.1995 - accuracy: 0.7300
Epoch 6/8000  2/2 [==============================] - 0s 960us/step - loss: 0.1964 - accuracy: 0.7300
Epoch 7/8000  2/2 [==============================] - 0s 998us/step - loss: 0.1946 - accuracy: 0.7300
Epoch 8/8000  2/2 [==============================] - 0s 2ms/step - loss: 0.1929 - accuracy: 0.7300
Epoch 9/8000  2/2 [==============================] - 0s 998us/step - loss: 0.1930 - accuracy: 0.7300
Epoch 10/8000  2/2 [==============================] - 0s 996us/step - loss: 0.1917 - accuracy: 0.7300
Epoch 11/8000  2/2 [==============================] - 0s 998us/step - loss: 0.1908 - accuracy: 0.7300
Epoch 12/8000  2/2 [==============================] - 0s 952us/step - loss: 0.1904 - accuracy: 0.7300
Epoch 13/8000  2/2 [==============================] - 0s 999us/step - loss: 0.1900 - accuracy: 0.7300
Epoch 14/8000  2/2 [==============================] - 0s 958us/step - loss: 0.1893 - accuracy: 0.7300
Epoch 15/8000  2/2 [==============================] - 0s 997us/step - loss: 0.1892 - accuracy: 0.7300
Epoch 16/8000  2/2 [==============================] - 0s 1ms/step - loss: 0.1885 - accuracy: 0.7300
Epoch 17/8000  2/2 [==============================] - 0s 997us/step - loss: 0.1880 - accuracy: 0.7300
Epoch 18/8000  2/2 [==============================] - 0s 999us/step - loss: 0.1879 - accuracy: 0.7300
Epoch 19/8000  2/2 [==============================] - 0s 1ms/step - loss: 0.1872 - accuracy: 0.7300
Epoch 20/8000  2/2 [==============================] - 0s 996us/step - loss: 0.1868 - accuracy: 0.7300
Epoch 21/8000  2/2 [==============================] - 0s 998us/step - loss: 0.1864 - accuracy: 0.7300
Epoch 22/8000  2/2 [==============================] - 0s 999us/step - loss: 0.1860 - accuracy: 0.7300
Epoch 23/8000  2/2 [==============================] - 0s 1ms/step - loss: 0.1855 - accuracy: 0.7300
Epoch 24/8000  2/2 [==============================] - 0s 999us/step - loss: 0.1852 - accuracy: 0.7300
Epoch 25/8000  2/2 [==============================] - 0s 997us/step - loss: 0.1849 - accuracy: 0.7300
......

4. Prediction results

# Predict with the trained model
pres = model.predict(X)
show_scatter_surface_with_model(X, Y, model)  # 3D surface of the predicted function
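To turn the continuous sigmoid outputs into 0/1 class labels, they can be thresholded at 0.5. This small check is an addition of mine, not part of the original code:

pred_labels = (pres.reshape(-1) > 0.5).astype(int)  # 1 if the predicted value exceeds 0.5
train_accuracy = np.mean(pred_labels == Y)          # fraction of training points classified correctly
print(train_accuracy)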

5. Surface plot of the prediction results

(Figure: surface of the predicted function f(x, y), plotted together with the original sample points)