Table of Contents

  • 1) Import the required modules
  • 2) Set variables and parameters
  • 3) Function definitions
  • Function to generate a 4-character CAPTCHA
  • Function to generate the CAPTCHA label and image
  • Image conversion
  • Text-to-vector function
  • Vector-to-text function
  • Generate the training data
  • 4) Build the neural network
  • Build the network structure
  • Configure the training parameters
  • Run the training
  • 5) Training results



Code adapted from:


The training process and data generation have been simplified: a convolutional neural network is built to recognize 10,000 grayscale CAPTCHA images, and the training accuracy reaches 100% by roughly the 40th epoch.

1) Import the required modules

from captcha.image import ImageCaptcha
import random
from PIL import Image
import numpy as np
import tensorflow as tf

Module notes:
The Image module of the Python imaging library PIL; Image.open(path) opens an image.
The Python CAPTCHA generation library captcha.

from captcha.image import ImageCaptcha
chars = '14gh'  # pick any 4 characters to see what the generated CAPTCHA looks like
image = ImageCaptcha().generate_image(chars)
image.show()
image.save("test.png")

(Screenshot of the generated CAPTCHA image)

2) Set variables and parameters

Here:
SAVE_PATH is the directory where the model will be saved.
CHAR_SET is a list made up of the 26 English letters in both uppercase and lowercase (52 in total) plus the 10 digits.
CHAR_SET_LEN is the length of that list, i.e. 62.
IMAGE_HEIGHT is the height of the CAPTCHA images to be recognized, 60 pixels.
IMAGE_WIDTH is the width of the CAPTCHA images to be recognized, 160 pixels.

number = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u',
            'v', 'w', 'x', 'y', 'z']
ALPHABET = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U',
            'V', 'W', 'X', 'Y', 'Z']
SAVE_PATH = "D:/tensorflow_demo/deep learning/"
CHAR_SET = number + alphabet + ALPHABET  # CHAR_SET is a list containing 62 elements
CHAR_SET_LEN = len(CHAR_SET)  # 26 lowercase + 26 uppercase letters + 10 digits = 62
IMAGE_HEIGHT = 60
IMAGE_WIDTH = 160

3) Function definitions

Function to generate a 4-character CAPTCHA

# Randomly generates a 4-character CAPTCHA; the returned captcha_text is a list of four letters or digits
def random_captcha_text(char_set=None, captcha_size=4):
    if char_set is None:
        char_set = number + alphabet + ALPHABET

    captcha_text = []
    for i in range(captcha_size):
        c = random.choice(char_set)
        captcha_text.append(c)
    return captcha_text
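
For example (the output differs on every call, so the characters in the comment are only illustrative):

print(random_captcha_text())  # e.g. ['3', 'k', 'Q', '7'], four random characters from CHAR_SET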

Function to generate the CAPTCHA label and image

def gen_captcha_text_and_image(width=160, height=60, char_set=CHAR_SET):
    image = ImageCaptcha(width=width, height=height)

    captcha_text = random_captcha_text(char_set)  # call random_captcha_text defined above to get a list of four random CAPTCHA characters
    captcha_text = ''.join(captcha_text)

    captcha = image.generate(captcha_text)  # generate the CAPTCHA; captcha holds the generated image data

    captcha_image = Image.open(captcha)
    captcha_image = np.array(captcha_image)  # represent the image as a pixel array
    return captcha_text, captcha_image


text, image = gen_captcha_text_and_image(char_set=CHAR_SET)
MAX_CAPTCHA = len(text)
print('CHAR_SET_LEN=', CHAR_SET_LEN, ' MAX_CAPTCHA=', MAX_CAPTCHA)

Image conversion

CAPTCHA recognition does not depend on color, so the image is converted to grayscale here, which shrinks the input and reduces the training cost.
The function takes the generated CAPTCHA image as input.

def convert2gray(img):
    if len(img.shape) > 2:  # color image of shape (height, width, channels)
        gray = np.mean(img, -1)  # average over the channel axis to get a grayscale image
        return gray
    else:
        return img  # already single-channel
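
For example, a freshly generated CAPTCHA image of shape (60, 160, 3) comes out as (60, 160) after conversion. A minimal check (the variable names are only illustrative):

text, image = gen_captcha_text_and_image(char_set=CHAR_SET)
print(image.shape)  # (60, 160, 3), RGB image as a NumPy array
gray = convert2gray(image)
print(gray.shape)  # (60, 160), a single grayscale channel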

Text-to-vector function

The function takes the CAPTCHA's label text and returns a 4 x 62 matrix (the vector): in each row, the entry at the position of the corresponding character is 1 and all other entries are 0.

def text2vec(text):
    vector = np.zeros([MAX_CAPTCHA, CHAR_SET_LEN])  # MAX_CAPTCHA is 4, CHAR_SET_LEN is 62
    for i, c in enumerate(text):  # enumerate yields the index i (0 to 3) and the character c
        idx = CHAR_SET.index(c)  # idx is the position of the character in the 62-element CHAR_SET
        vector[i][idx] = 1.0
    return vector
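
As a quick illustration (the sample string '1aB9' is arbitrary), each of the 4 rows of the returned matrix has exactly one element set to 1, at the index of that character in CHAR_SET:

vec = text2vec('1aB9')
print(vec.shape)  # (4, 62)
print(np.argmax(vec, axis=1))  # [ 1 10 37  9], the positions of '1', 'a', 'B', '9' in CHAR_SET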

PS: a note on the enumerate function.
enumerate is a Python built-in that yields both the index and the value.
1) For a list, if you want both the index and the value, you could write:

list1 = ['hello','world','I','love','python']
for i in range(len(list1)):
    print(i,list1[i])

0 hello
1 world
2 I
3 love
4 python

which is a little clumsy to write.
2) Using enumerate is more convenient:

list1 = ['hello','world','I','love','python']
for index,value in enumerate(list1):
    print(index,value)

0 hello
1 world
2 I
3 love
4 python

enumerate also accepts a second argument, the starting value of the index:

list1 = ['hello','world','I','love','python']
for index,value in enumerate(list1,1):
    print(index,value)

1 hello
2 world
3 I
4 love
5 python

Vector-to-text function

The input vec is a 4 x 62 matrix; the returned text is the four CAPTCHA characters.

def vec2text(vec):
    text = []
    for i, row in enumerate(vec):
        idx = np.argmax(row)  # column index of the 1 (or of the largest value) in this row
        text.append(CHAR_SET[idx])
    return "".join(text)

Generate the training data

This produces 10,000 grayscale CAPTCHA images and their labels.

x_train = np.zeros([10000, IMAGE_HEIGHT, IMAGE_WIDTH, 1])  # the generated CAPTCHA images, grayscale with a single channel
y_train = np.zeros([10000, MAX_CAPTCHA, CHAR_SET_LEN])  # the labels, 4 x 62 matrices

for i in range(10000):
    text, image = gen_captcha_text_and_image(char_set=CHAR_SET)
    image = tf.reshape(convert2gray(image), (IMAGE_HEIGHT, IMAGE_WIDTH, 1))  # reshape to (60, 160, 1)
    x_train[i, :] = image
    y_train[i, :] = text2vec(text)
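
A quick shape check (purely illustrative) confirms that the arrays match what the network expects:

print(x_train.shape)  # (10000, 60, 160, 1)
print(y_train.shape)  # (10000, 4, 62)
print(vec2text(y_train[0]))  # label of the first generated CAPTCHA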

4) Build the neural network

Build the network structure

model = tf.keras.Sequential()

model.add(tf.keras.layers.Conv2D(32, (3, 3)))
model.add(tf.keras.layers.PReLU())
model.add(tf.keras.layers.MaxPool2D((2, 2), strides=2))

model.add(tf.keras.layers.Conv2D(64, (5, 5)))
model.add(tf.keras.layers.PReLU())
model.add(tf.keras.layers.MaxPool2D((2, 2), strides=2))

model.add(tf.keras.layers.Conv2D(128, (5, 5)))
model.add(tf.keras.layers.PReLU())
model.add(tf.keras.layers.MaxPool2D((2, 2), strides=2))

model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(MAX_CAPTCHA * CHAR_SET_LEN))  # fully-connected output with 4 * 62 units
model.add(tf.keras.layers.Reshape([MAX_CAPTCHA, CHAR_SET_LEN]))
model.add(tf.keras.layers.Softmax())
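
If you want to inspect the layer shapes before training, the model can optionally be built with an explicit input shape (otherwise Keras builds it on the first batch passed to fit):

model.build(input_shape=(None, IMAGE_HEIGHT, IMAGE_WIDTH, 1))  # None is the batch dimension
model.summary()  # the last layers are Dense(4 * 62) -> Reshape(4, 62) -> Softmax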

Configure the training parameters

Because the network outputs a (4, 62) matrix of per-position probabilities, categorical_crossentropy and the accuracy metric are computed per character position.

model.compile(optimizer='Adam',
              metrics=['accuracy'],
              loss='categorical_crossentropy')

Run the training

history = model.fit(x_train, y_train, batch_size=32, epochs=50)

5) Training results

Epoch 1/50
313/313 [==============================] - 202s 644ms/step - loss: 4.6302 - accuracy: 0.0144
Epoch 2/50
313/313 [==============================] - 204s 650ms/step - loss: 4.4329 - accuracy: 0.0154
Epoch 3/50
313/313 [==============================] - 204s 653ms/step - loss: 4.1740 - accuracy: 0.0159
Epoch 4/50
313/313 [==============================] - 213s 681ms/step - loss: 14.4912 - accuracy: 0.0166
Epoch 5/50
313/313 [==============================] - 201s 643ms/step - loss: 7.9571 - accuracy: 0.0157
Epoch 6/50
313/313 [==============================] - 158s 505ms/step - loss: 4.1492 - accuracy: 0.0167
Epoch 7/50
313/313 [==============================] - 159s 507ms/step - loss: 4.1425 - accuracy: 0.0168
Epoch 8/50
313/313 [==============================] - 165s 528ms/step - loss: 4.1385 - accuracy: 0.0175
Epoch 9/50
313/313 [==============================] - 170s 542ms/step - loss: 4.1365 - accuracy: 0.0169
Epoch 10/50
313/313 [==============================] - 169s 539ms/step - loss: 4.1319 - accuracy: 0.0196
Epoch 11/50
313/313 [==============================] - 179s 570ms/step - loss: 4.0545 - accuracy: 0.0336
Epoch 12/50
313/313 [==============================] - 172s 549ms/step - loss: 3.5705 - accuracy: 0.1045
Epoch 13/50
313/313 [==============================] - 173s 552ms/step - loss: 3.0129 - accuracy: 0.1980
Epoch 14/50
313/313 [==============================] - 171s 545ms/step - loss: 2.6300 - accuracy: 0.2831
Epoch 15/50
313/313 [==============================] - 170s 542ms/step - loss: 2.2148 - accuracy: 0.3930
Epoch 16/50
313/313 [==============================] - 181s 580ms/step - loss: 1.7232 - accuracy: 0.5208
Epoch 17/50
313/313 [==============================] - 186s 593ms/step - loss: 1.2240 - accuracy: 0.6564
Epoch 18/50
313/313 [==============================] - 179s 573ms/step - loss: 0.8103 - accuracy: 0.7691
Epoch 19/50
313/313 [==============================] - 179s 572ms/step - loss: 0.4705 - accuracy: 0.8638
Epoch 20/50
313/313 [==============================] - 195s 624ms/step - loss: 0.2919 - accuracy: 0.9146
Epoch 21/50
313/313 [==============================] - 189s 605ms/step - loss: 0.2021 - accuracy: 0.9373
Epoch 22/50
313/313 [==============================] - 179s 571ms/step - loss: 0.1775 - accuracy: 0.9453
Epoch 23/50
313/313 [==============================] - 189s 604ms/step - loss: 0.1705 - accuracy: 0.9459
Epoch 24/50
313/313 [==============================] - 188s 600ms/step - loss: 0.1571 - accuracy: 0.9515
Epoch 25/50
313/313 [==============================] - 162s 517ms/step - loss: 0.1666 - accuracy: 0.9505
Epoch 26/50
313/313 [==============================] - 162s 517ms/step - loss: 0.1300 - accuracy: 0.9607
Epoch 27/50
313/313 [==============================] - 163s 520ms/step - loss: 0.1494 - accuracy: 0.9560
Epoch 28/50
313/313 [==============================] - 162s 517ms/step - loss: 0.1653 - accuracy: 0.9508
Epoch 29/50
313/313 [==============================] - 159s 507ms/step - loss: 0.1281 - accuracy: 0.9635
Epoch 30/50
313/313 [==============================] - 160s 510ms/step - loss: 0.1336 - accuracy: 0.9619
Epoch 31/50
313/313 [==============================] - 159s 507ms/step - loss: 0.1323 - accuracy: 0.9633
Epoch 32/50
313/313 [==============================] - 159s 507ms/step - loss: 0.1346 - accuracy: 0.9621
Epoch 33/50
313/313 [==============================] - 169s 540ms/step - loss: 0.1369 - accuracy: 0.9639
Epoch 34/50
313/313 [==============================] - 169s 541ms/step - loss: 0.1353 - accuracy: 0.9644
Epoch 35/50
313/313 [==============================] - 170s 543ms/step - loss: 0.1585 - accuracy: 0.9599
Epoch 36/50
313/313 [==============================] - 170s 543ms/step - loss: 4.2418 - accuracy: 0.7325
Epoch 37/50
313/313 [==============================] - 170s 542ms/step - loss: 0.1098 - accuracy: 0.9720
Epoch 38/50
313/313 [==============================] - 169s 539ms/step - loss: 0.0211 - accuracy: 0.9938
Epoch 39/50
313/313 [==============================] - 169s 540ms/step - loss: 0.0063 - accuracy: 0.9983
Epoch 40/50
313/313 [==============================] - 169s 540ms/step - loss: 0.0025 - accuracy: 0.9994
Epoch 41/50
313/313 [==============================] - 170s 543ms/step - loss: 4.4409e-04 - accuracy: 0.9999
Epoch 42/50
313/313 [==============================] - 168s 538ms/step - loss: 1.6461e-04 - accuracy: 1.0000
Epoch 43/50
313/313 [==============================] - 169s 540ms/step - loss: 9.7764e-05 - accuracy: 1.0000
Epoch 44/50
313/313 [==============================] - 168s 538ms/step - loss: 7.7123e-05 - accuracy: 1.0000
Epoch 45/50
313/313 [==============================] - 169s 541ms/step - loss: 6.3981e-05 - accuracy: 1.0000
Epoch 46/50
313/313 [==============================] - 169s 540ms/step - loss: 5.4254e-05 - accuracy: 1.0000
Epoch 47/50
313/313 [==============================] - 169s 541ms/step - loss: 4.6260e-05 - accuracy: 1.0000
Epoch 48/50
313/313 [==============================] - 168s 537ms/step - loss: 3.9702e-05 - accuracy: 1.0000
Epoch 49/50
313/313 [==============================] - 168s 538ms/step - loss: 3.4279e-05 - accuracy: 1.0000
Epoch 50/50
313/313 [==============================] - 168s 537ms/step - loss: 2.9718e-05 - accuracy: 1.0000

PS: a note on saving and loading the model.
First specify the path where the model should be saved:

SAVE_PATH = "D:/tensorflow_demo/deep learning/"

Saving the model:

model.save(SAVE_PATH + 'model')  # save the trained model to the specified directory

When the model is needed again, load it with:

model = tf.keras.models.load_model(SAVE_PATH + 'model')
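
Once loaded (or right after training), the model can be used for prediction. A minimal sketch that reuses the helper functions above and feeds in a freshly generated CAPTCHA (the variable names are only illustrative):

text, image = gen_captcha_text_and_image(char_set=CHAR_SET)
gray = convert2gray(image).reshape(1, IMAGE_HEIGHT, IMAGE_WIDTH, 1)  # add batch and channel dimensions
pred = model.predict(gray)  # shape (1, 4, 62): per-position probabilities
print('true:', text, ' predicted:', vec2text(pred[0]))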