I stumbled across someone's face-swapping code written in Python and found it interesting, so I took a quick look at how it works. The underlying idea is not that complicated. In the end I tried a different approach of my own for the color correction of the pasted region, and this post is a record of that. The code came from the internet and goes back to the internet; it's all for fun.

Contents

Environment:

Algorithm steps:

Algorithm walkthrough:

Code


Environment:

  • python 3.5.4
  • numpy 1.14.3
  • dlib 19.1.0
  • opencv-python 3.4.1

Algorithm steps:

  1. Use the dlib library to detect the face region in each image and extract the facial landmarks.
  2. From the two sets of landmarks, compute the transformation matrix between the two faces.
  3. From the landmarks, take the convex hull to obtain a mask of the facial feature region.
  4. Correct the skin color.
  5. Paste the face according to the mask.

Algorithm walkthrough:

1. Face detection and landmark extraction

The original post simply calls dlib's frontal face detector, dlib.get_frontal_face_detector(), which returns the detected face region, and then extracts the 68 landmarks inside that region using dlib's pre-trained model shape_predictor_68_face_landmarks.dat.

def get_landmarks(im):
    rects = detector(im, 1)

    if len(rects) > 1:
        raise TooManyFaces
    if len(rects) == 0:
        raise NoFaces

    return numpy.matrix([[p.x, p.y] for p in predictor(im, rects[0]).parts()])

When I first read the original code, I couldn't see why the facial features were numbered and grouped the way they are (FACE_POINTS and so on). Drawing each landmark with its index onto the image made it clear immediately.
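A minimal stand-alone sketch of how the landmarks and their indices can be drawn (the model path matches the full listing below; face.jpg is just a placeholder file name):

import cv2 as cv
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('model/shape_predictor_68_face_landmarks.dat')

im = cv.imread('face.jpg')                     # placeholder input
rect = detector(im, 1)[0]                      # first detected face
points = [(p.x, p.y) for p in predictor(im, rect).parts()]

# draw every landmark together with its index (0-67)
for idx, pos in enumerate(points):
    cv.circle(im, pos, 3, (0, 255, 255))
    cv.putText(im, str(idx), pos, cv.FONT_HERSHEY_SCRIPT_SIMPLEX, 0.4, (0, 0, 255))

cv.imshow('landmarks', im)
cv.waitKey()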

Results:

[Figure: the 68 landmarks drawn on the image with their indices]

2. Computing the transformation matrix

    Why do we need a transformation matrix? The faces in images A and B will not match exactly in size, angle and so on, so pasting one directly onto the other is bound to look wrong. Instead, we use the two sets of landmarks to compute a transformation that makes A's facial features resemble B's as closely as possible; this is simply an affine transformation. The original post estimates it from the two full sets of landmarks, which made my head spin, so I took a shortcut: take points 17, 26 and 57 (the two brows and the lower lip) from each set, use OpenCV's getAffineTransform to obtain the affine matrix, and then warp A with warpAffine. Here "Baoqiang" is aligned against the reference image of Yaya, as sketched below.
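A rough sketch of this shortcut, assuming im1/im2 and landmarks1/landmarks2 are defined as in the full listing below. Here the matrix is computed directly from im2's points to im1's points; the full listing computes it the other way round and passes cv.WARP_INVERSE_MAP to warpAffine, which amounts to the same warp:

import cv2 as cv
import numpy as np

TRANSFORM_POINT = [17, 26, 57]  # the two brows and the lower lip

# three corresponding points; getAffineTransform requires float32
src = np.asarray(landmarks2[TRANSFORM_POINT], dtype=np.float32)
dst = np.asarray(landmarks1[TRANSFORM_POINT], dtype=np.float32)

M = cv.getAffineTransform(src, dst)  # 2x3 affine matrix mapping im2's frame to im1's
warped_im2 = cv.warpAffine(im2, M, (im1.shape[1], im1.shape[0]))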

[Figures: the input face and the affine-aligned result]

3. Computing the facial feature mask

    Take the convex hull of the 68 landmarks and fill it; that is the mask. Remember that the mask must also go through the affine transform, and that the original post takes the union of the two images' masks. A sketch follows below.
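A minimal sketch of building such a mask (the helper name face_mask_sketch and the feather size are illustrative; the full listing below fills the eye/brow and nose/mouth hulls separately via OVERLAY_POINTS rather than a single hull of all 68 points):

import cv2 as cv
import numpy as np

def face_mask_sketch(im, landmarks, feather=11):
    # fill the convex hull of the landmarks with 1.0
    mask = np.zeros(im.shape[:2], dtype=np.float64)
    hull = cv.convexHull(np.array(landmarks, dtype=np.int32))
    cv.fillConvexPoly(mask, hull, 1.0)
    # stack to 3 channels and feather the boundary with a Gaussian blur
    mask = np.dstack([mask, mask, mask])
    return cv.GaussianBlur(mask, (feather, feather), 0)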

[Figure: the facial feature mask]

4. Skin color correction

    The original post's color correction (something blur-based) was painful to follow, so I switched to histogram matching: the masked region of A is histogram-matched against the masked region of B. The result is shown below; it looks a bit like a monkey's backside, but you can see the skin has been lightened.

[Figure: result after histogram matching of the skin color]
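For reference, the idea behind histogram matching in a compact, vectorized form: remap the source grey levels so that their cumulative distribution follows the reference's. This is only a per-channel sketch using np.interp with a single-channel mask; the actual hist_match.py used here (listed further below) does the same thing with explicit loops:

import numpy as np

def match_channel(src, ref, mask=None):
    # select the pixels to match (all of them if no mask is given)
    sel = mask > 0 if mask is not None else np.ones(src.shape, dtype=bool)
    src_vals, src_counts = np.unique(src[sel], return_counts=True)
    ref_vals, ref_counts = np.unique(ref[sel], return_counts=True)
    src_cdf = np.cumsum(src_counts) / src_counts.sum()
    ref_cdf = np.cumsum(ref_counts) / ref_counts.sum()
    # for each source quantile, find the reference grey level with the closest CDF value
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    lut = np.interp(np.arange(256), src_vals, mapped)  # full 0-255 lookup table
    out = src.astype(np.float64)
    out[sel] = lut[src[sel]]
    return np.clip(out, 0, 255).astype(np.uint8)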

5. Pasting according to the mask

    Where the mask is 1, take Baoqiang's pixels; where it is 0, take Yaya's.

output_im_hist = im1 * (1.0 - combined_mask) + histMatch_im * combined_mask

The result is rather frightening, and the seam is obvious... (facepalm)

 

[Figure: final result with histogram-matched color correction]

For comparison, here is the result from the original post's code, which honestly looks a bit better... (facepalm)

[Figure: result from the original post's code]

Oh well, it's all for fun~

Code

#!/usr/bin/python

import cv2 as cv
import dlib
import numpy as np
from hist_match import histMatch

import sys

PREDICTOR_PATH = 'model/shape_predictor_68_face_landmarks.dat'
SCALE_FACTOR = 1
FEATHER_AMOUNT = 11

FACE_POINTS = list(range(17, 68))
MOUTH_POINTS = list(range(48, 61))
RIGHT_BROW_POINTS = list(range(17, 22))
LEFT_BROW_POINTS = list(range(22, 27))
RIGHT_EYE_POINTS = list(range(36, 42))
LEFT_EYE_POINTS = list(range(42, 48))
NOSE_POINTS = list(range(27, 35))
JAW_POINTS = list(range(0, 17))

# Points used to line up the images.
ALIGN_POINTS = (LEFT_BROW_POINTS + RIGHT_EYE_POINTS + LEFT_EYE_POINTS +
                RIGHT_BROW_POINTS + NOSE_POINTS + MOUTH_POINTS)

# Points from the second image to overlay on the first. The convex hull of each
# element will be overlaid.
OVERLAY_POINTS = [
    LEFT_EYE_POINTS + RIGHT_EYE_POINTS + LEFT_BROW_POINTS + RIGHT_BROW_POINTS,
    NOSE_POINTS + MOUTH_POINTS,
]

TRANSFORM_POINT = [17,26,57]

# Amount of blur to use during colour correction, as a fraction of the
# pupillary distance.
COLOUR_CORRECT_BLUR_FRAC = 0.6

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)


class TooManyFaces(Exception):
    pass


class NoFaces(Exception):
    pass


def get_landmarks(im, winname='debug'):
    rects = detector(im, 1)

    if len(rects) > 1:
        raise TooManyFaces
    if len(rects) == 0:
        raise NoFaces

    # draw the detected face rectangle (debug helper; the image is not displayed here)
    draw = im.copy()
    for _, d in enumerate(rects):
        cv.rectangle(draw, (d.left(), d.top()), (d.right(), d.bottom()), (0, 255, 0), 3)

    return np.matrix([[p.x, p.y] for p in predictor(im, rects[0]).parts()])


def annotate_landmarks(im, landmarks):
    im = im.copy()
    for idx, point in enumerate(landmarks):
        pos = (point[0, 0], point[0, 1])
        cv.putText(im, str(idx), pos,
                    fontFace=cv.FONT_HERSHEY_SCRIPT_SIMPLEX,
                    fontScale=0.4,
                    color=(0, 0, 255))
        cv.circle(im, pos, 3, color=(0, 255, 255))
    return im


def draw_convex_hull(im, points, color):
    points = cv.convexHull(points) # 得到凸包
    cv.fillConvexPoly(im, points, color=color) # 绘制填充


def get_face_mask(im, landmarks):
    im = np.zeros(im.shape[:2], dtype=np.float64)

    for group in OVERLAY_POINTS:
        draw_convex_hull(im,
                         landmarks[group],
                         color=1)

    im = np.array([im, im, im]).transpose((1, 2, 0))  # stack the single-channel mask into 3 channels

    # binarize the blurred hull, then blur again to feather the mask edge
    im = (cv.GaussianBlur(im, (FEATHER_AMOUNT, FEATHER_AMOUNT), 0) > 0) * 1.0
    im = cv.GaussianBlur(im, (FEATHER_AMOUNT, FEATHER_AMOUNT), 0)

    return im


# read an image file and extract its landmarks
def read_im_and_landmarks(fname):
    im = cv.imread(fname, cv.IMREAD_COLOR)

    im = cv.resize(im, (im.shape[1] * SCALE_FACTOR,
                         im.shape[0] * SCALE_FACTOR))

    # extract the 68 landmarks
    s = get_landmarks(im, fname)  # 68x2 matrix

    return im, s


def warp_im(mask, M, dshape):
    output_im = np.zeros(dshape, dtype=mask.dtype)
    cv.warpAffine(mask,
                   M[:2],
                   (dshape[1], dshape[0]),
                   dst=output_im,
                   borderMode=cv.BORDER_TRANSPARENT,
                   flags=cv.WARP_INVERSE_MAP)
    return output_im


def getAffineTransform(_srcPoint,_dstPoint):
    srcPoint = _srcPoint.astype(np.float32)
    dstPoint = _dstPoint.astype(np.float32)
    return cv.getAffineTransform(srcPoint,dstPoint)


# the face from im2 (first argument) is pasted onto im1 (second argument)
im1, landmarks1 = read_im_and_landmarks(sys.argv[2])
im2, landmarks2 = read_im_and_landmarks(sys.argv[1])

cv.imshow('face1', im1)
cv.imshow('face2', im2)


# affine transformation matrix between the two sets of landmarks
M = getAffineTransform(landmarks1[TRANSFORM_POINT],landmarks2[TRANSFORM_POINT])

mask = get_face_mask(im2, landmarks2)

warped_mask = warp_im(mask, M, im1.shape)


combined_mask = np.max([get_face_mask(im1, landmarks1), warped_mask], axis=0)  # union of the two images' masks

warped_im2 = warp_im(im2, M, im1.shape)

histMatch_im = histMatch(warped_im2.astype(np.uint8),im1,mask=combined_mask)

output_im_hist = im1 * (1.0 - combined_mask) + histMatch_im * combined_mask

output_im_hist = output_im_hist.astype(np.uint8)

cv.imshow('changeface', output_im_hist)
cv.waitKey()

hist_match.py

import cv2 as cv
import numpy as np

def histMatch_core(src, dst, mask=None):
    srcHist = [0] * 256
    dstHist = [0] * 256
    srcProb = [.0] * 256  # grey-level probabilities of the source image
    dstProb = [.0] * 256  # grey-level probabilities of the destination image


    for h in range(src.shape[0]):
        for w in range(src.shape[1]):
            if mask is None:
                srcHist[int(src[h, w])] += 1
                dstHist[int(dst[h, w])] += 1
            else:
                if mask[h, w] > 0:
                    srcHist[int(src[h, w])] += 1
                    dstHist[int(dst[h, w])] += 1


    # number of pixels taken into account (only masked pixels if a mask is given)
    resolution = src.shape[0] * src.shape[1]

    if mask is not None:
        resolution = 0
        for h in range(mask.shape[0]):
            for w in range(mask.shape[1]):
                if mask[h, w] > 0:
                    resolution += 1

    for i in range(256):
        srcProb[i] = srcHist[i] / resolution
        dstProb[i] = dstHist[i] / resolution

    # histogram equalization maps
    srcMap = [0] * 256
    dstMap = [0] * 256

    # cumulative probabilities
    for i in range(256):
        srcTmp = .0
        dstTmp = .0
        for j in range(i + 1):
            srcTmp += srcProb[j]
            dstTmp += dstProb[j]

        srcMapTmp = srcTmp * 255 + .5
        dstMapTmp = dstTmp * 255 + .5
        srcMap[i] = srcMapTmp if srcMapTmp <= 255.0 else 255.0
        dstMap[i] = dstMapTmp if dstMapTmp <= 255.0 else 255.0

    matchMap = [0] * 256
    for i in range(256):
        pixel = 0
        pixel_2 = 0
        num = 0  # one source level may map to several destination levels
        cur = int(srcMap[i])
        for j in range(256):
            tmp = int(dstMap[j])
            if cur == tmp:
                pixel += j
                num += 1
            elif cur < tmp:  # the cumulative distribution is non-decreasing, so stop here
                pixel_2 = j
                break

        matchMap[i] = int(pixel / num) if num > 0 else int(pixel_2)

    newImg = np.zeros(src.shape[:2], dtype=np.uint8)
    for h in range(src.shape[0]):
        for w in range(src.shape[1]):
            if mask is None:
                newImg[h,w] = matchMap[src[h,w]]
            else:
                if mask[h,w] > 0:
                    newImg[h, w] = matchMap[src[h, w]]
                else:
                    newImg[h, w] = src[h, w]

    return newImg



# src1 src2 mask must have the same size
def histMatch(src1,src2,mask = None,dst = None):

    sB,sG,sR = cv.split(src1)
    dB,dG,dR = cv.split(src2)

    # guard: the default mask is None, and a single-channel mask has no third dimension
    if mask is not None and mask.ndim == 3:
        mB, mG, mR = cv.split(mask)
        nB = histMatch_core(sB, dB, mB)
        nG = histMatch_core(sG, dG, mG)
        nR = histMatch_core(sR, dR, mR)
    else:
        nB = histMatch_core(sB,dB,mask)
        nG = histMatch_core(sG,dG,mask)
        nR = histMatch_core(sR,dR,mask)

    newImg = cv.merge([nB,nG,nR])

    if dst is not None:
        np.copyto(dst, newImg)  # copy into the caller's buffer; plain assignment would only rebind the local name

    return newImg

 

Reference:



http://matthewearl.github.io/2015/07/28/switching-eds-with-python/