1. Overview
K-Nearest Neighbors (KNN) is a statistical method used for both classification and regression. KNN is arguably one of the simplest classification algorithms, and it is also one of the most commonly used.
Note: KNN is a supervised classification algorithm. It looks superficially similar to another machine learning algorithm, K-means (which is unsupervised), but the two are fundamentally different.
2. Principle
Using some distance metric, KNN finds the k nearest neighbors of an input sample in the training set and predicts the input sample's label from the labels of those neighbors.
The K nearest neighbors: clearly, $\color{red}{\text{the choice of K is critical}}$. So what counts as a "nearest neighbor"? The principle of KNN is that, to predict a new point x, we look at which classes its K closest points belong to and assign x to the majority class.
- The green point in the figure is the one to be predicted. Suppose K = 3. KNN finds the three points closest to it (circled with a solid line) and checks which class is in the majority; in this example red triangles dominate, so the green point is classified as a red triangle.
- When K = 5, however, the decision changes: now blue squares are in the majority, so the green point is classified as a blue square (circled with a dashed line).
This example shows that the choice of K matters a great deal.
$\color{red}{\text{A suitable K can be found via cross-validation.}}$ The core idea is simply to try each candidate K in turn and keep the one that performs best, as in the sketch below.
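A minimal sketch of choosing K by cross-validation, assuming scikit-learn is available (the from-scratch implementation later in this post does not depend on it; `choose_k` and its candidate range are illustrative, not part of the original code):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def choose_k(X, y, candidates=range(1, 21)):
    '''Return the K with the best mean 5-fold cross-validation accuracy.'''
    best_k, best_score = None, -np.inf
    for k in candidates:
        # average accuracy of a K-NN classifier over 5 folds
        score = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```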
3. Common distance metrics in KNN
- Euclidean distance
Also known as the L2-norm distance, this is the most common distance in plane geometry: the straight-line distance between two points. In n-dimensional space, the Euclidean distance between points $x$ and $y$ is:
$$d(x, y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$$
- Manhattan distance
Also known as the L1-norm distance. The Manhattan distance is the length of a path between two points on a grid: unlike the straight-line distance above, movement is allowed only along the grid's horizontal and vertical directions. In n-dimensional space, the Manhattan distance between points $x$ and $y$ is:
$$d(x, y) = \sum_{i=1}^{n}|x_i - y_i|$$
Both metrics appear in the short NumPy sketch below.
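A minimal NumPy check of the two metrics (the vectors here are hypothetical examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 3.0])

euclidean = np.sqrt(np.sum((a - b) ** 2))  # L2: straight-line distance
manhattan = np.sum(np.abs(a - b))          # L1: grid-path distance

print(euclidean)  # sqrt(13) ≈ 3.6056
print(manhattan)  # 5.0
```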
4. Advantages of KNN
- The algorithm is simple and easy to implement, since its logic is not complex.
- Few parameters: the only choices required are the value of k and the distance metric used to compare samples.
5. A classic example: Helen's dating data
Helen has been using an online dating site to look for a suitable partner. Although the site recommends various candidates, she does not like all of them. After some reflection, she found that the people she has dated fall into three categories:
- People she did not like (didntLike)
- People of modest charm (smallDoses)
- People of great charm (largeDoses)
Helen has been collecting dating data for some time. She stores it in the text file datingTestSet.txt, one sample per line, 1000 lines in total. Each line records: flight mileage per year, percentage of time spent playing video games, liters of ice cream consumed per week, and the rating.
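For reference, each line of datingTestSet.txt is tab-separated: three numeric feature values followed by the text label. A few illustrative lines (the exact values here are hypothetical):

```
40920	8.326976	0.953952	largeDoses
14488	7.153469	1.673904	smallDoses
26052	1.441871	0.805124	didntLike
```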
Since the features differ greatly in magnitude, each feature must be normalized. The min-max scaling formula is:
$$x' = \frac{x - \min}{\max - \min}$$
```python
import numpy as np
import operator
from matplotlib.font_manager import FontProperties
import matplotlib.lines as mlines
import matplotlib.pyplot as plt
'''
Function : file2matrix(filename)
Description : convert the data file into a feature matrix and a label list
Args : filename
Rets : featureMatrix, the feature matrix parsed from the file
       labels, the label for each sample
'''
def file2matrix(filename):
    fread = open(filename)
    info = fread.readlines()
    featureMatrix = np.zeros((len(info), 3))
    labels = []
    index = 0
    for line in info:
        line = line.strip()
        listline = line.split('\t')
        featureMatrix[index, :] = listline[0:3]
        # map the text label to an integer class
        if listline[-1] == 'didntLike':
            labels.append(1)
        elif listline[-1] == 'smallDoses':
            labels.append(2)
        elif listline[-1] == 'largeDoses':
            labels.append(3)
        index += 1
    return featureMatrix, labels
'''
Function : normalize(featureMatrix)
Description : min-max normalize each feature column to [0, 1]
Args : featureMatrix
Rets : normFeatureMatrix
'''
def normalize(featureMatrix):
    # per-column minimum and maximum
    minVal = featureMatrix.min(0)
    maxVal = featureMatrix.max(0)
    ranges = maxVal - minVal
    row = featureMatrix.shape[0]
    normFeatureMatrix = featureMatrix - np.tile(minVal, (row, 1))
    normFeatureMatrix = normFeatureMatrix / np.tile(ranges, (row, 1))
    return normFeatureMatrix
'''
Function : visualize(featureMatrix, labels)
Description : visualize the data as pairwise scatter plots
Args : featureMatrix
       labels
Rets : None
'''
def visualize(featureMatrix, labels):
    font = FontProperties(size = 14)
    # fig : figure object, axs : subplots
    fig, axs = plt.subplots(nrows = 2, ncols = 2, sharex = False, sharey = False, figsize = (10, 10))
    labelColors = []
    # color each sample by its label (not by its index)
    for i in range(len(labels)):
        if labels[i] == 1:
            labelColors.append('black')
        elif labels[i] == 2:
            labelColors.append('orange')
        elif labels[i] == 3:
            labelColors.append('red')
    # subplot(0,0) scatter, s : marker size, alpha : transparency
    axs[0][0].scatter(x = featureMatrix[:, 0], y = featureMatrix[:, 1], color = labelColors, s = 15, alpha = 0.5)
    axs_0_title = axs[0][0].set_title(u'Route vs Game', fontproperties = font)
    axs_0_x = axs[0][0].set_xlabel(u'Route (km/year)', fontproperties = font)
    axs_0_y = axs[0][0].set_ylabel(u'Game (% of time)', fontproperties = font)
    plt.setp(axs_0_title, size = 9, weight = 'bold', color = 'red')
    plt.setp(axs_0_x, size = 7, weight = 'bold', color = 'black')
    plt.setp(axs_0_y, size = 7, weight = 'bold', color = 'black')
    # subplot(0,1)
    axs[0][1].scatter(x = featureMatrix[:, 0], y = featureMatrix[:, 2], color = labelColors, s = 15, alpha = 0.5)
    axs_1_title = axs[0][1].set_title(u'Route vs Icecream', fontproperties = font)
    axs_1_x = axs[0][1].set_xlabel(u'Route (km/year)', fontproperties = font)
    axs_1_y = axs[0][1].set_ylabel(u'Icecream (liters/week)', fontproperties = font)
    plt.setp(axs_1_title, size = 9, weight = 'bold', color = 'red')
    plt.setp(axs_1_x, size = 7, weight = 'bold', color = 'black')
    plt.setp(axs_1_y, size = 7, weight = 'bold', color = 'black')
    # subplot(1,0)
    axs[1][0].scatter(x = featureMatrix[:, 1], y = featureMatrix[:, 2], color = labelColors, s = 15, alpha = 0.5)
    axs_2_title = axs[1][0].set_title(u'Game vs Icecream', fontproperties = font)
    axs_2_x = axs[1][0].set_xlabel(u'Game (% of time)', fontproperties = font)
    axs_2_y = axs[1][0].set_ylabel(u'Icecream (liters/week)', fontproperties = font)
    plt.setp(axs_2_title, size = 9, weight = 'bold', color = 'red')
    plt.setp(axs_2_x, size = 7, weight = 'bold', color = 'black')
    plt.setp(axs_2_y, size = 7, weight = 'bold', color = 'black')
    # legend shared by the three scatter plots
    didntLike = mlines.Line2D([], [], color = 'black', marker = '.', markersize = 6, label = 'didntLike')
    smallDoses = mlines.Line2D([], [], color = 'orange', marker = '.', markersize = 6, label = 'smallDoses')
    largeDoses = mlines.Line2D([], [], color = 'red', marker = '.', markersize = 6, label = 'largeDoses')
    axs[0][0].legend(handles = [didntLike, smallDoses, largeDoses])
    axs[0][1].legend(handles = [didntLike, smallDoses, largeDoses])
    axs[1][0].legend(handles = [didntLike, smallDoses, largeDoses])
    plt.show()
'''
Function : kNN(test, featureMatrix, labels, k)
Description : predict the class of a test vector with the kNN algorithm
Args : test, the test vector
       featureMatrix
       labels
       k, the number of nearest neighbors
Rets : pred_class
'''
def kNN(test, featureMatrix, labels, k):
    row = featureMatrix.shape[0]
    # Euclidean distance from test to every training sample
    diff = np.tile(test, (row, 1)) - featureMatrix
    sqdiff = diff ** 2
    dist = sqdiff.sum(axis = 1)
    dist = dist ** 0.5
    dist_order = dist.argsort()
    # majority vote among the k nearest neighbors
    classes = {}
    for i in range(k):
        voteLabel = labels[dist_order[i]]
        classes[voteLabel] = classes.get(voteLabel, 0) + 1
    pred = sorted(classes.items(), key = operator.itemgetter(1), reverse = True)
    return pred[0][0]
'''
Function : train()
Description : evaluate the classifier on a 10% hold-out set and report the error rate
Args : None
Rets : None
'''
def train():
    filename = 'datingTestSet.txt'
    featureMatrix, labels = file2matrix(filename)
    normFeatureMatrix = normalize(featureMatrix)
    holdoutRatio = 0.1
    row = normFeatureMatrix.shape[0]
    numTest = int(holdoutRatio * row)
    errorcount = 0.0
    for i in range(numTest):
        # classify each hold-out sample against the remaining 90%
        result = kNN(normFeatureMatrix[i, :], normFeatureMatrix[numTest:row, :], labels[numTest:row], 4)
        print('pred : %d vs real : %d' % (result, labels[i]))
        if result != labels[i]:
            errorcount += 1.0
    print('Error rate : %f %%' % (errorcount / float(numTest) * 100))
'''
Function : score()
Description : classify a person from user-entered feature values
Args : None
Rets : None
'''
def score():
    filename = 'datingTestSet.txt'
    featureMatrix, labels = file2matrix(filename)
    # per-column minimum and maximum, needed to normalize the input
    minVal = featureMatrix.min(0)
    maxVal = featureMatrix.max(0)
    resultList = ['didntLike', 'smallDoses', 'largeDoses']
    normFeatureMatrix = normalize(featureMatrix)
    route = float(input('Enter flight mileage per year : '))
    game = float(input('Enter percentage of time spent gaming : '))
    iceCream = float(input('Enter liters of ice cream consumed per week : '))
    test = np.array([route, game, iceCream])
    # apply the same min-max scaling as the training data
    normTest = (test - minVal) / (maxVal - minVal)
    result = kNN(normTest, normFeatureMatrix, labels, 3)
    print('Score : %s' % (resultList[result - 1]))
if __name__ == '__main__':
    # featureMatrix, labels = file2matrix('datingTestSet.txt')
    # visualize(featureMatrix, labels)
    # train()
    score()
```
The experimental results are as follows:
The predictions match the actual labels. visualize() implements the data visualization, and train() runs the hold-out evaluation, giving an error rate of 3.00%, i.e. an accuracy of 97.00% (quite good).
6. Summary
Although the K-nearest neighbors algorithm has the advantages below, it also has some drawbacks.
Advantages:
- Simple and easy to use, easy to understand, accurate, and theoretically mature; it can be used for both classification and regression;
- Works with both numerical and categorical data;
- 'Training' time complexity is O(n) (KNN is a lazy learner, so this is essentially just the cost of storing the data); it makes no assumptions about the data distribution;
- Insensitive to outliers.
Disadvantages:
- High computational complexity and high space complexity;
- Sensitive to class imbalance (some classes have many samples while others have very few);
- Generally unsuitable when the dataset is very large, since the computation cost becomes too high; yet the sample size cannot be too small either, or misclassification becomes likely;
- Its biggest drawback is that it cannot reveal the intrinsic meaning of the data (it produces no interpretable model).