Random Forest:

      1. Definition:

A random forest is a classifier that trains many decision trees on the samples and combines their predictions; it can be used for both classification and regression. It is an ensemble algorithm built on top of multiple decision trees. The common decision tree algorithms are: ID3 (feature selection by information gain), C4.5 (information gain ratio, g_R(D,A) = g(D,A) / H_A(D), where H_A(D) is the entropy of the data with respect to the values of feature A), and CART (Gini index).

The larger a feature's information gain, the more that feature reduces the entropy of the samples, i.e., the stronger its ability to move the data from uncertainty toward certainty.
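As a minimal illustration (the toy feature and labels below are made up for demonstration), information gain is just the entropy of the labels minus the weighted entropy of the labels after splitting on the feature:

### Toy information-gain computation (illustrative data only)
import numpy as np

def entropy(labels):
    # H(D) = -sum(p_k * log2(p_k)) over the class proportions p_k
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    # g(D, A) = H(D) - H(D|A): entropy before the split minus the weighted entropy after it
    gain = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain

feature = np.array([0, 0, 0, 1, 1, 1])              # a binary feature
labels  = np.array(['M', 'M', 'R', 'R', 'R', 'R'])  # class labels
print(information_gain(feature, labels))            # the larger, the more the feature reduces entropy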

  • 2. Bagging and Boosting in random forests: concepts and differences

       Ensemble learning algorithms fall into bagging and boosting; random forest belongs to the bagging family.

Bagging procedure (a short sketch follows the list below):

   1. Use bootstrapping (sampling with replacement) to draw N training samples from the original sample set; repeat for K rounds to obtain K training sets (the K sets are independent of one another, and elements may repeat within a set).

 2. Train one model on each of the K training sets, K models in total (the model type depends on the problem, e.g., decision trees or kNN).

3. For classification, the final result is produced by voting; for regression, the mean of the K models' predictions is taken as the final prediction.
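A minimal sketch of these three bagging steps, assuming X and y are numpy arrays and using a sklearn decision tree as the base model (the function names here are illustrative, not from any library):

### Bagging sketch: K bootstrap samples -> K models -> vote (classification) or mean (regression)
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, k=10, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    n = len(X)
    for _ in range(k):
        idx = rng.integers(0, n, size=n)  # bootstrap: draw n row indices with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_vote(models, X):
    votes = np.array([m.predict(X) for m in models])  # shape (k, n_samples)
    # majority vote per sample; for regression, votes.mean(axis=0) would be used instead
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])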

Boosting:

    Each training sample is given a weight Wi that controls how much attention it receives. When a sample is misclassified, its weight is increased, and the process iterates; every iteration produces a small weak classifier, and in the end the weak classifiers are combined by some strategy into the final model (AdaBoost assigns each weak classifier a weight and combines them linearly into the final classifier).
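A rough numpy sketch of one boosting round in that spirit, using a depth-1 decision stump as the weak classifier and labels in {-1, +1} (the update follows the standard AdaBoost formulas, written out here only for illustration):

### One AdaBoost-style round: misclassified samples get larger weights
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_round(X, y, w):
    # w: current sample weights (assumed to sum to 1); y in {-1, +1}
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y))                      # weighted error of this weak classifier
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # its weight in the final linear combination
    w = w * np.exp(-alpha * y * pred)                  # wrong predictions (y * pred = -1) get boosted
    return stump, alpha, w / w.sum()
# after M rounds the final classifier is sign(sum_m alpha_m * G_m(x))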

Differences: 1. Bagging uses random sampling with replacement; in Boosting the training set stays the same in every round and only the sample weights change.

            2. Bagging draws samples with equal weight; Boosting adjusts sample weights according to the error rate.

            3. In Bagging every prediction function has equal weight; in Boosting, the smaller a predictor's error, the larger its weight.

Bagging + decision trees = Random Forest

AdaBoost + decision trees = boosted trees

Gradient Boosting + decision trees = GBDT
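For reference, these three combinations map directly onto sklearn estimators; a minimal instantiation sketch (parameter defaults differ slightly across sklearn versions):

### The three "base algorithm + decision trees" ensembles in sklearn
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

rf = RandomForestClassifier(n_estimators=100)                 # Bagging + decision trees
ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         n_estimators=100)                    # AdaBoost + decision stumps (boosted trees)
gbdt = GradientBoostingClassifier(n_estimators=100)           # Gradient Boosting + decision trees (GBDT)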

  • Summary: when a random forest is used for classification, N decision trees each classify the sample and the final class is obtained by a simple majority vote.

ExtraTrees (extremely randomized trees) vs. random forest: ET builds each decision tree from all training samples (no bootstrap), and at each split ET chooses the split point completely at random, whereas random forest searches for the best attribute within a random feature subset.
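In sklearn terms the same contrast looks like the sketch below: ExtraTreesClassifier defaults to bootstrap=False (each tree is grown on the whole training set) and picks split thresholds at random, while RandomForestClassifier bootstraps the data and searches for the best split within a random feature subset.

### ExtraTrees vs. random forest in sklearn
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

et = ExtraTreesClassifier(n_estimators=100)    # all samples per tree, completely random split points
rf = RandomForestClassifier(n_estimators=100)  # bootstrap samples, best split inside a random feature subset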

To validate the results we use the cross-validation utility from the machine-learning package: the training data is split into folds (e.g., 9 folds for training and 1 for testing), each fold takes a turn as the test set, and the mean of the fold scores is reported as the final result.

Input parameters of the random forest:

class sklearn.ensemble.RandomForestClassifier(n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=1, random_state=None, verbose=0, warm_start=False, class_weight=None)

A few of the most important parameters:

   n_estimators: number of decision trees in the forest, default 10

criterion: split-quality measure for the trees, default = 'gini' (Gini impurity); 'entropy' selects information gain instead
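As a small usage sketch of these parameters (the values below are purely illustrative): setting oob_score=True additionally gives an out-of-bag accuracy estimate without a separate validation set.

### Illustrative parameter settings (values chosen only for demonstration)
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100,     # more trees than the default 10
                             criterion='gini',     # or 'entropy' for information gain
                             max_depth=15,
                             max_features='sqrt',  # size of the random feature subset per split
                             oob_score=True,       # score each tree on its out-of-bag samples
                             random_state=0)
# after clf.fit(X, y), clf.oob_score_ holds the out-of-bag accuracy estimate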

### Random forest on the sonar.all-data dataset, using sklearn
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
def load_data(filename):
    # read the comma-separated data file, one row per line
    data_set = []
    with open(filename , 'r') as file:
        for line in file.readlines():
            data_set.append(line.strip('\n').split(','))
    return data_set

def column_to_float(dataSet):
    # convert every column except the label column to float,
    # and map the labels 'R' -> 1, 'M' -> 0
    featLen = len(dataSet[0]) - 1
    X = []
    y = []
    for data in dataSet:
        for column in range(featLen):
            data[column] = float(data[column].strip())
        if data[-1] == 'R':
            y.append(1)
        elif data[-1] == 'M':
            y.append(0)

        X.append(np.array(data[0:featLen]))
    y = np.array(y)
    return X, y

if __name__ == '__main__':
    dataSet = load_data('sonar.all-data')
    X,y = column_to_float(dataSet)
    ### sklearn random forest with cross-validation: mean accuracy around 0.6; adding more trees improves it somewhat
    clf2 = RandomForestClassifier(n_estimators= 10, max_depth= 15, max_features= 15 ,min_samples_split=2, random_state= 0)
    scores2 = cross_val_score(clf2, X, y)
    print(scores2.mean())

With these parameters, the random forest with cross-validation reaches a prediction accuracy of about 0.63.

Hand-coding the same approach with CART:

     The final accuracy hovers around 0.64.

### Random forest on the sonar.all-data dataset, hand-coded with CART
import csv
from random import randrange
from random import seed
def loadCSV(filename):  # load the data, appending each row to a list
    dataSet = []
    with open(filename, 'r') as file:
        csvReader = csv.reader(file)
        for line in csvReader:
            dataSet.append(line)
    return dataSet

# convert every column except the label column to float
def column_to_float(dataSet):
    featLen = len(dataSet[0]) - 1
    for data in dataSet:
        for column in range(featLen):
            data[column] = float(data[column].strip())


def splitDataSet(dataSet, n_folds):
    '''
    Split the data into n_folds equal-sized folds. The folds must all have the
    same size, so if the total row count is not divisible by n_folds the
    leftover rows are discarded.
    :param dataSet: list of rows
    :param n_folds: number of folds
    :return: list of folds
    '''
    print (len(dataSet))
    fold_size = int(len(dataSet) / n_folds)
    dataSet_copy = list(dataSet)
    dataSet_spilt = []
    for i in range(n_folds):
        fold = []
        while len(fold) < fold_size:  # while (not if): keep drawing rows until the fold is full
            index = randrange(len(dataSet_copy))
            fold.append(dataSet_copy.pop(index))  # pop() removes the element at the given index and returns it
        dataSet_spilt.append(fold)
    return dataSet_spilt

def get_subsample(dataSet, ratio):
    '''
    Build a random subsample of the data (drawn with replacement) for
    training one tree of the forest.
    :param dataSet: list of rows
    :param ratio: fraction of rows to sample (float)
    :return: the subsample
    '''
    subdataSet = []
    lenSubdata = round(len(dataSet) * ratio)
    while len(subdataSet) < lenSubdata:
        index = randrange(len(dataSet))  # draw a random row index, with replacement
        subdataSet.append(dataSet[index])
    return subdataSet

def data_split(dataSet, index, value):
    '''
    Split the data into two branches on the given feature index and value.
    :param dataSet: list of rows
    :param index: feature column to split on
    :param value: split threshold
    :return: (left, right) branches
    '''
    left = []
    right = []
    for row in dataSet:
        if row[index] < value:
            left.append(row)
        else:
            right.append(row)
    return left, right

def split_loss(left, right, class_values):
    # Gini-style split cost: sum of p * (1 - p) over the classes in each branch
    loss = 0.0
    for class_value in class_values:
        left_size = len(left)
        if left_size != 0:
            prop = [row[-1] for row in left].count(class_value) / float(left_size)
            loss += (prop * (1.0 - prop))
        right_size = len(right)
        if right_size != 0:
            prop = [row[-1] for row in right].count(class_value) / float(right_size)
            loss += (prop * (1.0 - prop))
    return loss

def get_best_split(dataSet, n_features):
    '''
    Randomly pick n_features candidate features and, among them, choose the
    feature and value that give the best (lowest-cost) split.
    :param dataSet: list of rows
    :param n_features: number of candidate features per split
    :return: dict describing the best split
    '''
    features = []
    class_values = list(set(row[-1] for row in dataSet))
    b_index, b_value, b_loss, b_left, b_right = 999, 999, 999, None, None
    while len(features) < n_features:
        index = randrange(len(dataSet[0]) -1)
        if index not in features:
            features.append(index)  # randomly pick n_features distinct feature columns

    for index in features:  # find the candidate feature whose best split has the smallest loss
        for row in dataSet:
            left, right = data_split(dataSet, index, row[index])  # split on this feature/value pair
            loss = split_loss(left, right, class_values)
            if loss < b_loss:  # keep the split with the smallest cost
                b_index, b_value, b_loss, b_left, b_right = index, row[index], loss, left, right
    return {'index': b_index, 'value': b_value, 'left': b_left, 'right': b_right}

def decide_label(data):
    # a leaf's label is the majority class among its rows
    output = [row[-1] for row in data]
    return max(set(output), key=output.count)

def sub_split(root, n_features, max_depth, min_size, depth):
    '''
    Recursively split the node until a full decision tree has been built.
    :param root: the current node (a dict from get_best_split)
    :param n_features: number of candidate features per split
    :param max_depth: maximum tree depth
    :param min_size: minimum rows required to keep splitting
    :param depth: current depth
    :return:
    '''
    left = root['left']
    right = root['right']

    del(root['left'])
    del(root['right'])

    if not left or not right:
        root['left'] = root['right'] = decide_label(left + right)
        return
    if depth > max_depth:
        root['left'] = decide_label(left)
        root['right'] = decide_label(right)
        return
    if len(left) < min_size:
        root['left'] = decide_label(left)
    else:
        root['left'] = get_best_split(left, n_features)

        sub_split(root['left'], n_features, max_depth, min_size, depth + 1)
    if len(right) < min_size:
        root['right'] = decide_label(right)
    else:
        root['right'] = get_best_split(right, n_features)

        sub_split(root['right'], n_features, max_depth, min_size, depth + 1)

def build_tree(dataSet, n_features, max_depth, min_size):
    '''
    Build one decision tree: find the best root split, then split recursively.
    :param dataSet: training rows for this tree
    :param n_features: number of candidate features per split
    :param max_depth: maximum tree depth
    :param min_size: minimum rows required to keep splitting
    :return: the root node of the tree
    '''
    root = get_best_split(dataSet, n_features)
    sub_split(root, n_features, max_depth, min_size,1)
    return root

def predict(tree, row):
    # walk down the tree: go left when the row's value on the split feature is below the split value
    if row[tree['index']] < tree['value']:
        if isinstance(tree['left'], dict):
            return predict(tree['left'], row)
        else:
            return tree['left']
    else:
        if isinstance(tree['right'], dict):
            return predict(tree['right'], row)
        else:
            return tree['right']

def bagging_predict(trees, row):
    # majority vote over the predictions of the individual trees
    predictions = [predict(tree, row) for tree in trees]
    return max(set(predictions), key=predictions.count)


def random_forest(train, test, ratio, n_features, max_depth, min_size, n_trees):
    '''
    Random forest prediction; each tree is built with the CART splitting strategy.
    :param train: training rows
    :param test: test rows (labels removed)
    :param ratio: subsample ratio per tree
    :param n_features: number of candidate features per split
    :param max_depth: maximum tree depth
    :param min_size: minimum rows required to keep splitting
    :param n_trees: number of trees in the forest
    :return: predicted labels for the test rows
    '''
    trees = []
    for i in range(n_trees):
        sub_train = get_subsample(train, ratio)  # draw a random subsample of the training data for this tree
        tree = build_tree(sub_train, n_features, max_depth, min_size)

        trees.append(tree)
    predict_values = [bagging_predict(trees, row) for row in test]
    return  predict_values


def accuracy(predict_values, actual):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predict_values[i]:
            correct += 1
    return correct / float(len(actual))

if __name__ == '__main__':
    seed(1)
    dataSet = loadCSV('sonar.all-data')
    column_to_float(dataSet)

    n_folds = 5  ## number of folds for cross-validation
    max_depth = 15
    min_size = 1
    ratio = 1.0
    n_features = 15
    n_trees = 10
    ### each fold gets an equal number of rows; since the total row count may not divide evenly, some rows are dropped
    dataSetChunk = splitDataSet(dataSet, n_folds)
    scores = []
    for chunk in dataSetChunk:
        train_set = dataSetChunk[:]
        train_set.remove(chunk)

        train_set = sum(train_set, [])
        test_set = []

        for row in chunk:
            row_copy = list(row)
            row_copy[-1] = None
            test_set.append(row_copy)
        actual = [row[-1] for row in chunk]
        predict_values = random_forest(train_set, test_set, ratio, n_features, max_depth, min_size, n_trees)
        accur = accuracy(predict_values, actual)
        scores.append(accur)
    print('Trees is %d' % n_trees)
    print('scores:%s' % scores)
    print('mean score:%s' % (sum(scores) / float(len(scores))))