# Introduction
Using LendingClub data, this article builds a credit scorecard: chi-square binning (ChiMerge), WOE encoding, IV calculation, and univariate and multivariate (VIF) analysis, followed by training a logistic regression model. During variable selection one could also try adding an L1 penalty or screening variables with a random forest. Finally, the model is evaluated.
###### Keywords: chi-square binning, WOE, IV, variable analysis, logistic regression

#### 1. Data Preprocessing
Data cleaning: data selection, format conversion, and missing-value imputation.
Because the loan term (term) takes several values, and the default probability estimated by an application scorecard must be measured over a single, not-too-long horizon, only the 36-month loans are used for modeling, split 60% training / 40% test.

```
import os
import pandas as pd
from sklearn.model_selection import train_test_split

folderOfData = os.path.join(os.getcwd(), 'data')
allData = pd.read_csv(os.path.join(folderOfData, 'application.csv'), header=0, encoding='latin1')
allData['term'] = allData['term'].apply(lambda x: int(x.replace(' months', '')))
# Build the label: Fully Paid = good, Charged Off = default
allData['y'] = allData['loan_status'].map(lambda x: int(x == 'Charged Off'))
allData1 = allData.loc[allData.term == 36]
trainData, testData = train_test_split(allData1, test_size=0.4)
```


Further cleaning:
1. Convert int_rate from a percentage to a decimal
2. Map emp_length: 10+ years to 11, < 1 year to 0, missing to -1
3. Reduce desc to two states: present vs. missing
4. Normalize dates
5. Compute the number of months between two dates

```
# Convert percentages with '%' to floats
trainData['int_rate_clean'] = trainData['int_rate'].map(lambda x: float(x.replace('%', '')) / 100)

# Convert employment length to integers so it sorts correctly
trainData['emp_length_clean'] = trainData['emp_length'].map(CareerYear)

# Treat a missing desc as one state and a non-missing desc as another
trainData['desc_clean'] = trainData['desc'].map(DescExisting)

# Normalize dates: earliest_cr_line comes in inconsistent formats and must be
# unified and converted to Python dates
trainData['app_date_clean'] = trainData['issue_d'].map(lambda x: ConvertDateStr(x))
trainData['earliest_cr_line_clean'] = trainData['earliest_cr_line'].map(lambda x: ConvertDateStr(x))

# Handle mths_since_last_delinq. The raw values include 0, so use -1 for missing
trainData['mths_since_last_delinq_clean'] = trainData['mths_since_last_delinq'].map(lambda x: MakeupMissing(x))
trainData['mths_since_last_record_clean'] = trainData['mths_since_last_record'].map(lambda x: MakeupMissing(x))
trainData['pub_rec_bankruptcies_clean'] = trainData['pub_rec_bankruptcies'].map(lambda x: MakeupMissing(x))
```
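
The helper functions used above (CareerYear, DescExisting, ConvertDateStr, MonthGap, MakeupMissing) come from the accompanying script and are not shown in the original. A minimal sketch of plausible implementations (the exact date format and fallback behavior are assumptions):

```python
import datetime
import numpy as np

def CareerYear(x):
    # '10+ years' -> 11, '< 1 year' -> 0, missing -> -1, otherwise the leading integer
    if isinstance(x, float) and np.isnan(x):
        return -1
    if '10+' in x:
        return 11
    if '< 1' in x:
        return 0
    return int(x.split()[0])

def DescExisting(x):
    # collapse desc to two states: missing vs. present
    return 'no desc' if isinstance(x, float) and np.isnan(x) else 'desc'

def ConvertDateStr(x, fmt='%b-%Y'):
    # parse dates like 'Dec-2011'; unparsable values fall back to a sentinel date
    try:
        return datetime.datetime.strptime(x, fmt)
    except (TypeError, ValueError):
        return datetime.datetime(2050, 1, 1)

def MonthGap(earlyDate, lateDate):
    # number of whole months between two dates (0 if the order is reversed)
    if earlyDate <= lateDate:
        return (lateDate.year - earlyDate.year) * 12 + lateDate.month - earlyDate.month
    return 0

def MakeupMissing(x):
    # the raw column contains legitimate 0s, so missing values become -1
    return -1 if np.isnan(x) else x
```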

#### 2. Variable Derivation and Selection
- Derivation:
1. The ratio of the requested loan amount to annual income
2. The span from earliest_cr_line to the application date, in months
```
# Ratio of the requested loan amount to annual income
trainData['limit_income'] = trainData.apply(lambda x: x.loan_amnt / x.annual_inc, axis=1)
# Span from earliest_cr_line to the application date, in months
trainData['earliest_cr_to_app'] = trainData.apply(lambda x: MonthGap(x.earliest_cr_line_clean, x.app_date_clean), axis=1)
```
- Selection:
We make an initial selection of variables, in two groups: numerical (continuous) variables and categorical variables.

```
num_features = ['int_rate_clean', 'emp_length_clean', 'annual_inc', 'dti', 'delinq_2yrs', 'earliest_cr_to_app',
                'inq_last_6mths', 'mths_since_last_record_clean', 'mths_since_last_delinq_clean',
                'open_acc', 'pub_rec', 'total_acc', 'limit_income']
cat_features = ['home_ownership', 'verification_status', 'desc_clean', 'purpose', 'zip_code',
                'addr_state', 'pub_rec_bankruptcies_clean']
```

#### 3. Chi-square (ChiMerge) Binning
We bin with ChiMerge and require that after binning:
1. There are at most 5 bins (the default cap in this model)
2. The bad rate is monotone across bins
3. Every bin contains both good and bad samples
4. A special value such as -1 gets its own bin, which is excluded from the bad-rate monotonicity check
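
The ChiMerge routine used throughout this section is only referenced, not shown. As context, a simplified sketch (without the special_attribute and minBinPcnt options used later, which are assumptions about the full version): start with every distinct value in its own bin and repeatedly merge the adjacent pair with the smallest chi-square statistic until the target bin count is reached.

```python
import numpy as np
import pandas as pd

def Chi2(df, total_col, bad_col):
    # chi-square statistic of a group of rows against their pooled bad rate
    total = df[total_col].sum()
    badRate = df[bad_col].sum() / total
    goodRate = 1 - badRate
    chi = 0.0
    for _, row in df.iterrows():
        expected_bad = row[total_col] * badRate
        expected_good = row[total_col] * goodRate
        observed_bad = row[bad_col]
        observed_good = row[total_col] - row[bad_col]
        if expected_bad > 0:
            chi += (observed_bad - expected_bad) ** 2 / expected_bad
        if expected_good > 0:
            chi += (observed_good - expected_good) ** 2 / expected_good
    return chi

def ChiMerge(df, col, target, max_interval=5):
    # one value per bin initially; merge the adjacent pair with the smallest
    # chi-square until only max_interval bins remain
    stats = df.groupby(col)[target].agg(total='count', bad='sum').reset_index()
    groups = [[v] for v in stats[col]]
    while len(groups) > max_interval:
        chis = []
        for i in range(len(groups) - 1):
            merged = stats[stats[col].isin(groups[i] + groups[i + 1])]
            chis.append(Chi2(merged, 'total', 'bad'))
        best = int(np.argmin(chis))
        groups[best] = groups[best] + groups.pop(best + 1)
    # report the upper edge of every bin except the last as the cut-off points
    return [max(g) for g in groups[:-1]]
```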

Continuous variables can be binned directly. Categorical variables fall into the following cases:
1. When a categorical variable has many distinct values (more than 5 here), first encode it by bad rate, add it to the list of continuous variables, and bin it with the continuous-variable method.
2. When it has few distinct values (5 or fewer here), there are two sub-cases:
(1) if every category contains both good and bad samples, no binning is needed;
(2) if some category contains only good or only bad samples, it must be merged with another.

Concretely:
Step 1: find which categorical variables have more than 5 distinct values.

```
more_value_features = []
less_value_features = []
# Step 1: find which categorical variables have more than 5 distinct values
for var in cat_features:
    valueCounts = len(set(trainData[var]))
    print(valueCounts)
    if valueCounts > 5:
        more_value_features.append(var)  # more than 5 values: bad-rate encode, then ChiMerge
    else:
        less_value_features.append(var)
```


Step 2: variables with 5 or fewer values. If every category contains both good and bad samples, no binning is needed; if some category contains only one class, merge it.

```
 merge_bin_dict = {}  #存放需要合并的变量,以及合并方法
 var_bin_list = []   #由于某个取值没有好或者坏样本而需要合并的变量
 for col in less_value_features:
     binBadRate = BinBadRate(trainData, col, 'y')[0]
     if min(binBadRate.values()) == 0 :  #由于某个取值没有坏样本而进行合并
         print '{} need to be combined due to 0 bad rate'.format(col)
         combine_bin = MergeBad0(trainData, col, 'y')
         merge_bin_dict[col] = combine_bin
         newVar = col + '_Bin'
         trainData[newVar] = trainData[col].map(combine_bin)
         var_bin_list.append(newVar)
     if max(binBadRate.values()) == 1:    #由于某个取值没有好样本而进行合并
         print '{} need to be combined due to 0 good rate'.format(col)
         combine_bin = MergeBad0(trainData, col, 'y',direction = 'good')
         merge_bin_dict[col] = combine_bin
         newVar = col + '_Bin'
         trainData[newVar] = trainData[col].map(combine_bin)
         var_bin_list.append(newVar)
 ```
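
BinBadRate and MergeBad0 are helpers from the accompanying script and are not shown in the original. A simplified sketch: the original's direction argument distinguishes all-good from all-bad categories, whereas this version handles both cases by sweeping the categories, sorted by bad rate, into bins that must each contain both classes.

```python
import pandas as pd

def BinBadRate(df, col, target):
    # per-value bad rate: returns ({value: bad_rate}, grouped frame)
    grouped = df.groupby(col)[target].agg(total='count', bad='sum').reset_index()
    grouped['bad_rate'] = grouped['bad'] / grouped['total']
    return dict(zip(grouped[col], grouped['bad_rate'])), grouped

def MergeBad0(df, col, target, direction='bad'):
    # sort categories by bad rate, then sweep them into bins, closing a bin
    # only once it contains both good and bad samples
    regroup = BinBadRate(df, col, target)[1].sort_values(by='bad_rate').reset_index(drop=True)
    combine_bin, group, bin_id = {}, [], 0
    bad_in_group = good_in_group = 0
    for _, row in regroup.iterrows():
        group.append(row[col])
        bad_in_group += row['bad']
        good_in_group += row['total'] - row['bad']
        if bad_in_group > 0 and good_in_group > 0:
            for v in group:
                combine_bin[v] = 'Bin {}'.format(bin_id)
            bin_id += 1
            group, bad_in_group, good_in_group = [], 0, 0
    for v in group:  # leftover single-class categories join the last bin
        combine_bin[v] = 'Bin {}'.format(max(bin_id - 1, 0))
    return combine_bin
```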


Step 3: variables with more than 5 values. Encode them by bad rate and move them into the continuous-variable list.

```
br_encoding_dict = {}  # variables encoded by bad rate, and their encodings
for col in more_value_features:
    br_encoding = BadRateEncoding(trainData, col, 'y')
    trainData[col + '_br_encoding'] = br_encoding['encoding']
    br_encoding_dict[col] = br_encoding['bad_rate']
    num_features.append(col + '_br_encoding')
```
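
BadRateEncoding is another helper from the accompanying script; a minimal sketch of what it plausibly does:

```python
import pandas as pd

def BadRateEncoding(df, col, target):
    # replace each category with its bad rate so the variable can be
    # treated as continuous and chi-square binned
    grouped = df.groupby(col)[target].agg(total='count', bad='sum')
    br_dict = (grouped['bad'] / grouped['total']).to_dict()
    return {'encoding': df[col].map(br_dict), 'bad_rate': br_dict}
```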


Step 4: apply ChiMerge binning to every variable in num_features. The maximum number of bins here is 5.

```
continous_merged_dict = {}
for col in num_features:
    max_interval = 5  # maximum number of bins after binning
    print("{} is in processing".format(col))
    if -1 not in set(trainData[col]):  # -1 is treated as a special value; without -1, all values join the binning
        cutOff = ChiMerge(trainData, col, 'y', max_interval=max_interval, special_attribute=[], minBinPcnt=0)
        trainData[col + '_Bin'] = trainData[col].map(lambda x: AssignBin(x, cutOff, special_attribute=[]))
        monotone = BadRateMonotone(trainData, col + '_Bin', 'y')  # check bad-rate monotonicity after binning
        while not monotone:
            # if monotonicity fails, reduce the number of bins
            max_interval -= 1
            cutOff = ChiMerge(trainData, col, 'y', max_interval=max_interval, special_attribute=[], minBinPcnt=0)
            trainData[col + '_Bin'] = trainData[col].map(lambda x: AssignBin(x, cutOff, special_attribute=[]))
            if max_interval == 2:
                # with only 2 bins, the bad rate is necessarily monotone
                break
            monotone = BadRateMonotone(trainData, col + '_Bin', 'y')
        newVar = col + '_Bin'
        trainData[newVar] = trainData[col].map(lambda x: AssignBin(x, cutOff, special_attribute=[]))
        var_bin_list.append(newVar)
    else:
        # if -1 is present, exclude it and bin the remaining values
        cutOff = ChiMerge(trainData, col, 'y', max_interval=max_interval, special_attribute=[-1], minBinPcnt=0)
        trainData[col + '_Bin'] = trainData[col].map(lambda x: AssignBin(x, cutOff, special_attribute=[-1]))
        monotone = BadRateMonotone(trainData, col + '_Bin', 'y', ['Bin -1'])
        while not monotone:
            # the bad rate of the -1 bin does not take part in the monotonicity check
            max_interval -= 1
            cutOff = ChiMerge(trainData, col, 'y', max_interval=max_interval, special_attribute=[-1], minBinPcnt=0)
            trainData[col + '_Bin'] = trainData[col].map(lambda x: AssignBin(x, cutOff, special_attribute=[-1]))
            if max_interval == 3:
                # counting the special bin, 3 - 1 = 2 regular bins are necessarily monotone
                break
            monotone = BadRateMonotone(trainData, col + '_Bin', 'y', ['Bin -1'])
        newVar = col + '_Bin'
        trainData[newVar] = trainData[col].map(lambda x: AssignBin(x, cutOff, special_attribute=[-1]))
        var_bin_list.append(newVar)
    continous_merged_dict[col] = cutOff
```
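
AssignBin and BadRateMonotone are likewise helpers from the accompanying script. Simplified sketches, assuming bin labels 'Bin 0', 'Bin 1', … with 'Bin -1' reserved for the special value (lexicographic sorting of the labels is fine for at most 10 regular bins, which holds under the 5-bin cap):

```python
import pandas as pd

def AssignBin(x, cutOffPoints, special_attribute=[]):
    # map a raw value to its bin label given the ChiMerge cut-off points
    if x in special_attribute:
        return 'Bin {}'.format(x)
    for i, cut in enumerate(cutOffPoints):
        if x <= cut:
            return 'Bin {}'.format(i)
    return 'Bin {}'.format(len(cutOffPoints))

def BadRateMonotone(df, sortByVar, target, special_attribute=[]):
    # True when the bad rate is monotone over the ordered bins,
    # excluding any special bins such as 'Bin -1'
    df2 = df[~df[sortByVar].isin(special_attribute)]
    rates = df2.groupby(sortByVar)[target].mean().sort_index().tolist()
    non_decreasing = all(a <= b for a, b in zip(rates, rates[1:]))
    non_increasing = all(a >= b for a, b in zip(rates, rates[1:]))
    return non_decreasing or non_increasing
```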


#### 4. WOE Encoding and IV
After the binning above, the variables fall into the following cases:
1. Categorical variables with fewer than 5 initial values that did not need merging.
2. Categorical variables with fewer than 5 initial values that were merged, where the merged variable needs no further merging.
3. Categorical variables with more than 5 initial values, bad-rate encoded and then chi-square binned.
4. Continuous variables, chi-square binned.

For each variable we now compute the WOE of each bin and the variable's IV:

```
# all_var: every binned/encoded variable, i.e. var_bin_list plus the
# categorical variables that needed no merging
WOE_dict = {}
IV_dict = {}
for var in all_var:
    woe_iv = CalcWOE(trainData, var, 'y')
    WOE_dict[var] = woe_iv['WOE']
    IV_dict[var] = woe_iv['IV']
```
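
CalcWOE is also a helper from the accompanying script. Sign conventions for WOE vary across references; the one assumed here is WOE_i = ln(pct_bad_i / pct_good_i), with IV = Σ_i (pct_bad_i − pct_good_i)·WOE_i, which could be sketched as:

```python
import numpy as np
import pandas as pd

def CalcWOE(df, col, target):
    # WOE per bin and the variable's total IV
    # assumed convention: WOE_i = ln(pct_bad_i / pct_good_i)
    grouped = df.groupby(col)[target].agg(total='count', bad='sum')
    grouped['good'] = grouped['total'] - grouped['bad']
    pct_bad = grouped['bad'] / grouped['bad'].sum()
    pct_good = grouped['good'] / grouped['good'].sum()
    woe = np.log(pct_bad / pct_good)
    iv = ((pct_bad - pct_good) * woe).sum()
    return {'WOE': woe.to_dict(), 'IV': iv}
```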


Sorting the variables by IV in descending order:

```
IV_dict_sorted = sorted(IV_dict.items(), key=lambda x: x[1], reverse=True)
IV_values = [i[1] for i in IV_dict_sorted]
IV_name = [i[0] for i in IV_dict_sorted]
plt.title('feature IV')
plt.bar(range(len(IV_values)), IV_values)
```


The resulting IV values are shown below:
![image.png](https://upload-images.jianshu.io/upload_images/2130650-53e20caeddc57164.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

#### 5. Variable Analysis
Both univariate and multivariate analysis are performed on the WOE-encoded values.
1. Keep variables with IV >= 0.01
2. Compare pairwise linear correlations; when the absolute correlation exceeds the threshold, drop the variable with the lower IV.
```
# keep variables with IV >= 0.01
high_IV = {k: v for k, v in IV_dict.items() if v >= 0.01}
high_IV_sorted = sorted(high_IV.items(), key=lambda x: x[1], reverse=True)
short_list = high_IV.keys()
short_list_2 = []
for var in short_list:
    newVar = var + '_WOE'
    trainData[newVar] = trainData[var].map(WOE_dict[var])
    short_list_2.append(newVar)

# compute the correlation matrix of the surviving variables and plot a heatmap
trainDataWOE = trainData[short_list_2]
f, ax = plt.subplots(figsize=(10, 8))
corr = trainDataWOE.corr()
sns.heatmap(corr, mask=np.zeros_like(corr, dtype=bool), cmap=sns.diverging_palette(220, 10, as_cmap=True), square=True, ax=ax)
f.savefig('sns_heatmap_high_IV.png')
```


Heatmap of the correlation matrix of the variables selected by IV:
![image.png](https://upload-images.jianshu.io/upload_images/2130650-45365bc505ef0bfe.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

Pairwise linear correlation check:
(1) sort the candidate variables by IV in descending order
(2) compute the linear correlation coefficient between the i-th and j-th variables
(3) when the coefficient exceeds the threshold, drop the variable with the lower IV
The threshold here is 0.7: an absolute correlation above 0.7 is treated as correlated. The code:

```
 deleted_index = []
 cnt_vars = len(high_IV_sorted)
 for i in range(cnt_vars):
     if i in deleted_index:
         continue
     x1 = high_IV_sorted[i][0]+"_WOE"
     for j in range(cnt_vars):
         if i == j or j in deleted_index:
             continue
         y1 = high_IV_sorted[j][0]+"_WOE"
         roh = np.corrcoef(trainData[x1],trainData[y1])[0,1]
         if abs(roh)>0.7:
             x1_IV = high_IV_sorted[i][1]
             y1_IV = high_IV_sorted[j][1]
             if x1_IV > y1_IV:
                 deleted_index.append(j)
             else:
                 deleted_index.append(i)multi_analysis_vars_1 = [high_IV_sorted[i][0]+"_WOE" for i in range(cnt_vars) if i not in deleted_index]
 ```


Multivariate analysis: VIF.
VIF should generally be below 10. Here max_VIF is 1.5094, so serious multicollinearity among the variables can be ruled out.

```
 X = np.matrix(trainData[multi_analysis_vars_1])
 VIF_list = [variance_inflation_factor(X, i) for i in range(X.shape[1])]
 max_VIF = max(VIF_list)
 print max_VIF
 ```
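
As a sanity check on what variance_inflation_factor reports: the VIF of variable k is 1/(1−R²_k), where R²_k comes from regressing that variable on all the others. A small self-contained illustration (with synthetic data, not the LendingClub variables):

```python
import numpy as np

def vif_manual(X, k):
    # regress column k on the remaining columns (plus an intercept);
    # VIF_k = 1 / (1 - R^2)
    y = X[:, k]
    others = np.delete(X, k, axis=1)
    A = np.column_stack([others, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + 0.1 * rng.normal(size=500)   # nearly collinear with x1
x3 = rng.normal(size=500)              # independent
X = np.column_stack([x1, x2, x3])
```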

#### 6. Logistic Regression Model
Requirements:
1. every variable is statistically significant
2. every coefficient is negative
Feed the variables that survived the multivariate analysis into the LR model:

```
 y = trainData['y']
 X = trainData[multi_analysis]
 X['intercept'] = [1]*X.shape[0]
 LR = sm.Logit(y, X).fit()
 summary = LR.summary()
 pvals = LR.pvalues
 pvals = pvals.to_dict()
 ```


Iteratively remove the variables with non-significant p-values:

```
varLargeP = {k: v for k, v in pvals.items() if v >= 0.1}
varLargeP = sorted(varLargeP.items(), key=lambda d: d[1], reverse=True)
while len(varLargeP) > 0 and len(multi_analysis) > 0:
    # in each iteration, drop the least significant variable, until
    # (1) all remaining variables are significant, or
    # (2) no features are left
    varMaxP = varLargeP[0][0]
    print(varMaxP)
    if varMaxP == 'intercept':
        print('the intercept is not significant!')
        break
    multi_analysis.remove(varMaxP)
    y = trainData['y']
    X = trainData[multi_analysis]
    X['intercept'] = [1] * X.shape[0]
    LR = sm.Logit(y, X).fit()
    pvals = LR.pvalues
    pvals = pvals.to_dict()
    varLargeP = {k: v for k, v in pvals.items() if v >= 0.1}
    varLargeP = sorted(varLargeP.items(), key=lambda d: d[1], reverse=True)
summary = LR.summary()
```


The logistic regression results:

```
                                         LLR p-value:                2.460e-280
 ========================================================================================================
                                            coef    std err          z      P>|z|      [0.025      0.975]
 --------------------------------------------------------------------------------------------------------
 zip_code_br_encoding_Bin_WOE            -0.9467      0.045    -21.258      0.000      -1.034      -0.859
 int_rate_clean_Bin_WOE                  -0.8742      0.055    -15.779      0.000      -0.983      -0.766
 annual_inc_Bin_WOE                      -0.7039      0.095     -7.383      0.000      -0.891      -0.517
 purpose_br_encoding_Bin_WOE             -0.8559      0.087     -9.785      0.000      -1.027      -0.684
 inq_last_6mths_Bin_WOE                  -0.7831      0.104     -7.537      0.000      -0.987      -0.579
 addr_state_br_encoding_Bin_WOE          -0.2423      0.121     -1.997      0.046      -0.480      -0.005
 limit_income_Bin_WOE                    -0.4409      0.134     -3.299      0.001      -0.703      -0.179
 mths_since_last_record_clean_Bin_WOE    -0.7616      0.141     -5.416      0.000      -1.037      -0.486
 total_acc_Bin_WOE                       -0.2963      0.173     -1.710      0.087      -0.636       0.043
 dti_Bin_WOE                             -0.7897      0.196     -4.021      0.000      -1.175      -0.405
 emp_length_clean_Bin_WOE                -0.7229      0.200     -3.611      0.000      -1.115      -0.331
 intercept                               -2.1014      0.027    -78.645      0.000      -2.154      -2.049
 ========================================================================================================
 ```


All p-values are significant and all coefficients are negative.
The AUC on the training set is about 0.74:

```
trainData['prob'] = LR.predict(X)
auc = roc_auc_score(trainData['y'], trainData['prob'])  # AUC on the training set
```


#### 7. Model Validation
Applying the same processing to the test set and scoring it with the model gives:

auc = 0.65
ks = 0.22

This shows the model has some predictive power and discriminatory ability.

```
testData['intercept'] = [1] * testData.shape[0]

# the column order in the prediction data must match the variable order of the LR model
# e.g. if "debt-to-income" comes before "loan purpose" in the training data,
# it must also come before "loan purpose" in the test data
testData2 = testData[list(LR.params.index)]
testData['prob'] = LR.predict(testData2)

# compute KS and AUC
auc = roc_auc_score(testData['y'], testData['prob'])
ks = KS(testData, 'prob', 'y')
```
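
The KS helper is from the accompanying script. The KS statistic is the maximum gap between the cumulative bad and cumulative good distributions as the threshold sweeps over the sorted scores; a minimal sketch:

```python
import pandas as pd

def KS(df, score, target):
    # sort by predicted probability (descending) and take the maximum gap
    # between the cumulative bad share and the cumulative good share
    total_bad = df[target].sum()
    total_good = len(df) - total_bad
    d = df.sort_values(by=score, ascending=False)
    cum_bad = d[target].cumsum() / total_bad
    cum_good = (1 - d[target]).cumsum() / total_good
    return (cum_bad - cum_good).abs().max()
```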


Compute the scores:

```
basePoint = 250
PDO = 200
testData['score'] = testData['prob'].map(lambda x: Prob2Score(x, basePoint, PDO))
testData = testData.sort_values(by='score')
```
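
Prob2Score is not shown in the original. A common mapping from default probability to score (the exact formula here is an assumption) starts from basePoint and adds PDO points for every doubling of the good:bad odds:

```python
import numpy as np

def Prob2Score(prob, basePoint, PDO):
    # score = basePoint + PDO / ln(2) * ln((1 - p) / p)
    # every PDO points, the odds of being good double; higher score = lower risk
    odds_good = (1 - prob) / prob
    return basePoint + PDO / np.log(2) * np.log(odds_good)
```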


The distribution of scores versus frequency, shown below, is approximately normal. Based on business needs and the acceptable risk levels, the score range can be cut into bands and the scorecard applied accordingly.
![image.png](https://upload-images.jianshu.io/upload_images/2130650-eed618aaaa7b5b2e.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)