Getting Python Ready

       One big reason Python is so popular is its community: there is a huge number of excellent libraries and modules. But some of these libraries depend on one another, so installing them all can be a fairly tedious process. Fortunately, there are always people who cannot stand that tedium and build automation tools to save everyone's time. Anaconda is one very good such installer.

1. Installing Anaconda

       This is a very complete Python distribution: the latest release ships with as many as 195 popular Python packages, including the scientific-computing packages we use all the time, such as numpy and scipy. With it, Mom no longer has to worry about me tearing my hair out installing one dependency after another; Anaconda in hand, and I'm home free. Download it from http://www.continuum.io/downloads. Builds are currently available for Python 2.7 and Python 3.5; grab the one that matches your Python version and operating system. The download is actually an sh installer script of roughly 280 MB. I downloaded the Linux build for Python 2.7.

Once the download finishes, run the following in a terminal (for the 2.7 build):

 

# bash Anaconda2-2.4.1-Linux-x86_64.sh


 

During installation you will be asked for an install path; just press Enter to accept the default.
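
Anaconda bundles numpy and scipy, so a quick sanity check after the installer finishes is to start the freshly installed interpreter (via its full path if it is not on your PATH yet; /home/grant/anaconda2/bin/python is just the example path used later in this post, adjust it to your own) and import them. This is only an illustrative check, not part of the installation itself:

import numpy
import scipy
# print the bundled versions to confirm the scientific stack is usable
print numpy.__version__, scipy.__version__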

 

2. Adding Python to the environment variables

If the Anaconda installer did not add the install location to your system PATH, add it manually after installation:

1. In a terminal, run $sudo gedit /etc/profile to open the profile file.

2. Append the following line at the end of the file: export PATH=/home/grant/anaconda2/bin:$PATH, replacing /home/grant/anaconda2/bin with your actual install path. Save the file.

 

3. Making the environment variables take effect

Method 1:

To make the changes to /etc/profile take effect immediately, run the following command:

# .  /etc/profile

Note: there is a space between . and /etc/profile.

Method 2:

The same effect can be achieved with the following command:

# source /etc/profile


Appendix: the source command in Linux

The source command:

The source command is also known as the dot command (a single dot, .). It is typically used to re-execute an initialization file that has just been modified, so the changes take effect immediately without having to log out and back in.

Usage:

source filename   or   . filename
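
After re-reading /etc/profile, you can confirm that the shell now resolves python to the Anaconda interpreter. A minimal, purely illustrative check is to start python and print which executable and version it reports:

import sys
# this should point into your anaconda2 install directory, e.g. .../anaconda2/bin/python
print sys.executable
print sys.version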

 

4. Installing scikit-learn

Run the following command in a terminal: conda install scikit-learn

Keep pressing Enter (or answering yes) and the installation will finish.

It really is that convenient.
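
Before moving on, a one-off check that the package is importable is enough; this is just an illustration:

import sklearn
print sklearn.__version__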


5. Testing scikit-learn

#!/usr/bin/env python
#-*- coding: utf-8 -*-

import sys
import os
import time
from sklearn import metrics
import numpy as np
import cPickle as pickle

reload(sys)
sys.setdefaultencoding('utf8')

# Multinomial Naive Bayes Classifier
def naive_bayes_classifier(train_x, train_y):
    from sklearn.naive_bayes import MultinomialNB
    model = MultinomialNB(alpha=0.01)
    model.fit(train_x, train_y)
    return model


# KNN Classifier
def knn_classifier(train_x, train_y):
    from sklearn.neighbors import KNeighborsClassifier
    model = KNeighborsClassifier()
    model.fit(train_x, train_y)
    return model


# Logistic Regression Classifier
def logistic_regression_classifier(train_x, train_y):
    from sklearn.linear_model import LogisticRegression
    model = LogisticRegression(penalty='l2')
    model.fit(train_x, train_y)
    return model


# Random Forest Classifier
def random_forest_classifier(train_x, train_y):
    from sklearn.ensemble import RandomForestClassifier
    model = RandomForestClassifier(n_estimators=8)
    model.fit(train_x, train_y)
    return model


# Decision Tree Classifier
def decision_tree_classifier(train_x, train_y):
    from sklearn import tree
    model = tree.DecisionTreeClassifier()
    model.fit(train_x, train_y)
    return model


# GBDT(Gradient Boosting Decision Tree) Classifier
def gradient_boosting_classifier(train_x, train_y):
    from sklearn.ensemble import GradientBoostingClassifier
    model = GradientBoostingClassifier(n_estimators=200)
    model.fit(train_x, train_y)
    return model


# SVM Classifier
def svm_classifier(train_x, train_y):
    from sklearn.svm import SVC
    model = SVC(kernel='rbf', probability=True)
    model.fit(train_x, train_y)
    return model


# SVM Classifier using cross validation (grid search over C and gamma)
def svm_cross_validation(train_x, train_y):
    from sklearn.grid_search import GridSearchCV
    from sklearn.svm import SVC
    model = SVC(kernel='rbf', probability=True)
    param_grid = {'C': [1e-3, 1e-2, 1e-1, 1, 10, 100, 1000], 'gamma': [0.001, 0.0001]}
    grid_search = GridSearchCV(model, param_grid, n_jobs=1, verbose=1)
    grid_search.fit(train_x, train_y)
    best_parameters = grid_search.best_estimator_.get_params()
    for para, val in best_parameters.items():
        print para, val
    # refit with the best C and gamma found by the grid search
    model = SVC(kernel='rbf', C=best_parameters['C'], gamma=best_parameters['gamma'], probability=True)
    model.fit(train_x, train_y)
    return model


# load the gzipped MNIST pickle and return the train and test splits
def read_data(data_file):
    import gzip
    f = gzip.open(data_file, "rb")
    train, val, test = pickle.load(f)
    f.close()
    train_x = train[0]
    train_y = train[1]
    test_x = test[0]
    test_y = test[1]
    return train_x, train_y, test_x, test_y


if __name__ == '__main__':
    data_file = "mnist.pkl.gz"
    thresh = 0.5
    model_save_file = None
    model_save = {}

    test_classifiers = ['NB', 'KNN', 'LR', 'RF', 'DT', 'SVM', 'GBDT']
    classifiers = {'NB': naive_bayes_classifier,
                   'KNN': knn_classifier,
                   'LR': logistic_regression_classifier,
                   'RF': random_forest_classifier,
                   'DT': decision_tree_classifier,
                   'SVM': svm_classifier,
                   'SVMCV': svm_cross_validation,
                   'GBDT': gradient_boosting_classifier
                   }

    print 'reading training and testing data...'
    train_x, train_y, test_x, test_y = read_data(data_file)
    num_train, num_feat = train_x.shape
    num_test, num_feat = test_x.shape
    is_binary_class = (len(np.unique(train_y)) == 2)
    print '******************** Data Info *********************'
    print '#training data: %d, #testing_data: %d, dimension: %d' % (num_train, num_test, num_feat)

    # train each classifier, time the training, and report accuracy on the test set
    for classifier in test_classifiers:
        print '******************* %s ********************' % classifier
        start_time = time.time()
        model = classifiers[classifier](train_x, train_y)
        print 'training took %fs!' % (time.time() - start_time)
        predict = model.predict(test_x)
        if model_save_file != None:
            model_save[classifier] = model
        if is_binary_class:
            precision = metrics.precision_score(test_y, predict)
            recall = metrics.recall_score(test_y, predict)
            print 'precision: %.2f%%, recall: %.2f%%' % (100 * precision, 100 * recall)
        accuracy = metrics.accuracy_score(test_y, predict)
        print 'accuracy: %.2f%%' % (100 * accuracy)

    if model_save_file != None:
        pickle.dump(model_save, open(model_save_file, 'wb'))


 

 

The classifiers tested are: Naive Bayes (NB), K-Nearest Neighbors (KNN), Logistic Regression (LR), Random Forest (RF), Decision Tree (DT), SVM, SVM with grid-searched parameters (SVMCV), and GBDT (Gradient Boosting Decision Tree), i.e. the entries of the classifiers dictionary in the script above.

Dataset:

The experiment uses the MNIST handwritten-digit dataset: http://deeplearning.net/data/mnist/mnist.pkl.gz, with 50,000 training samples and 10,000 test samples.
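
If you would rather fetch the dataset from Python than from a browser, a minimal sketch in the same Python 2 style as the test script (the target filename mnist.pkl.gz matches what the script expects) is:

import urllib
# download the gzipped pickle into the current directory
urllib.urlretrieve("http://deeplearning.net/data/mnist/mnist.pkl.gz", "mnist.pkl.gz")

Put mnist.pkl.gz in the same directory as the test script and run the script with the Anaconda python.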

 

The final results are as follows:

(Screenshot of the terminal output on Ubuntu 14.04.)