All of the code and data in this article are available for download.

Scikit-Learn: Light Version

scikit-learn ships with a built-in logistic regression, which is straightforward for small-scale applications; the following code is usually all that's needed:

from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
predictions = classifier.predict(X_test)

If you want to compute the probabilities manually from the LR coefficients and intercept, you can do the following:

import numpy as np

def softmax(x):
    e_x = np.exp(x - np.max(x))  # subtract the max to keep exp() from overflowing
    return e_x / e_x.sum(axis=0)

pred = [np.argmax(softmax(np.dot(classifier.coef_, X_test[i, :]) + classifier.intercept_))
        for i in range(len(X_test))]
print(np.sum(pred != predictions))  # check for any mismatch with predict()
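
As a quick sanity check on the manual computation, the result can also be compared with sklearn's predict_proba(). One caveat, stated here as an assumption rather than something the code above guarantees: the two match exactly only when the model was fit in multinomial mode (multi_class='multinomial'); under the one-vs-rest scheme, predict_proba() normalizes per-class sigmoid scores rather than taking a softmax, although the argmax (and hence predict()) agrees either way.

# Assumes multi_class='multinomial', so predict_proba() is exactly the
# softmax of the decision function computed above.
proba_manual = np.array([softmax(np.dot(classifier.coef_, X_test[i, :]) + classifier.intercept_)
                         for i in range(len(X_test))])
print(np.abs(proba_manual - classifier.predict_proba(X_test)).max())  # should be ~0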

The complete code, LR_sklearn_light.py, is available for download.

Scikit-Learn: Pro Version

When the data gets large, say on the order of 10 million rows with 1,000 feature dimensions, training with LogisticRegression() directly becomes slow, and it requires loading all of the data into memory up front, which can cause out-of-memory problems. The fix is to train in mini-batches: read one batch at a time and take a training step on it immediately.
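
One wrinkle worth flagging before the code: sklearn's LogisticRegression has no partial_fit() method, so the classifier used below needs to be one that supports incremental learning, such as SGDClassifier with logistic loss, which fits the same model by stochastic gradient descent:

from sklearn.linear_model import SGDClassifier

# Logistic regression fit incrementally by SGD; supports partial_fit().
# The loss is named 'log' in older scikit-learn releases, 'log_loss' from 1.1 on.
classifier = SGDClassifier(loss='log')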

Data pulled from HDFS usually arrives as multiple shards; we'll use the most common format, CSV files, as the example. First, use the glob package to build a list of all the data file names. The training files are named like train_part0.csv, and * matches any sequence of characters.

filenames = sorted(glob.glob("./TrainData/train*"))

Then iterate over the files, reading a chunk of chunksize rows at a time as one batch. For each batch, call the classifier's partial_fit() method to take one gradient-descent step on that batch, until all the data has been used or the configured maximum number of training steps is reached.

import glob
import time
import numpy as np
import pandas as pd
from sklearn import metrics

filenames = sorted(glob.glob("./TrainData/train*"))
MaxIterNum = 100  # maximum number of mini-batch training steps
count = 0
for c, filename in enumerate(filenames):
    # Read each shard in chunks of 10 rows; every chunk is one mini-batch
    TrainDF = pd.read_csv(filename, header=None, chunksize=10)
    for Batch in TrainDF:
        count += 1
        print(count)
        y_train = np.array(Batch.iloc[:, 0])   # first column is the label
        X_train = np.array(Batch.iloc[:, 1:])  # remaining columns are features
        st1 = time.time()
        # classes= must list all labels, since one batch may not contain every class
        classifier.partial_fit(X_train, y_train, classes=np.array([0, 1, 2]))
        ed1 = time.time()
        st2 = time.time()
        predictions = classifier.predict(X_train)
        acc = metrics.accuracy_score(y_train, predictions)
        ed2 = time.time()
        print(ed1 - st1, ed2 - st2, acc)
        if count == MaxIterNum:
            break
    if count == MaxIterNum:
        break
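
A practical caveat when training this way, offered as general advice rather than something the downloadable code does: SGD-based fitting is sensitive to feature scale, and StandardScaler also supports partial_fit(), so each batch can be standardized incrementally before it reaches the classifier. A minimal sketch:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
# Per batch: update the running mean/variance, then standardize the batch
# before passing it to classifier.partial_fit().
scaler.partial_fit(X_train)
X_train_scaled = scaler.transform(X_train)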

The test data can be read in batches the same way, then concatenated and evaluated in one go, calling sklearn's accuracy_score() for the accuracy and confusion_matrix() for the confusion matrix.

from sklearn.metrics import accuracy_score, confusion_matrix

filenames = sorted(glob.glob("./TestData/test*"))
MaxIterNum = 100
ChunkSize = 10
# Preallocate enough rows for MaxIterNum batches of ChunkSize rows each;
# the unused tail is trimmed off afterwards
X_test = np.zeros([MaxIterNum * ChunkSize, np.shape(X_train)[1]])
y_test = np.zeros(MaxIterNum * ChunkSize)
TestSampleNum = 0
count = 0
for c, filename in enumerate(filenames):
    TestDF = pd.read_csv(filename, header=None, chunksize=ChunkSize)
    for Batch in TestDF:
        count += 1
        print(count)
        BatchLen = np.shape(Batch)[0]
        y_test[TestSampleNum:TestSampleNum + BatchLen] = np.array(Batch.iloc[:, 0])
        X_test[TestSampleNum:TestSampleNum + BatchLen, :] = np.array(Batch.iloc[:, 1:])
        TestSampleNum = TestSampleNum + BatchLen
        if count == MaxIterNum:
            break
    if count == MaxIterNum:
        break
X_test = X_test[0:TestSampleNum, :]  # trim the unused preallocated rows
y_test = y_test[0:TestSampleNum]
st1 = time.time()
predictions = classifier.predict(X_test)
acc = accuracy_score(y_test, predictions)
ed1 = time.time()
print(ed1 - st1, acc)
A = confusion_matrix(y_test, predictions)
print(A)

The complete code, LR_sklearn_pro.py, is available for download.

TensorFlow

TensorFlow is mainly used when the data volume is large. The first step is to build an input pipeline so that reading the data runs in parallel with training. Here we set up separate reading pipelines for the training and test sets; note that the training set's numEpochs differs from the test set's, which allows the training data to be passed over multiple times:

def readMyFileFormat(fileNameQueue):
    reader = tf.TextLineReader()
    key, value = reader.read(fileNameQueue)
    # One integer label column followed by four float feature columns
    record_defaults = [[0]] + [[0.0]] * 4
    user = tf.decode_csv(value, record_defaults=record_defaults)
    userlabel = user[0]
    userlabel01 = tf.cast(tf.one_hot(userlabel, ClassNum, 1, 0), tf.float32)
    userfeature = tf.stack(user[1:])  # stack the scalar columns into one feature vector
    return userlabel01, userfeature

def inputPipeLine_batch(fileNames, batchSize, numEpochs=None):
    fileNameQueue = tf.train.string_input_producer(fileNames, num_epochs=numEpochs, shuffle=False)
    example = readMyFileFormat(fileNameQueue)
    min_after_dequeue = 10
    capacity = min_after_dequeue + 3 * batch_size_train
    YBatch, XBatch = tf.train.batch(
        example, batch_size=batchSize,
        capacity=capacity)
    return YBatch, XBatch

filenames = tf.train.match_filenames_once(DataDir)
YBatch, XBatch = inputPipeLine_batch(filenames, batchSize=batch_size, numEpochs=20)
pfilenames = tf.train.match_filenames_once(pDataDir)
pYBatch, pXBatch = inputPipeLine_batch(pfilenames, batchSize=batch_size, numEpochs=1)
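
For what it's worth, the queue-based readers used above (tf.train.string_input_producer, tf.train.batch) belong to the TF1 API and were later deprecated in favor of tf.data. A rough equivalent under the same assumptions (one integer label column followed by four float feature columns) might look like this sketch; makeDataset and parse_line are illustrative names, not part of the downloadable code:

# Hypothetical tf.data version of the pipeline above.
def makeDataset(filePattern, batchSize, numEpochs=None):
    record_defaults = [[0]] + [[0.0]] * 4
    def parse_line(line):
        cols = tf.decode_csv(line, record_defaults=record_defaults)
        label = tf.cast(tf.one_hot(cols[0], ClassNum, 1, 0), tf.float32)
        feature = tf.stack(cols[1:])
        return label, feature
    files = tf.data.Dataset.list_files(filePattern, shuffle=False)
    # repeat(None) cycles indefinitely, matching numEpochs=None above
    return tf.data.TextLineDataset(files).map(parse_line) \
             .repeat(numEpochs).batch(batchSize)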

Then build the network:

# LR
X_LR = tf.placeholder(tf.float32, [None, FeatureSize])
Y_LR = tf.placeholder(tf.float32, [None, ClassNum])
W_LR = tf.Variable(tf.truncated_normal([FeatureSize, ClassNum], stddev=0.1), dtype=tf.float32)
bias_LR = tf.Variable(tf.constant(0.1, shape=[ClassNum]), dtype=tf.float32)
Ypred_LR = tf.matmul(X_LR, W_LR) + bias_LR  # logits
Ypred_prob = tf.nn.softmax(Ypred_LR)        # class probabilities
# Computing the cross entropy from the logits is numerically stabler than
# -tf.reduce_mean(Y_LR * tf.log(Ypred_prob)), which can hit log(0)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=Y_LR, logits=Ypred_LR))
optimizer = tf.train.AdamOptimizer(lr).minimize(cost)
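
For reference, the graph above is plain multinomial (softmax) logistic regression: writing $W$ for the weights and $b$ for the bias, the predicted distribution for an input $x$ and the average cross-entropy over a batch of $N$ examples with $K$ = ClassNum classes are

$$\hat{y} = \mathrm{softmax}(xW + b), \qquad J = -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K} y_{nk}\,\log \hat{y}_{nk}$$

which is exactly what softmax_cross_entropy_with_logits computes, in a numerically stable way, from the logits $xW + b$.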

Train the network by mini-batch gradient descent, with TrainBatchNum capping the number of training steps:

# Training
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())  # match_filenames_once/num_epochs use local variables
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)  # start the reader threads
try:
    for i in range(TrainBatchNum):
        print(i)
        y, x = sess.run([YBatch, XBatch], feed_dict={batch_size: batch_size_train})
        flag, c = sess.run([optimizer, cost], feed_dict={X_LR: x, Y_LR: y})
        print(c)
except tf.errors.OutOfRangeError:
    print('Done Train')

Read the test set in batches the same way, then stack the batches together and evaluate them in one pass:

# Testing
Y = np.array([0, 0, 0])     # dummy seed rows (ClassNum = 3), stripped after stacking
Pred = np.array([0, 0, 0])
try:
    i = 0
    while True:
        print(i)
        i = i + 1
        y, x = sess.run([pYBatch, pXBatch], feed_dict={batch_size: batch_size_test})
        pred = sess.run(Ypred_prob, feed_dict={X_LR: x, Y_LR: y})
        Pred = np.vstack([Pred, pred])
        Y = np.vstack([Y, y])
except tf.errors.OutOfRangeError:
    print('Done Test')
coord.request_stop()
coord.join(threads)
Y = Y[1:]       # drop the dummy seed rows
Pred = Pred[1:]
acc = accuracy_score(np.argmax(Y, axis=1), np.argmax(Pred, axis=1))
print(acc)
A = confusion_matrix(np.argmax(Y, axis=1), np.argmax(Pred, axis=1))
print(A)

The complete code, LR_tf.py, is available for download.