Runtime environment: Python 3.6.0
I. About the concurrent.futures module
Python's standard library gives us the threading and multiprocessing modules for writing multithreaded/multiprocess code. Once a project reaches a certain scale, however, frequently creating and destroying threads or processes becomes very expensive, and at that point we have to write our own thread pool/process pool, trading space for time. Starting with Python 3.2, the standard library provides the concurrent.futures module, whose two classes ThreadPoolExecutor and ProcessPoolExecutor are a further abstraction over threading and multiprocessing and give direct support for writing thread pools and process pools.
1. Executor and Future:
The foundation of the concurrent.futures module is Executor, an abstract class that cannot be used directly. Its two subclasses ThreadPoolExecutor and ProcessPoolExecutor are very useful, though; as their names suggest, they create thread pools and process pools respectively. We can drop tasks straight into the pool without maintaining a Queue or worrying about deadlocks; the pool schedules everything for us automatically.
The concept of a Future should be familiar to anyone who has programmed in Java or Node.js: you can think of it as an operation that will complete at some point in the future, and it is the foundation of asynchronous programming. In the traditional model, a call such as queue.get blocks while waiting for a result, and the CPU cannot be freed to do other work in the meantime; the introduction of Future lets us get other things done during that waiting period.
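As a minimal sketch of the idea (slow_square and the one-second sleep are made-up stand-ins for any blocking operation): submit returns a Future immediately, and result() blocks only at the point where you actually need the value.
from concurrent.futures import ThreadPoolExecutor
import time
def slow_square(x):
    time.sleep(1)  # stand-in for a blocking operation
    return x * x
executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(slow_square, 3)  # returns at once; the task runs in the background
print(future.done())    # False: the task is still running
print(future.result())  # blocks until the task finishes, then prints 9
print(future.done())    # True: the task has completed
executor.shutdown()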
p.s.: If you are still holding on to Python 2.x, install the futures backport first.
pip install futures
II. Working with thread pools / process pools
1. Using submit with a thread pool / process pool:
# Thread pool:
from concurrent.futures import ThreadPoolExecutor
import urllib.request
URLS = ['http://www.163.com', 'https://www.baidu.com/', 'https://github.com/']
def load_url(url):
    with urllib.request.urlopen(url, timeout=60) as conn:
        print('%r page is %d bytes' % (url, len(conn.read())))
executor = ThreadPoolExecutor(max_workers=3)
for url in URLS:
    future = executor.submit(load_url, url)
    print(future.done())
print('main thread')
# Output:
"""
False
False
False
main thread
'https://www.baidu.com/' page is 227 bytes
'http://www.163.com' page is 696441 bytes
'https://github.com/' page is 86916 bytes
"""
Let's analyse the output. We use the submit method to add a task to the thread pool; submit returns a Future object, which can be understood simply as an operation that will complete in the future. Because the pool accepts the task asynchronously, the main thread does not wait for the worker threads to finish, so print('main thread') executes while the submitted tasks are still running, and future.done() therefore returns False.
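If you do need to block until a task finishes, call result() on its Future; once result() returns, done() reports True. A minimal sketch, reusing load_url and executor from the code above:
future = executor.submit(load_url, URLS[0])
future.result()       # blocks until load_url finishes (its return value, None here, is discarded)
print(future.done())  # True: the task has completed by this point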
# Process pool: same as above
from concurrent.futures import ProcessPoolExecutor
import urllib.request
URLS = ['http://www.163.com', 'https://www.baidu.com/', 'https://github.com/']
def load_url(url):
    with urllib.request.urlopen(url, timeout=60) as conn:
        print('%r page is %d bytes' % (url, len(conn.read())))
if __name__ == '__main__':  # the __main__ guard is required for a process pool
    executor = ProcessPoolExecutor(max_workers=3)
    for url in URLS:
        future = executor.submit(load_url, url)
        print(future.done())
    print('main thread')
# Output:
"""
False  # the child processes have only been created; the tasks have not finished yet
False
False
main thread  # once the children are created the main process carries on; it does not wait for them to finish
'http://www.163.com' page is 696441 bytes
'https://www.baidu.com/' page is 227 bytes
'https://github.com/' page is 86916 bytes
"""
2. Using map with a thread pool / process pool:
Besides submit, Executor also provides the map method, which works much like the built-in map:
from concurrent.futures import ThreadPoolExecutor
import urllib.request
URLS = ['http://www.163.com', 'https://www.baidu.com/', 'https://github.com/']
def load_url(url):
    with urllib.request.urlopen(url, timeout=60) as conn:
        print('%r page is %d bytes' % (url, len(conn.read())))
executor = ThreadPoolExecutor(max_workers=3)
executor.map(load_url, URLS)
print('main thread')
# Output:
"""
main thread
'https://www.baidu.com/' page is 227 bytes
'http://www.163.com' page is 696411 bytes
'https://github.com/' page is 86916 bytes
"""
The output shows that map yields its results in the order of the elements of URLS (the printed lines above come from inside the workers, so they appear in completion order; it is the iterator returned by map that preserves input order). The map version is also more concise and readable, so pick whichever form suits your needs.
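A minimal sketch that makes the ordering visible, with load_url changed to return the page size instead of printing it:
def load_url(url):
    with urllib.request.urlopen(url, timeout=60) as conn:
        return len(conn.read())
executor = ThreadPoolExecutor(max_workers=3)
for url, size in zip(URLS, executor.map(load_url, URLS)):
    print('%r page is %d bytes' % (url, size))  # always printed in URLS order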
3. wait:
The wait method returns a named tuple containing two sets: one of completed futures and one of uncompleted ones. One advantage of wait is the extra control it offers: its return_when parameter accepts three values, FIRST_COMPLETED, FIRST_EXCEPTION and ALL_COMPLETED, and defaults to ALL_COMPLETED.
With the default ALL_COMPLETED, the call blocks until every task in the pool has finished, and only then does the main thread continue:
from concurrent.futures import ThreadPoolExecutor, wait, as_completed
import urllib.request
URLS = ['http://www.163.com', 'https://www.baidu.com/', 'https://github.com/']
def load_url(url):
    with urllib.request.urlopen(url, timeout=60) as conn:
        print('%r page is %d bytes' % (url, len(conn.read())))
executor = ThreadPoolExecutor(max_workers=3)
f_list = []
for url in URLS:
    future = executor.submit(load_url, url)
    f_list.append(future)
print(wait(f_list))
print('main thread')
# Output:
"""
'http://www.163.com' page is 696411 bytes
'https://github.com/' page is 86916 bytes
'https://www.baidu.com/' page is 227 bytes
DoneAndNotDoneFutures(done={<Future at 0x1d8dca46e10 state=finished returned NoneType>, <Future at 0x1d8dc8a8ba8 state=finished returned NoneType>, <Future at 0x1d8dca46358 state=finished returned NoneType>}, not_done=set())
main thread
"""
With FIRST_COMPLETED, the call returns as soon as any one task has finished, without waiting for the rest of the pool:
from concurrent.futures import ThreadPoolExecutor, wait, as_completed, FIRST_COMPLETED
import urllib.request
URLS = ['http://www.163.com', 'https://www.baidu.com/', 'https://github.com/']
def load_url(url):
    with urllib.request.urlopen(url, timeout=60) as conn:
        print('%r page is %d bytes' % (url, len(conn.read())))
executor = ThreadPoolExecutor(max_workers=3)
f_list = []
for url in URLS:
    future = executor.submit(load_url, url)
    f_list.append(future)
print(wait(f_list, return_when=FIRST_COMPLETED))
print('main thread')
# Output:
"""
'http://www.163.com' page is 696411 bytes
DoneAndNotDoneFutures(done={<Future at 0x21403f58cc0 state=finished returned NoneType>}, not_done={<Future at 0x214040f3e10 state=running>, <Future at 0x214040f33c8 state=running>})
main thread
'https://www.baidu.com/' page is 227 bytes
'https://github.com/' page is 86916 bytes
"""
Putting the pool to work:
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import requests
import os

def get_page(url):
    print('<%s> is getting [%s]' % (os.getpid(), url))
    response = requests.get(url)
    if response.status_code == 200:  # status code 200 means the download succeeded
        return {'url': url, 'text': response.text}

def parse_page(res):
    res = res.result()  # the callback receives a Future object, so call result() first to unwrap it
    print('<%s> is parsing [%s]' % (os.getpid(), res['url']))
    with open('db.txt', 'a') as f:
        parse_res = 'url:%s size:%s\n' % (res['url'], len(res['text']))
        f.write(parse_res)

if __name__ == '__main__':
    # p = ThreadPoolExecutor()
    p = ProcessPoolExecutor()
    l = [
        'http://www.baidu.com',
        'http://www.baidu.com',
        'http://www.baidu.com',
        'http://www.baidu.com',
    ]
    for url in l:
        # add_done_callback attaches parse_page to each Future; whichever task
        # finishes first has its callback fired first. Callbacks work the same
        # way with thread pools and with process pools.
        p.submit(get_page, url).add_done_callback(parse_page)
    p.shutdown()  # like close() plus join() on a multiprocessing Pool
    print('main', os.getpid())
# Output:
"""
<5956> is getting [http://www.baidu.com]
<2792> is getting [http://www.baidu.com]
<5956> is getting [http://www.baidu.com]
<8216> is parsing [http://www.baidu.com]
<7032> is getting [http://www.baidu.com]
<8216> is parsing [http://www.baidu.com]
<8216> is parsing [http://www.baidu.com]
<8216> is parsing [http://www.baidu.com]
main 8216
"""