After a week of learning Python, I tried writing a crawler to scrape entries from 360 Baike (baike.so.com). Along the way, one tiny modification broke the program. I spent several days digging into it and asked experts of all stripes, and none of them could solve it; in the end I cracked it myself. The root cause was simply an insufficiently deep understanding of Python lists (mutable objects). The bug is quite instructive, so I'm sharing it here.
First, a look at the final crawl output (an HTML table of entry titles and summaries):
Now to the main topic. First, the file structure. There are five modules:
- spider_main.py: the entry point
- url_manager.py: the URL manager, which tracks URLs that still need crawling and URLs that have already been crawled
- html_downloader.py: the downloader, which fetches the page for a given URL
- html_parser.py: the parser, which extracts the list of new URLs and the current entry's data from a page
- html_outputer.py: the outputer, which writes the crawled entry titles and summaries out as an HTML table
This program uses Python 3.4.4, the latest release at the time. Libraries used:
- urllib from the standard library
- the third-party BeautifulSoup (download and install it yourself)
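BeautifulSoup 4 can be installed with `pip install beautifulsoup4`. A quick sanity check that both libraries import cleanly (a minimal sketch; the version string will vary with your install):

```python
# Confirm that urllib (standard library) and BeautifulSoup 4 are importable.
import urllib.request
import bs4

print(bs4.__version__)  # e.g. '4.4.1', depending on what you installed
```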
1. spider_main.py

This is the entry module. Every method name means what it says; see the corresponding class below for details.
```python
from baike360_spider import url_manager, html_downloader, html_parser, html_outputer


class SpiderMain(object):
    def __init__(self):
        self.urls = url_manager.UrlManager()
        self.downloader = html_downloader.HtmlDownloader()
        self.parser = html_parser.HtmlParser()
        self.outputer = html_outputer.HtmlOutputer()

    def craw(self):
        count = 1
        self.urls.add_new_url(root_url)  # root_url is defined at module level below
        while self.urls.has_new_url():
            try:
                new_url = self.urls.get_new_url()
                print('craw %d: %s' % (count, new_url))
                html_cont = self.downloader.download(new_url)
                new_urls, new_data = self.parser.parse(new_url, html_cont)
                self.urls.add_new_urls(new_urls)
                self.outputer.collect_data(new_data)
                if count == 5:  # stop after five entries
                    break
                count += 1
            except Exception as e:
                print('craw failed')
                print(e)
        self.outputer.output_html()


if __name__ == '__main__':
    root_url = 'http://baike.so.com/doc/1790119-1892991.html'
    obj_spider = SpiderMain()
    obj_spider.craw()
```
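Given the `from baike360_spider import ...` line, I assume the five files sit in a package directory named `baike360_spider` (with an `__init__.py`); under that assumption, you would start the crawler from the parent directory with `python -m baike360_spider.spider_main`.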
2. url_manager.py

The manager's job is to handle new URLs that still need crawling and URLs that have already been crawled, so the constructor initializes two sets. There are four corresponding methods, all simple enough to understand at a glance.

```python
class UrlManager(object):
    def __init__(self):
        self.new_urls = set()  # URLs waiting to be crawled
        self.old_urls = set()  # URLs already crawled

    def add_new_url(self, url):
        if url is None:
            return
        # Skip URLs we have already queued or crawled.
        if url not in self.new_urls and url not in self.old_urls:
            self.new_urls.add(url)

    def add_new_urls(self, urls):
        if urls is None or len(urls) == 0:
            return
        for url in urls:
            self.add_new_url(url)

    def has_new_url(self):
        return len(self.new_urls) != 0

    def get_new_url(self):
        # Pop an arbitrary queued URL and mark it as crawled.
        new_url = self.new_urls.pop()
        self.old_urls.add(new_url)
        return new_url
```
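To see the deduplication in action, here is a standalone usage sketch (it assumes the package layout described above; the URLs are made up for illustration):

```python
from baike360_spider.url_manager import UrlManager

manager = UrlManager()
manager.add_new_urls([
    'http://baike.so.com/doc/1.html',
    'http://baike.so.com/doc/1.html',  # duplicate: silently ignored
    'http://baike.so.com/doc/2.html',
])
url = manager.get_new_url()   # pops one URL and records it in old_urls
manager.add_new_url(url)      # already crawled, so it is not re-queued
print(manager.has_new_url())  # True: exactly one URL is still waiting
```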
3. html_downloader.py
```python
import urllib.request


class HtmlDownloader(object):
    def download(self, url):
        if url is None:
            return
        res = urllib.request.urlopen(url)
        if res.getcode() != 200:  # only accept successful responses
            return
        return res.read()  # raw bytes; BeautifulSoup handles the decoding
```
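Some sites reject urllib's default user agent. If baike.so.com ever starts doing that, a slightly hardened variant would look like the sketch below (an assumption on my part, not part of the original code; the header value and timeout are arbitrary choices):

```python
import urllib.request


def download(url, timeout=10):
    if url is None:
        return None
    # Present a browser-like User-Agent; some servers block the urllib default.
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    res = urllib.request.urlopen(req, timeout=timeout)
    if res.getcode() != 200:
        return None
    return res.read()
```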
4. html_parser.py
```python
import re
from urllib.parse import urljoin

from bs4 import BeautifulSoup


class HtmlParser(object):
    def __init__(self):
        self.new_urls = set()
        # self.res_data = dict()

    def _get_new_urls(self, page_url, soup):
        # new_urls = set()
        # Entry links look like /doc/5912108-6125016.html or /doc/3745498.html
        links = soup.find_all('a', href=re.compile(r'/doc/[\d-]+\.html'))
        for link in links:
            new_url = link['href']
            new_full_url = urljoin(page_url, new_url)
            # print(new_full_url)
            self.new_urls.add(new_full_url)
        return self.new_urls

    def _get_new_data(self, page_url, soup):
        res_data = dict()
        res_data['url'] = page_url
        # <span class="title">Python</span>
        title_node = soup.find('span', class_='title')
        res_data['title'] = title_node.get_text()
        # <div class="card_content" id="js-card-content">
        summary_node = soup.find('div', class_='card_content').find('p')
        res_data['summary'] = summary_node.get_text()
        # print("dd: ", self.res_data)
        return res_data

    def parse(self, page_url, html_cont):
        if page_url is None or html_cont is None:
            return
        soup = BeautifulSoup(html_cont, 'html.parser', from_encoding='utf-8')
        new_urls = self._get_new_urls(page_url, soup)
        new_data = self._get_new_data(page_url, soup)
        return new_urls, new_data
```
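The links in the page are site-relative (`/doc/...`), so `urljoin` resolves them against the page's own URL. A standalone illustration:

```python
from urllib.parse import urljoin

page_url = 'http://baike.so.com/doc/1790119-1892991.html'
# Site-relative hrefs replace the path of the page URL:
print(urljoin(page_url, '/doc/5912108-6125016.html'))
# -> http://baike.so.com/doc/5912108-6125016.html
print(urljoin(page_url, '/doc/3745498.html'))
# -> http://baike.so.com/doc/3745498.html
```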
5. html_outputer.py
```python
class HtmlOutputer(object):
    def __init__(self):
        self.datas = []  # one dict per crawled entry

    def collect_data(self, data):
        if data is None:
            return
        self.datas.append(data)

    def output_html(self):
        fout = open('output.html', 'w', encoding='utf-8')
        fout.write('<html>')
        fout.write('<head><meta http-equiv="content-type" content="text/html;charset=utf-8"></head>')
        fout.write('<body>')
        fout.write('<table border=1 cellspacing=0>')
        for data in self.datas:
            fout.write('<tr>')
            fout.write('<td><a href="%s">%s</a></td>' % (data['url'], data['title']))
            fout.write('<td>%s</td>' % data['summary'])
            fout.write('</tr>')
        fout.write('</table>')
        fout.write('</body>')
        fout.write('</html>')
        fout.close()
```
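One thing worth noting: the title and summary are interpolated into the HTML unescaped, so an entry containing `<` or `&` could garble the table. A possible improvement, sketched here with the standard library's `html.escape` (not in the original code; the sample dict is made up):

```python
import html

data = {'url': 'http://baike.so.com/doc/1790119-1892991.html',
        'title': 'Python',
        'summary': 'A summary with <tags> & ampersands'}

# Escape user-derived text before interpolating it into markup.
row = '<tr><td><a href="%s">%s</a></td><td>%s</td></tr>' % (
    html.escape(data['url'], quote=True),
    html.escape(data['title']),
    html.escape(data['summary']),
)
print(row)  # the <, > and & in the summary come out as HTML entities
```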
6. The crucial part: the pit I fell into!!!
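The intro says the bug boiled down to a shallow understanding of Python lists (really, of mutable objects in general), and the commented-out `self.res_data = dict()` in html_parser.py points at the classic trap: a small change that moves `res_data` from a local variable onto the instance makes `parse()` return the same dict object on every call. The sketch below is my hedged reconstruction from those clues:

```python
# Sketch of the aliasing pitfall suggested by the commented-out self.res_data
# in html_parser.py. If parse() fills and returns the SAME dict every time,
# collect_data() appends several references to one object, and each mutation
# retroactively changes every "row" already collected.

class BuggyParser(object):
    def __init__(self):
        self.res_data = dict()  # one dict shared across all parses

    def parse(self, title):
        self.res_data['title'] = title  # mutates the shared dict in place
        return self.res_data            # always the same object


parser = BuggyParser()
datas = []
for title in ['Python', 'Java', 'C++']:
    datas.append(parser.parse(title))  # three references, one object

print([d['title'] for d in datas])  # ['C++', 'C++', 'C++']: every row identical
# The fix, as in the final html_parser.py above: create a fresh dict inside
# the method (res_data = dict()) so each call returns an independent object.
```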