Crawl Dangdang (dangdang.com) with Scrapy: enter a search keyword (e.g. python, C++, java) and the number of result pages to query, scrape each book's title, author, price, comment count and other fields, download the matching cover images, and draw a horizontal bar chart to show the most popular books at a glance.

Covered:

1. Basic usage of Scrapy

2. Submitting form data with scrapy.FormRequest()

3. Saving the data to MongoDB and writing it to an .xlsx spreadsheet

4. Setting the Referer header to get around anti-crawling checks

5. Downloading images with ImagesPipeline

6. Picking the 10 books with the most comments and drawing a horizontal bar chart

 

Full source code:

entrypoint.py

from scrapy.cmdline import execute

execute(["scrapy","crawl","dangdang"])

items.py

import scrapy


class DangdangSpiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # book title
    book_name = scrapy.Field()
    # author
    author = scrapy.Field()
    # publisher
    publisher = scrapy.Field()
    # price
    price = scrapy.Field()
    # number of comments
    comments_num = scrapy.Field()
    # cover image URLs
    image_url = scrapy.Field()
    # search keyword
    book_key = scrapy.Field()

dangdang.py

# -*- coding: utf-8 -*-
import scrapy
from lxml import etree
from DangDang_Spider.items import DangdangSpiderItem
class DangdangSpider(scrapy.Spider):
    name = 'dangdang'
    allowed_domains = ['dangdang.com']
    # Dangdang search endpoint; start_requests() builds every request itself,
    # so a single URL string is enough here
    start_urls = 'http://search.dangdang.com/'

    total_comments_num_list=[]
    total_book_name_list=[]
    # send the search requests; paging only changes the page_index parameter
    def start_requests(self):
        self.key = input("请输入查询的书籍:")
        pages = input("请输入希望查询的总页数:")
        # keep prompting until the input is an integer between 1 and 100
        while not pages.isdigit() or not 1 <= int(pages) <= 100:
            pages = input("输入错误,请输入1-100之间的整数:")
        form_data = {
            'key': self.key,
            'act': 'input',
            'page_index': '1'
        }
        for i in range(int(pages)):
            form_data['page_index'] = str(i + 1)
            # scrapy.FormRequest carries the form data; its default method is POST,
            # but the Dangdang search takes query-string parameters, so GET is used here
            yield scrapy.FormRequest(self.start_urls, formdata=form_data, method='GET', callback=self.parse)

    # extract the fields with XPath
    def parse(self, response):
        xml = etree.HTML(response.text)
        book_name_list = xml.xpath('//div[@id="search_nature_rg"]/ul//li/a/@title')
        author_list = xml.xpath('//div[@id="search_nature_rg"]/ul//li/p[@class="search_book_author"]/span[1]/a/@title')
        publisher_list = xml.xpath('//div[@id="search_nature_rg"]/ul//li/p[@class="search_book_author"]/span[3]/a/@title')
        price_list = xml.xpath('//div[@id="search_nature_rg"]/ul//li/p[@class="price"]/span[1]/text()')
        comments_num_list = xml.xpath('//div[@id="search_nature_rg"]/ul//li/p[@class="search_star_line"]/a/text()')
        image_url_list = xml.xpath('//div[@id="search_nature_rg"]/ul//li/a/img/@data-original')
        # one item per result page, holding the parallel field lists
        item = DangdangSpiderItem()
        item['book_name'] = book_name_list
        item['author'] = author_list
        item['publisher'] = publisher_list
        item['price'] = price_list
        item['comments_num'] = comments_num_list
        item['image_url'] = image_url_list
        item['book_key'] = self.key

        return item
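
Each item holds parallel lists for a whole result page, so every pipeline has to keep those lists aligned by index. As an alternative sketch only (not the layout this project uses), parse() could yield one item per book, which removes the risk of the fields drifting out of step; the pipelines shown later would then need to be adapted to single values instead of lists:

    # sketch: yield one item per <li> so every field stays attached to its book
    def parse(self, response):
        for li in response.xpath('//div[@id="search_nature_rg"]/ul//li'):
            item = DangdangSpiderItem()
            item['book_name'] = li.xpath('./a/@title').get()
            item['author'] = li.xpath('./p[@class="search_book_author"]/span[1]/a/@title').get()
            item['publisher'] = li.xpath('./p[@class="search_book_author"]/span[3]/a/@title').get()
            item['price'] = li.xpath('./p[@class="price"]/span[1]/text()').get()
            item['comments_num'] = li.xpath('./p[@class="search_star_line"]/a/text()').get()
            item['image_url'] = li.xpath('./a/img/@data-original').get()
            item['book_key'] = self.key
            yield item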

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for DangDang_Spider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'DangDang_Spider'

SPIDER_MODULES = ['DangDang_Spider.spiders']
NEWSPIDER_MODULE = 'DangDang_Spider.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'DangDang_Spider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'DangDang_Spider.middlewares.DangdangSpiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# Enable the custom downloader middlewares; the Referer middleware gets priority 1 so it runs first
DOWNLOADER_MIDDLEWARES = {
    'DangDang_Spider.middlewares.DangdangSpiderDownloaderMiddleware': 423,
    'DangDang_Spider.middlewares.DangdangSpiderRefererMiddleware':1
}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'DangDang_Spider.pipelines.MongoPipeline': 300,     # save the data to MongoDB
    'DangDang_Spider.pipelines.FilePipeline': 400,      # save the data to an Excel spreadsheet
    'DangDang_Spider.pipelines.SaveImagePipeline': 450,  # download images via Scrapy's built-in ImagesPipeline
    'DangDang_Spider.pipelines.PicturePipeline': 500     # pick the 10 most-commented books and plot them
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# With the settings below Scrapy caches every request: when a URL is requested again, the cached response is returned instead of hitting the site, which speeds up local debugging and reduces load on the website
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MongoDB connection settings: host / port / database name / collection name
MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'dangdang'
MONGODB_DOCNAME = 'dangdang_collection'

# root directory for downloaded images
IMAGES_STORE = './book_image'

pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.utils.project import get_project_settings  # gives access to settings.py
import pymongo
from DangDang_Spider.items import DangdangSpiderItem

import openpyxl
import os

from scrapy.pipelines.images import ImagesPipeline
import scrapy
from scrapy.exceptions import DropItem
import matplotlib.pyplot as plt

# save the scraped data to MongoDB
class MongoPipeline(object):
    settings = get_project_settings()
    host = settings['MONGODB_HOST']
    port = settings['MONGODB_PORT']
    dbName = settings['MONGODB_DBNAME']
    collectionName = settings['MONGODB_DOCNAME']

    # connect to the database before any item is processed
    def open_spider(self, spider):
        # create the client connection
        self.client = pymongo.MongoClient(host=self.host, port=self.port)
        # select the database
        self.db = self.client[self.dbName]
        # select the collection
        self.collection = self.db[self.collectionName]

    def process_item(self, item, spider):
        if isinstance(item, DangdangSpiderItem):
            # zip the parallel lists so every MongoDB document holds one complete book record
            book_name = item['book_name']
            author = item['author']
            publisher = item['publisher']
            price = item['price']
            comments_num = item['comments_num']
            for book, au, pu, pr, co in zip(book_name, author, publisher, price, comments_num):
                data = {
                    'book_name': book,
                    'author': au,
                    'publisher': pu,
                    'price': pr,
                    'comments_num': co
                }
                self.collection.insert_one(data)
        # always return the item so the remaining pipelines still receive it
        return item

    # close the connection once all items have been processed
    def close_spider(self, spider):
        self.client.close()


# save the data to an Excel spreadsheet
class FilePipeline(object):
    def __init__(self):
        if os.path.exists("当当.xlsx"):
            self.wb = openpyxl.load_workbook("当当.xlsx")  # open the existing file
            # a new sheet could be added with wb.create_sheet();
            # here the default sheet is selected by name
            self.ws = self.wb["Sheet"]
        else:
            self.wb = openpyxl.Workbook()  # create a new workbook
            self.ws = self.wb.active       # use the active worksheet
        # note: the header row is appended on every run
        self.ws.append(['书名', '作者', '出版社', '价格', '评论数'])
        self.ws.column_dimensions['A'].width = 55  # column widths
        self.ws.column_dimensions['B'].width = 55
        self.ws.column_dimensions['C'].width = 25
        self.ws.column_dimensions['D'].width = 10
        self.ws.column_dimensions['E'].width = 15

    def process_item(self, item, spider):
        # the field lists can differ in length; use the shortest one as the
        # loop bound so indexing never goes out of range
        data_count = [len(item['book_name']), len(item['author']), len(item['publisher']),
                      len(item['price']), len(item['comments_num'])]
        # sorted() ascending, so the first element is the smallest length
        data_count_least = sorted(data_count, reverse=False)[0]
        for i in range(data_count_least):
            line = [str(item['book_name'][i]), str(item['author'][i]), str(item['publisher'][i]),
                    str(item['price'][i]), str(item['comments_num'][i])]
            self.ws.append(line)
        self.wb.save("当当.xlsx")
        return item

# download the cover images with Scrapy's built-in ImagesPipeline
class SaveImagePipeline(ImagesPipeline):
    # schedule one download request per image URL
    def get_media_requests(self, item, info):
        # meta carries the search keyword, the book title and the file suffix
        # (taken from the URL) so file_path() can build an accurate file name
        for i in range(len(item['image_url'])):
            yield scrapy.Request(url=item['image_url'][i],
                                 meta={'book_key': item['book_key'],
                                       'name': item['book_name'][i],
                                       'name_suffix': item['image_url'][i].split('.')[-1]})

    # check whether the download succeeded
    def item_completed(self, results, item, info):
        # results is a list of (success, info) tuples, one per request;
        # only the first result is inspected here
        if not results[0][0]:
            raise DropItem('下载失败')   # drop the item if the download failed
        return item

    # decide where each image is stored and how it is named
    def file_path(self, request, response=None, info=None):
        # build the file name from the meta data, e.g. 'xxx.jpg' or 'xxx.png';
        # '/' in a title would be treated as a directory separator, so replace it with '_'
        book_name = request.meta['name'].replace('/', '_') + '.' + request.meta['name_suffix']
        # store the images in one folder per search keyword
        file_name = u'{0}/{1}'.format(request.meta['book_key'], book_name)
        return file_name

# pick the 10 most-commented books and draw a horizontal bar chart
class PicturePipeline(object):
    comments_num = []
    book_name = []
    book_name_sorted = []
    comments_num_ten = []

    def process_item(self, item, spider):
        self.get_plot(item['book_name'], item['comments_num'])
        return item

    def get_plot(self, name_list, comments_num_list):
        # accumulate all the data seen so far (class-level lists)
        for comment, name in zip(comments_num_list, name_list):
            self.comments_num.append(comment)
            self.book_name.append(name)
        # map comment-count string -> book title
        book_dict = dict(zip(self.comments_num, self.book_name))
        # sort the comment counts (strings such as '12345条评论') in descending order
        comments_num_sorted_list = sorted(book_dict.keys(), key=lambda num: int(num.split('条')[0]), reverse=True)
        # collect the titles of the 10 most-commented books
        for i in range(10):
            for key in book_dict.keys():
                if comments_num_sorted_list[i] == key:
                    self.book_name_sorted.append(book_dict[key])
                    break

        # draw the horizontal bar chart with matplotlib.pyplot
        plt.rcParams['font.sans-serif'] = ['SimHei']  # use SimHei so Chinese labels render
        plt.rcParams['axes.unicode_minus'] = False    # render minus signs correctly
        # the default figure is 6.0 x 4.0 inches at 100 dpi (600*400 px);
        # the settings below give 10.0 x 4.0 inches at 200 dpi (2000*800 px)
        plt.rcParams['figure.figsize'] = (10.0, 4.0)
        plt.rcParams['figure.dpi'] = 200
        for i in range(10):
            self.comments_num_ten.append(int(comments_num_sorted_list[i].split('条')[0]))
        # width must not contain str values, hence the int(...) conversion above
        plt.barh(range(10), width=self.comments_num_ten, label='评论数', color='red', alpha=0.8, height=0.7)  # bars are drawn bottom-up
        # print the exact value next to each bar; ha/va control horizontal/vertical alignment
        for y, x in enumerate(self.comments_num_ten):
            plt.text(x + 1500, y - 0.2, '%s' % x, ha='center', va='bottom')
        # label the Y-axis ticks with the book titles
        plt.yticks(range(10), self.book_name_sorted, size=8)
        # axis label
        plt.ylabel('书名')
        # chart title
        plt.title('评论数前10的书籍')
        # show the legend; plt.show() blocks until the chart window is closed
        plt.legend()
        plt.show()
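
To double-check what MongoPipeline has written, the collection can be queried directly with pymongo (a minimal standalone sketch, assuming the MongoDB settings in settings.py above and that the spider has already run):

import pymongo

client = pymongo.MongoClient(host='127.0.0.1', port=27017)
collection = client['dangdang']['dangdang_collection']
# print the first five stored documents; comments_num is still a string such as '12345条评论'
for doc in collection.find().limit(5):
    print(doc['book_name'], doc['price'], doc['comments_num'])
client.close()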

middlewares.py   

from scrapy import signals

# set the Referer header to reduce the chance of being blocked
class DangdangSpiderRefererMiddleware(object):
    # process_request must be an instance method, not a classmethod
    def process_request(self, request, spider):
        # use the request's own URL as the Referer
        referer = request.url
        if referer:
            request.headers['referer'] = referer

tips:

1. Custom pipelines have to be registered in settings.py:

ITEM_PIPELINES = {
    'DangDang_Spider.pipelines.MongoPipeline': 300,     # save the data to MongoDB
    'DangDang_Spider.pipelines.FilePipeline': 400,      # save the data to an Excel spreadsheet
    'DangDang_Spider.pipelines.SaveImagePipeline': 450,  # download images via Scrapy's built-in ImagesPipeline
    'DangDang_Spider.pipelines.PicturePipeline': 500     # pick the 10 most-commented books and plot them
}

2. When downloading images with ImagesPipeline, the image storage directory must be set in settings.py:

# root directory for downloaded images
IMAGES_STORE = './book_image'

3. The Referer middleware also has to be registered in settings.py; giving it priority 1 makes it run before the other downloader middlewares:

# Enable the custom downloader middlewares
DOWNLOADER_MIDDLEWARES = {
    'DangDang_Spider.middlewares.DangdangSpiderDownloaderMiddleware': 423,
    'DangDang_Spider.middlewares.DangdangSpiderRefererMiddleware':1
}

4. Horizontal bar charts are drawn with matplotlib.pyplot.barh(y, width, label=..., height=0.8, color='red', align='center')

width is the length of each bar, i.e. the value being plotted; passing str values raises an error, so the comment counts must be converted to int first, as in the short sketch below.
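
A minimal, self-contained sketch of the call (the counts and titles below are made-up sample data, not scraped values):

import matplotlib.pyplot as plt

counts = [int(c.split('条')[0]) for c in ['1200条评论', '860条评论', '450条评论']]  # str -> int
names = ['书A', '书B', '书C']
plt.barh(range(len(counts)), width=counts, height=0.7, color='red', alpha=0.8, label='评论数')
plt.yticks(range(len(counts)), names)
plt.legend()
plt.show()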

5. When saving an image, include its real file type in the name (.jpg, .png, etc., taken from the actual URL); otherwise the saved file has no recognizable extension and is awkward to open.

6. While saving images, the files ended up scattered across several unexpected folders (they should all have been under the C++ folder). Debugging showed that some titles contain '/', which the file system treats as a directory separator; replacing it removes the problem. The sketch after this item combines tips 5 and 6.
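
A small standalone sketch of the file-naming logic used in file_path() above (the title and URL are made-up examples):

def build_image_name(title, image_url):
    # take the real file type from the URL (tip 5)
    suffix = image_url.split('.')[-1]
    # '/' in a title would be read as a folder boundary (tip 6)
    safe_title = title.replace('/', '_')
    return safe_title + '.' + suffix

# build_image_name('C/C++程序设计', 'http://img.example.com/cover.jpg') -> 'C_C++程序设计.jpg'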


Run results:

1. Project structure (screenshot)

2. Data written to the spreadsheet (screenshot)

3. Downloaded images (screenshot)

4. Horizontal bar chart (screenshot)

Remaining issue:

When PicturePipeline draws the chart, it raises: ValueError: shape mismatch: objects cannot be broadcast to a single shape

The cause has not been confirmed, but a likely explanation is that process_item runs once per result page while comments_num, book_name, book_name_sorted and comments_num_ten are class-level lists that keep growing across pages, so from the second page onward barh(range(10), ...) and yticks(range(10), ...) receive more values than the 10 tick positions. Books that happen to share a comment count also overwrite each other in book_dict. Corrections or pointers from readers are still welcome.
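
Based on that reading, one possible fix (a sketch only, not verified against this project) is to collect the data in process_item and build the chart just once in close_spider, slicing the aggregated list to the top 10:

import matplotlib.pyplot as plt

class PicturePipeline(object):
    def __init__(self):
        # (comment_count, book_name) pairs collected from every page
        self.books = []

    def process_item(self, item, spider):
        for name, comment in zip(item['book_name'], item['comments_num']):
            # comment looks like '12345条评论'; keep only the number
            self.books.append((int(comment.split('条')[0]), name))
        return item

    # plot once, after the whole crawl, so the lists can never outgrow the 10 tick positions
    def close_spider(self, spider):
        top10 = sorted(self.books, reverse=True)[:10]
        counts = [count for count, _ in top10]
        names = [name for _, name in top10]
        plt.rcParams['font.sans-serif'] = ['SimHei']
        plt.rcParams['axes.unicode_minus'] = False
        plt.barh(range(len(counts)), width=counts, color='red', alpha=0.8, height=0.7, label='评论数')
        plt.yticks(range(len(counts)), names, size=8)
        plt.ylabel('书名')
        plt.title('评论数前10的书籍')
        plt.legend()
        plt.show()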