XPath

Hands-On Example

Scraping and downloading images from 站长素材 (sc.chinaz.com)

Approach:

(1) Build a customized request object
(2) Fetch the page source
(3) Download the images

Get the URL

url = 'https://sc.chinaz.com/tupian/rentiyishu.html'

# Page 1: https://sc.chinaz.com/tupian/rentiyishu.html
# Page 2: https://sc.chinaz.com/tupian/rentiyishu_2.html

Find the pattern between page number and URL

if page == 1:
    url = 'https://sc.chinaz.com/tupian/rentiyishu.html'
else:
    url = 'https://sc.chinaz.com/tupian/rentiyishu_' + str(page) + '.html'
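The page-to-URL mapping above can be wrapped in a small helper. This is a minimal sketch; the name `page_url` is illustrative and not part of the original notes:

```python
def page_url(page):
    """Return the listing URL for a given page number.

    Page 1 has no numeric suffix; pages 2 and up append `_<page>`.
    """
    base = 'https://sc.chinaz.com/tupian/rentiyishu'
    if page == 1:
        return base + '.html'
    return base + '_' + str(page) + '.html'

print(page_url(1))  # https://sc.chinaz.com/tupian/rentiyishu.html
print(page_url(3))  # https://sc.chinaz.com/tupian/rentiyishu_3.html
```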

image-20230322212706761

The User-Agent (UA) header is what gets the request past the site's anti-scraping check.

About UA: User Agent, abbreviated UA, is a special header string that lets the server identify the client's operating system and version, CPU type, browser and version, browser kernel, rendering engine, language, plugins, and so on.
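To see why the override matters: by default urllib announces itself as a Python script, which many servers treat as a bot and reject. A quick way to inspect the default (standard library only):

```python
import urllib.request

# urllib's default User-Agent is "Python-urllib/<version>"; servers can
# spot this immediately, which is why the notes replace it with a real
# browser UA string.
opener = urllib.request.build_opener()
default_ua = dict(opener.addheaders).get('User-agent', '')
print(default_ua)  # e.g. Python-urllib/3.11
```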

Request headers

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36'
}

Build the request object

request = urllib.request.Request(url=url, headers=headers)

Send the request to the server

response = urllib.request.urlopen(request)

Decode the page source

content = response.read().decode('utf-8')

Download

Parse the server response

tree = etree.HTML(content)

Use XPath to extract the image URLs from the source

src_list = tree.xpath('//div[@class="container"]//img/@data-original')

Check the XPath syntax with a browser XPath plugin
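Besides the browser plugin, the expression can also be checked directly with lxml on a tiny stand-in for the page's markup. The HTML below is illustrative only; the key point is that the site lazy-loads images, so the real URL lives in `data-original` rather than `src`:

```python
from lxml import etree

# A minimal mock of the listing page's structure (not the real markup).
html = '''
<div class="container">
  <div class="item"><img alt="photo-a" data-original="//example.com/a.jpg"></div>
  <div class="item"><img alt="photo-b" data-original="//example.com/b.jpg"></div>
</div>
'''

tree = etree.HTML(html)
names = tree.xpath('//div[@class="container"]//img/@alt')
srcs = tree.xpath('//div[@class="container"]//img/@data-original')
print(names)  # ['photo-a', 'photo-b']
print(srcs)   # ['//example.com/a.jpg', '//example.com/b.jpg']
```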


The complete source code:

import os
import urllib.request

from lxml import etree

# (1) Build a customized request object
# (2) Fetch the page source
# (3) Download the images

# Goal: download the images from the first ten pages.
# https://sc.chinaz.com/tupian/rentiyishu.html
# https://sc.chinaz.com/tupian/rentiyishu_2.html
# https://sc.chinaz.com/tupian/rentiyishu_3.html


def create_request(page):
    # Page 1 has no numeric suffix; later pages append "_<page>".
    if page == 1:
        url = 'https://sc.chinaz.com/tupian/rentiyishu.html'
    else:
        url = 'https://sc.chinaz.com/tupian/rentiyishu_' + str(page) + '.html'

    # A browser User-Agent so the server does not reject the script.
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36'
    }

    return urllib.request.Request(url=url, headers=headers)


def get_content(request):
    # Send the request to the server and decode the response body.
    response = urllib.request.urlopen(request)
    return response.read().decode('utf-8')


def down_load(content):
    tree = etree.HTML(content)

    # The site lazy-loads images, so the real URL is in data-original,
    # not src; the alt text doubles as the filename.
    name_list = tree.xpath('//div[@class="container"]//img/@alt')
    src_list = tree.xpath('//div[@class="container"]//img/@data-original')

    # Make sure the target directory exists, or urlretrieve will fail.
    os.makedirs('./loveimg', exist_ok=True)

    for name, src in zip(name_list, src_list):
        # src is protocol-relative (//...), so prepend the scheme.
        url = 'https:' + src
        urllib.request.urlretrieve(url=url, filename='./loveimg/' + name + '.jpg')


if __name__ == '__main__':
    start_page = int(input('Enter the start page: '))
    end_page = int(input('Enter the end page: '))

    for page in range(start_page, end_page + 1):
        print(page)
        # (1) Build the request object
        request = create_request(page)
        # (2) Fetch the page source
        content = get_content(request)
        # (3) Download
        down_load(content)
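One caveat with the script above: the image `alt` text goes straight into the filename, and it may contain characters that are illegal in filenames (`/`, `:`, `*`, and so on). A small sanitizer is worth adding; `safe_filename` below is an illustrative helper, not part of the original script:

```python
import re

def safe_filename(name):
    """Replace characters that are illegal in Windows/Unix filenames."""
    return re.sub(r'[\\/:*?"<>|]', '_', name).strip()

print(safe_filename('photo/one: 01'))  # photo_one_ 01
```

It would slot into `down_load` as `filename='./loveimg/' + safe_filename(name) + '.jpg'`.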