Recently, while searching for ebooks, I came across an interesting site with a lot of carefully proofread ebooks. Out of curiosity, I decided to write a crawler to compile the titles into one list (the main motivation being that the site's canvas background effect is extremely resource-hungry in Chrome), so I can search for a book name locally and then go find the download link. The script took about ten minutes to write, and one evening was roughly enough for it to finish running.

Sharing the code below for reference only (it is fairly rough).

package com.fun

import com.fun.db.mysql.MySqlTest
import com.fun.frame.httpclient.FanLibrary
import com.fun.utils.Regex
import org.slf4j.Logger
import org.slf4j.LoggerFactory

class T extends FanLibrary {

    static Logger logger = LoggerFactory.getLogger(T.class)

    public static void main(String[] args) {
        // test(322)

        def list = 1..1000 as List

        list.each { x ->
            try {
                test(x)
            } catch (Exception e) {
                logger.error(x.toString())
                output(e)
            }
            logger.warn(x.toString())
            sleep(2000)    // pause between requests so the site is not hammered
        }

        testOver()
    }

    static def test(int id) {
        // def get = getHttpGet("https://****/books/9798.html")
        def get = getHttpGet("https://****/books/" + id + ".html")
        def response = getHttpResponse(get)
        def string = response.getString("content")
        // skip ids that resolve to a missing page
        if (string.contains("您需求的文件不存在") || string.contains("页面未找到")) return
        output(string)
        // pull out the title, intro, and author attributes, plus the two download links
        def all = Regex.regexAll(string, "class=\"bookpic\"> <img title=\".*?\"").get(0)
        def all2 = Regex.regexAll(string, "content=\"内容简介.*?\"").get(0)
        def all3 = Regex.regexAll(string, "title=\"作者:.*?\"").get(0)
        def all40 = Regex.regexAll(string, "https://sobooks\\.cc/go\\.html\\?url=https{0,1}://.*?\\.ctfile\\.com/.*?\"")
        def all4 = all40.size() == 0 ? "" : all40.get(0)
        def all50 = Regex.regexAll(string, "https://sobooks\\.cc/go\\.html\\?url=https{0,1}://pan\\.baidu\\.com/.*?\"")
        def all5 = all50.size() == 0 ? "" : all50.get(0)
        output(all)
        output(all2)
        output(all3)
        output(all4)
        output(all5)
        // strip the attribute name and surrounding quotes, keeping only the value
        def name = all.substring(all.lastIndexOf("=") + 2, all.length() - 1)
        def author = all3.substring(all3.lastIndexOf("=") + 2, all3.length() - 1)
        def intro = all2.substring(all2.lastIndexOf("=") + 2, all2.length() - 1)
        def url1 = all4 == "" ? "" : all4.substring(all4.lastIndexOf("=") + 1, all4.length() - 1)
        def url2 = all5 == "" ? "" : all5.substring(all5.lastIndexOf("=") + 1, all5.length() - 1)
        output(name, author, intro, url1, url2)
        def sql = String.format("INSERT INTO books (name,author,intro,urlc,urlb,bookid) VALUES (\"%s\",\"%s\",\"%s\",\"%s\",\"%s\",%d)", name, author, intro, url1, url2, id)
        MySqlTest.sendWork(sql)
    }
}
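The substring/lastIndexOf arithmetic above is fragile: if the markup shifts by even one character, the slicing boundaries break. A sketch of the same extraction using regex capture groups instead (plain Java here, which runs on the same JVM as the Groovy above; the HTML fragment and class name are made-up examples, not taken from the real site):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Extract {
    // Capture the quoted attribute value directly instead of slicing with lastIndexOf.
    static final Pattern TITLE = Pattern.compile("class=\"bookpic\"> <img title=\"(.*?)\"");
    static final Pattern AUTHOR = Pattern.compile("title=\"作者:(.*?)\"");

    // Return the first captured group, or "" when the page lacks the field
    // (the same fallback the crawler uses for missing download links).
    static String first(Pattern p, String html) {
        Matcher m = p.matcher(html);
        return m.find() ? m.group(1) : "";
    }

    public static void main(String[] args) {
        // Hypothetical fragment in the same shape as the pages being crawled.
        String html = "<div class=\"bookpic\"> <img title=\"Some Book\" src=\"x\"> <a title=\"作者:Someone\">";
        System.out.println(first(Extract.TITLE, html));   // Some Book
        System.out.println(first(Extract.AUTHOR, html));  // Someone
    }
}
```

With capture groups, the quoting and attribute names live only in the pattern, so there is no second round of index bookkeeping to get wrong.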

Overall, I am quite happy with the result.
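One thing worth hardening: the INSERT is assembled with String.format, so a double quote inside a title or intro (common in book blurbs) would break the statement, and the values are open to SQL injection. A safer sketch with a JDBC prepared statement (the class name is hypothetical; the table layout is assumed to match the one used above, and the Connection would come from whatever pool MySqlTest wraps):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BookDao {
    // Placeholders are bound by the JDBC driver, so quotes inside the text
    // cannot break the statement or inject SQL.
    static final String INSERT_SQL =
            "INSERT INTO books (name, author, intro, urlc, urlb, bookid) VALUES (?, ?, ?, ?, ?, ?)";

    static void insertBook(Connection conn, String name, String author,
                           String intro, String urlc, String urlb, int bookId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(INSERT_SQL)) {
            ps.setString(1, name);
            ps.setString(2, author);
            ps.setString(3, intro);
            ps.setString(4, urlc);
            ps.setString(5, urlb);
            ps.setInt(6, bookId);
            ps.executeUpdate();
        }
    }
}
```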

Ebook site crawler in practice: reply "电子书" (ebook) to the 软件测试 (software testing) WeChat official account to get the site address and a download link for the CSV file.