This lesson of the 巡安似海 PyHacker Writing Guide covers "Building a Website Admin-Panel Scanner".

It includes how to handle fake 200 pages, smart 404 detection, and more.

If you like writing scripts in Python, feel free to follow along and build it yourself.

Environment: Python 2.x

0x01:

The modules we need:

import requests
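
requests is a third-party library; if it is not installed yet, install it first (assuming pip is available for your Python 2 interpreter):

pip install requests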


0x02:

First, write out the basic request code:

import requests


def dir(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
    req = requests.get(url=url, headers=headers)
    print req.status_code


dir('http://www.hackxc.cc')


0x03:

Set a request timeout and ignore untrusted certificates:

import urllib3
urllib3.disable_warnings()  # silence the InsecureRequestWarning triggered by verify=False

req = requests.get(url=url, headers=headers, timeout=3, verify=False)
Then add exception handling:

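A minimal sketch of the wrapper, using the same bare try/except style as the full script at the end, so that timeouts and connection errors are simply skipped:

import requests
import urllib3
urllib3.disable_warnings()


def dir(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
    try:
        # 3-second timeout; verify=False skips certificate validation
        req = requests.get(url=url, headers=headers, timeout=3, verify=False)
        print req.status_code
    except:
        # Unreachable hosts, timeouts etc. are silently ignored
        pass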

Give it a quick test run.


One more improvement: only report the URL when the status code is 200.

if req.status_code == 200:
    print "[*]", req.url


0x04:

Sooner or later you will run into fake 200 pages, so let's handle those too.

The approach:

First request /hackxchackxchackxc.php and /xxxxxxxxxxxx, two paths that almost certainly do not exist, and record the content length of each response. Then, during the actual scan, any response whose length equals one of those baselines is judged to be a 404.

def dirsearch(u, dir):
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
        # Baseline requests used to fingerprint fake 200 pages
        hackxchackxchackxc = '/hackxchackxchackxc.php'
        hackxchackxchackxc_404 = requests.get(url=u+hackxchackxchackxc, headers=headers)
        # print len(hackxchackxchackxc_404.content)
        xxxxxxxxxxxx = '/xxxxxxxxxxxx'
        xxxxxxxxxxxx_404 = requests.get(url=u + xxxxxxxxxxxx, headers=headers)
        # print len(xxxxxxxxxxxx_404.content)

        # The actual scan
        req = requests.get(url=u+dir, headers=headers, timeout=3, verify=False)
        # print len(req.content)
        if req.status_code == 200:
            if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
                print "[+]", req.url
            else:
                print u+dir, 404
    except:
        pass

Very nice.



0x05:

Now let's make the results save automatically.

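The change is small: inside the success branch, append each hit to success_dir.txt as it is found. This matches the write logic in the full script below:

if req.status_code == 200:
    if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
        print "[+]", req.url
        # Append each confirmed hit to the results file
        with open('success_dir.txt', 'a+') as f:
            f.write(req.url + "\n")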


0x06:

Full code:

#!/usr/bin/python
# -*- coding:utf-8 -*-
import requests
import urllib3
urllib3.disable_warnings()


urls = []
def dirsearch(u, dir):
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
        # Baseline requests used to fingerprint fake 200 pages
        hackxchackxchackxc = '/hackxchackxchackxc.php'
        hackxchackxchackxc_404 = requests.get(url=u+hackxchackxchackxc, headers=headers)
        # print len(hackxchackxchackxc_404.content)
        xxxxxxxxxxxx = '/xxxxxxxxxxxx'
        xxxxxxxxxxxx_404 = requests.get(url=u + xxxxxxxxxxxx, headers=headers)
        # print len(xxxxxxxxxxxx_404.content)

        # The actual scan
        req = requests.get(url=u+dir, headers=headers, timeout=3, verify=False)
        # print len(req.content)
        if req.status_code == 200:
            if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
                print "[+]", req.url
                # Append each confirmed hit to the results file
                with open('success_dir.txt', 'a+') as f:
                    f.write(req.url + "\n")
            else:
                print u+dir, 404
        else:
            print u + dir, 404
    except:
        pass


if __name__ == '__main__':
    url = raw_input('\nurl:')
    print ""
    if 'http' not in url:
        url = 'http://' + url
    dirpath = open('rar.txt', 'r')
    for dir in dirpath.readlines():
        dir = dir.strip()
        dirsearch(url, dir)
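
The script reads its wordlist from rar.txt in the working directory, one path per line; each entry should start with /, since it is appended directly to the base URL. The entries below are only illustrative; use whatever admin-panel dictionary you prefer:

/admin/
/admin.php
/admin/login.php
/manage/
/login.php

Run it under Python 2, enter the target at the url: prompt, and every confirmed path is printed and appended to success_dir.txt.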


If you enjoyed this, give me a follow~
