Table of Contents

  • Introduction
  • Data Management
  • Single-IP Scan
  • Looped Scanning
  • Vulnerability Analysis
  • Summary

Introduction

While recently tidying up some folders, I came across this project from my undergraduate studies; I hope it can serve as a useful reference.
Strictly speaking, this article implements management of vulnerable assets:

  • Write the bacnet-protocol assets downloaded from Censys (IoT device IP information, carrier information, version information, location information, and so on) into an Elasticsearch database, and add a vul_name field to each record to hold the names of all vulnerabilities found on that device, so that the complete vulnerability details of any IoT device can be looked up. Storing the data in the database lets Python connect to it and access the data in bulk.
  • Use Python with the Nessus API to scan every IP for vulnerabilities and collect all plugin_ids (with deduplication), then use the plugin_ids to build the full vulnerability inventory, and finally write all vulnerability information (description, name, severity, CVE number, publication date, discovery date, solution, together with the bacnet protocol data) into Elasticsearch so that vulnerabilities can be searched quickly.

Data Management

As for writing JSON-formatted data into Elasticsearch, the figure below shows a screenshot of part of the data stored in Elasticsearch, and a minimal indexing sketch follows the screenshot.

[Screenshot: sample asset data stored in Elasticsearch]
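
A minimal sketch of the indexing step, assuming the Censys download is a file with one JSON object per line (the file name bacnet_assets.json is hypothetical; the index name indextest and doc_type string match the ones used by the scan script later in this article):

import json
from elasticsearch import Elasticsearch

es = Elasticsearch()

def index_assets(path, index='indextest', doc_type='string'):
    # Each line of the Censys export is assumed to be one bacnet asset in JSON form.
    with open(path, 'r') as f:
        for i, line in enumerate(f):
            doc = json.loads(line)
            doc['vul_name'] = []  # later filled with this device's vulnerability names
            es.index(index=index, doc_type=doc_type, id=i, body=doc)

index_assets('bacnet_assets.json')  # hypothetical file name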

Single-IP Scan

First, let's look at how Nessus performs a scan. In the figure below you can see that scanning 98.100.184.253 took 5 minutes, the policy used was Basic Network Scan, and the scanner was the Local Scanner; the scan returned 7 findings, and the lower-right corner shows two vulnerabilities of medium severity. Clicking a finding shows its details (description, solution, CVE number, port, and so on).

[Screenshot: Nessus scan overview for 98.100.184.253]

[Screenshot: details of the two medium-severity findings]

Looped Scanning

Both launching a scan and retrieving its results require logging in first, using the Nessus username and password against the /session endpoint: POST /session (send a POST request to /session with the username and password as the request payload). If the credentials are correct, /session returns a token; put that token in the request headers, and any request that carries those headers will be accepted.
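
A minimal sketch of this login flow using requests directly (the full script further down wraps the same calls in its login and connect helpers); the token goes into an X-Cookie header:

import requests

requests.packages.urllib3.disable_warnings()
NESSUS = 'https://localhost:8834'

# POST /session with the username and password in the payload.
resp = requests.post(NESSUS + '/session',
                     json={'username': 'zhang', 'password': '**********'},
                     verify=False)
token = resp.json()['token']

# Every later request carries the token in the X-Cookie header.
headers = {'X-Cookie': 'token={0}'.format(token),
           'content-type': 'application/json'}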

The API endpoint for launching a scan is /scans/{scan_id}/launch, called with the POST method; the scan targets go in the request payload. After the scan starts, the task has to be monitored to determine whether it launched successfully, failed, or was stopped. This is done with GET /scans/{scan_id}: the scan status can be read from the response, so the endpoint just needs to be polled at regular intervals.
Once the task is seen to have finished, the results of the scan can be processed. Besides the task status, the /scans/{scan_id} endpoint also returns a vulnerability list response['vulnerabilities'] and a host list response['hosts'], from which the detailed scan results are obtained. This part of the work is handled by the download function.

The vulnerabilities field returns content of the following form:

"vulnerabilities" :
 [ { "plugin_id": {integer}, 
     "plugin_name ": {string} ,
     " plugin_family ": {string}, 
     "count" : {integer}, 
     "vuln_index": {integer}, 
     "severity_index " : {integer }]

This is an overview of the scan: it contains the vulnerability counts and the basic information of each finding. The most important field is plugin_id, which is used to fetch the vulnerability details. The extract function pulls out the plugin_ids.
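
As a side note, the deduplicated plugin_ids could also be collected straight from the /scans/{scan_id} response instead of from the exported file; a minimal sketch, reusing the connect helper defined in the full script below:

def unique_plugin_ids(scan_id):
    # GET /scans/{scan_id} returns the vulnerability overview of the scan.
    response = connect('GET', '/scans/{0}'.format(scan_id))
    # A set drops duplicate plugin_ids across findings.
    return set(v['plugin_id'] for v in response.get('vulnerabilities', []))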

The endpoint for querying vulnerability details is GET /plugins/plugin/{id}; its return value looks like this:

{ "id " : { integer } , "name " : {string} , "family_name" : {string}, "attributes " : [{"attribute_name" : {string} , "attribute_value " : {string}}]}

The attributes list of dictionaries in this return value carries the various kinds of vulnerability information, including the CVE number, severity, synopsis, full description, and solution. The get_vul_detail function uses it to obtain the vulnerability details.

The full code is as follows:

# -*- coding: UTF-8 -*-
import requests
import json
import time
import sys
import os
import re
from elasticsearch import Elasticsearch
# Scan every IP in the database, collect the deduplicated plugin IDs, then use them to build a file of detailed vulnerability information

requests.packages.urllib3.disable_warnings()
es = Elasticsearch()
url = 'https://localhost:8834'
verify = False
token = ''
username = 'zhang'  # your Nessus username
password = '**********'  # your Nessus password
para = {
    "_source":"ip"
}

def build_url(resource):
    return '{0}{1}'.format(url, resource)

def connect(method, resource, data=None):
    """
    Send a request
    Send a request to Nessus based on the specified data. If the session token
    is available add it to the request. Specify the content type as JSON and
    convert the data to JSON format.
    """
    headers = {'X-Cookie': 'token={0}'.format(token),
               'content-type': 'application/json'}

    data = json.dumps(data)

    if method == 'POST':
        r = requests.post(build_url(resource), data=data, headers=headers, verify=verify)
    elif method == 'PUT':
        r = requests.put(build_url(resource), data=data, headers=headers, verify=verify)
    elif method == 'DELETE':
        r = requests.delete(build_url(resource), data=data, headers=headers, verify=verify)
    else:
        r = requests.get(build_url(resource), params=data, headers=headers, verify=verify)

    # Exit if there is an error.
    if r.status_code != 200:
        e = r.json()
        print(e['error'])
        sys.exit()

    # When downloading a scan we need the raw contents not the JSON data.
    if 'download' in resource:
        return r.content
    else:
        return r.json()

def login(usr, pwd):
    """
    Login to nessus.
    """

    login = {'username': usr, 'password': pwd}
    data = connect('POST', '/session', data=login)

    return data['token']

def logout():
    """
    Logout of nessus.
    """

    connect('DELETE', '/session')

def get_policies():
    """
    Get scan policies
    Get all of the scan policies but return only the title and the uuid of
    each policy.
    """

    data = connect('GET', '/editor/policy/templates')

    return dict((p['title'], p['uuid']) for p in data['templates'])

def get_history_ids(sid):
    """
    Get history ids
    Create a dictionary of scan uuids and history ids so we can lookup the
    history id by uuid.
    """
    data = connect('GET', '/scans/{0}'.format(sid))

    return dict((h['uuid'], h['history_id']) for h in data['history'])

def get_scan_history(sid, hid):
    """
    Scan history details
    Get the details of a particular run of a scan.
    """
    params = {'history_id': hid}
    data = connect('GET', '/scans/{0}'.format(sid), params)

    return data['info']

def add(name, desc, targets, pid):
    """
    Add a new scan
    Create a new scan using the policy_id, name, description and targets. The
    scan will be created in the default folder for the user. Return the id of
    the newly created scan.
    """

    scan = {'uuid': pid,
            'settings': {
                'name': name,
                'description': desc,
                'text_targets': targets}
            }

    data = connect('POST', '/scans', data=scan)

    return data['scan']

def update(scan_id, name, desc, targets, pid=None):
    """
    Update a scan
    Update the name, description, targets, or policy of the specified scan. If
    the name and description are not set, then the policy name and description
    will be set to None after the update. In addition the targets value must
    be set or you will get an "Invalid 'targets' field" error.
    """
    scan = {}
    scan['settings'] = {}
    scan['settings']['name'] = name
    scan['settings']['desc'] = desc
    scan['settings']['text_targets'] = targets

    if pid is not None:
        scan['uuid'] = pid

    data = connect('PUT', '/scans/{0}'.format(scan_id), data=scan)

    return data

def launch(sid):
    """
    Launch a scan
    Launch the scan specified by the sid.
    """

    data = connect('POST', '/scans/{0}/launch'.format(sid))

    return data['scan_uuid']

def status(sid, hid):
    """
    Check the status of a scan run
    Get the historical information for the particular scan and hid. Return
    the status if available. If not return unknown.
    """

    d = get_scan_history(sid, hid)
    return d['status']

def export_status(sid, fid):
    """
    Check export status
    Check to see if the export is ready for download.
    """

    data = connect('GET', '/scans/{0}/export/{1}/status'.format(sid, fid))

    return data['status'] == 'ready'

def export(sid, hid):
    """
    Make an export request
    Request an export of the scan results for the specified scan and
    historical run. In this case the format is hard coded as nessus but the
    format can be any one of nessus, html, pdf, csv, or db. Once the request
    is made, we have to wait for the export to be ready.
    """
    data = {'history_id': hid,
            'format': 'nessus'}

    data = connect('POST', '/scans/{0}/export'.format(sid), data=data)

    fid = data['file']

    while export_status(sid, fid) is False:
        time.sleep(5)

    return fid

def download(sid, fid):
    """
    Download the scan results
    Download the scan results stored in the export file specified by fid for
    the scan specified by sid.
    """
    data = connect('GET', '/scans/{0}/export/{1}/download'.format(sid, fid))
    filename = 'nessus_{0}_{1}.nessus'.format(sid, fid)

    print('Saving scan results to {0}.'.format(filename))
    with open(filename, 'w') as f:
        f.write(data)
    return filename

def extract(file):
    outfile = 'output.txt'
    count = 0
    lines_seen = set()
    in_file = open(file, 'r')
    out_file = open(outfile, 'a')
    lines = in_file.readlines()

    for line in lines:
        if line not in lines_seen:
            str_name = line.split(" ")[0]
            str1 = 'pluginId'
            if (str1 in str_name):
                out_file.write(line)
                count += 1
            lines_seen.add(line)
    in_file.close()
    out_file.close()
    os.remove(file)
    return outfile

def get_vul_detail(file):
    vul_detail = {
        "cve_number": "",
        "vul_name": "",
        "vul_intro": "",
        "vul_detail": "",
        "vul_level": 0,
        "solution":"",
        "release_time": "",
        "discover_time": ""
    }
    header = {'X-Cookie': 'token={0}'.format(token),
               'content-type': 'application/json'}
    endfile = 'endfile.txt'
    id_file = open(file, 'r')
    detail_file = open(endfile,'w')
    end = open('end.txt','w+')
    str1 = r'\''
    str2 = r'"'

    lines = id_file.readlines()
    for line in lines:
        m = re.findall(r'(\w*[0-9]+)\w*', line)
        plugin_id = m[0]
        url = 'https://localhost:8834/plugins/plugin/{plugin_id}'.format(plugin_id=plugin_id)
        response = requests.get(url, headers=header, verify=False)
        if response is not None:
            result = json.loads(response.text)
            # vulnerability name
            vul_detail['vul_name'] = str(result['name']).encode('utf-8')
            # walk the attributes list to build the result
            for attr in result['attributes']:
                attr_name = attr['attribute_name']
                # CVE number
                if attr_name == 'cve':
                    vul_detail['cve_number'] = str(attr['attribute_value']).encode('utf-8')
                    continue
                # synopsis
                elif attr_name == 'synopsis':
                    vul_detail['vul_intro'] = str(attr['attribute_value']).encode('utf-8')
                    continue
                # full description
                elif attr_name == 'description':
                    vul_detail['vul_detail'] = str(attr['attribute_value']).encode('utf-8')
                    continue
                # severity
                elif attr_name == 'risk_factor':
                    vul_detail['vul_level'] = str(attr['attribute_value']).encode('utf-8')
                    continue
                # solution
                elif attr_name == 'solution':
                    vul_detail['solution'] = str(attr['attribute_value']).encode('utf-8')
                    continue
                # plugin publication date
                elif attr_name == 'plugin_publication_date':
                    vul_detail['release_time'] = str(attr['attribute_value']).encode('utf-8')
                    continue
                # vulnerability publication date
                elif attr_name == 'vuln_publication_date':
                    vul_detail['discover_time'] = str(attr['attribute_value']).encode('utf-8')
                    continue
            detail_file.write(str(vul_detail)+'\n')
    detail_file.close()
    detail_file = open(endfile, 'r')
    for ss in detail_file.readlines():
        tt = re.sub(str1, str2, ss)
        end.write(tt)
    id_file.close()
    detail_file.close()
    end.close()
    os.remove(endfile)
    #os.remove(file)

if __name__ == '__main__':
    file_puginid = ''
    print('Login')
    token = login(username, password)
    m = 0
    flag = 0

    array_search = es.search(index="indextest", doc_type="string", params=para, size=5, request_timeout=60)
    jsons = array_search["hits"]["hits"]
    s = []
    for hits in jsons:
        s.append(hits["_source"]["ip"])

    for ip in s:
        ss = str(ip).encode('utf-8') + '\n'
        # Skip IPs that have already been scanned (recorded one per line in IP.txt).
        IPdone = open('IP.txt', 'a+')
        IPdone.seek(0)
        m = 1 if ss in IPdone.readlines() else 0
        IPdone.close()

        if m == 0:
            flag = 1
            IPdone = open('IP.txt', 'a')
            IPdone.write(ss)
            IPdone.close()
            # print('Adding new scan.')
            policies = get_policies()
            policy_id = policies['Basic Network Scan']
            scan_data = add('Test Scan', 'Create a new scan with API', '192.168.1.1', policy_id)
            scan_id = scan_data['id']

            # print('Updating scan with new targets.')
            update(scan_id, scan_data['name'], scan_data['description'], ip)
            print(ip)
            # print('Launching new scan.')
            scan_uuid = launch(scan_id)
            history_ids = get_history_ids(scan_id)
            history_id = history_ids[scan_uuid]
            while status(scan_id, history_id) != 'completed':
                time.sleep(5)

            # print('Exporting the completed scan.')
            file_id = export(scan_id, history_id)
            filename = download(scan_id, file_id)
            file_puginid = extract(filename)
    if flag == 1:
        get_vul_detail(file_puginid)

The main function works as follows:

  • Connect to Elasticsearch and fetch the IPs from the database into a list (you do not have to fetch every IP at once; just adjust the size parameter, e.g. size = 5 scans the first 5 IPs in the database each run). The script also avoids rescanning IPs that have already been scanned: since each run starts from the beginning, the scanned IPs are saved in IP.txt and checked before every scan.
  • For each IP, launch one scan with the Basic Network Scan policy and the Local Scanner, use the download and extract functions to obtain the plugin_ids (deduplicated here, because the goal is the overall vulnerability inventory), and then use get_vul_detail (via the attributes list) to fetch the detailed vulnerability information for each plugin_id.

The final output is the text file end.txt, which holds the scan results.

[Screenshot: contents of end.txt]

Save end.txt in JSON form and write the results back into the database, as sketched below.
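
A minimal sketch of this write-back step, assuming each line of end.txt is valid JSON after the quote replacement in get_vul_detail (the target index name vul_detail is an assumption):

import json
from elasticsearch import Elasticsearch

es = Elasticsearch()

with open('end.txt', 'r') as f:
    for i, line in enumerate(f):
        doc = json.loads(line)  # one vulnerability record per line
        es.index(index='vul_detail', doc_type='string', id=i, body=doc)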

Vulnerability Analysis

The figure below shows some of the vulnerability assets in the database.

[Screenshot: sample vulnerability records in the database]

Taking the fourth vulnerability as an example, a brief analysis:

  • Synopsis: Coils from a Modicon field device, such as a PLC, RTU, or IED, can be read using function code 1.
  • Name: Modbus/TCP Coil Access
  • Severity: Medium
  • Publication date: 2006/12/11
  • CVE number: CVE-2000-1200
  • Solution: Restrict access to the Modbus port (TCP/502) to authorized Modbus clients.
  • Details: Using function code 1, Modbus can read the coils in a Modbus slave, which is commonly used by SCADA and DCS field devices. Coils refer to binary output settings and are typically mapped to actuators. A sample of coil settings read from the device is provided in the plugin output. The ability to read coils may help an attacker profile a system and identify ranges of registers to alter via a write coil message.

In other words, a Modbus network is an industrial communication system built from programmable logic controllers with intelligent terminals and computers connected over public or dedicated local lines; it comprises both hardware and software and is used for all kinds of data acquisition and process monitoring. The vulnerability, CVE-2000-1200, named Modbus/TCP Coil Access, means that coils on a Modicon field device (such as a PLC, RTU, or IED) can be read using function code 1. It can be mitigated by restricting access to the Modbus port (TCP/502) to authorized Modbus clients. Using function code 1, Modbus can read the coils in a Modbus slave, a feature commonly used by SCADA and DCS field devices. Coils are binary output settings, typically mapped to actuators; the plugin output provides a sample of the coil settings read from the device. The ability to read coils may help an attacker profile a system and identify the ranges of registers to alter via write-coil messages.

Summary

With automated scanning, we obtain the mapping between a large number of IoT devices and their vulnerabilities. The database can then be queried for the basic information of every IoT device together with all of its vulnerabilities, or for the set of IoT devices that share a given vulnerability, letting researchers search and analyze quickly. Once this mapping is in place, it also supports vulnerability early warning and asset-distribution statistics: when a new vulnerability is disclosed, the information in the database can be used for quick statistical analysis to locate the affected assets nationwide or worldwide, so that a response can be mounted quickly.