python: Create a table from Burp's HTTP header findings

'''
This script takes the file path of a Burp Suite XML report as an argument. The easiest way to
generate this report is to go to the Scanner -> Issue Activity section in Burp and sort by
issue type. Then select all of the following findings:
    SSL cookie without secure flag set
    Strict transport security not enforced
    Cookie without HttpOnly flag set
    Frameable response (potential Clickjacking)
Once selected, right-click and report the selected issues as an XML report.
'''
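For reference, here is a minimal sketch of the XML shape the script assumes (the sample data is hypothetical). Note that BeautifulSoup's `html.parser` lowercases tag names, which is why the script looks up `issuedetailitem` rather than Burp's `issueDetailItem`:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment mimicking one entry of a Burp XML report.
sample = '''
<issues>
  <issue>
    <name>Cookie without HttpOnly flag set</name>
    <host>https://example.com</host>
    <issueDetailItem>Cookie: sessionid</issueDetailItem>
  </issue>
</issues>
'''

soup = BeautifulSoup(sample, features='html.parser')
issue = soup.find('issue')
print(issue.find('host').text)                           # https://example.com
print(issue.find('issuedetailitem').text.split(' ')[1])  # sessionid
```

The `split(' ')[1]` trick assumes the detail item reads `Cookie: <name>`, which is the pattern the script relies on below.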

import os
import sys

from bs4 import BeautifulSoup

if len(sys.argv) < 2:
    print('Specify the path to the XML file as an argument.')
    sys.exit(1)

table_headers = ['Host',
                 'HttpOnly',
                 'Strict-Transport-Security',
                 'Secure',
                 'X-Frame-Options']

# Build an HTML table from a list of column headers and a list of rows;
# called at the bottom of the script when writing the report.
def make_table(headers, rows):
    default_table_headers = ('<p></p><table class="default-table" style="border-collapse: collapse;" '
                             'border="0" cellspacing="0" cellpadding="0"><thead>')
    default_table_body = '</thead><tbody>'
    default_table_footer = '</tbody></table><p></p>'
    html = []
    html.append(default_table_headers)
    html.append('<tr style="height: 15pt;">')
    for column in headers:
        html.append('<td><p>{}</p></td>'.format(column))
    html.append('</tr>')
    html.append(default_table_body)
    for row in rows:
        html.append("<tr>")
        for item in row:
            html.append("<td>{}</td>".format(item))
        html.append("</tr>")
    html.append(default_table_footer)
    return html
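As a quick sanity check, a trimmed standalone version of the table builder (styling attributes omitted) shows the output shape on hypothetical data:

```python
def make_simple_table(headers, rows):
    # Same structure as make_table above, minus the class/style attributes.
    html = ['<table><thead><tr>']
    html += ['<td><p>{}</p></td>'.format(h) for h in headers]
    html.append('</tr></thead><tbody>')
    for row in rows:
        html.append('<tr>' + ''.join('<td>{}</td>'.format(c) for c in row) + '</tr>')
    html.append('</tbody></table>')
    return html

page = ''.join(make_simple_table(['Host', 'X-Frame-Options'],
                                 [['https://example.com', 'Present']]))
print(page)
# <table><thead><tr><td><p>Host</p></td><td><p>X-Frame-Options</p></td></tr>
# </thead><tbody><tr><td>https://example.com</td><td>Present</td></tr></tbody></table>
```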


def make_header_table(issues):
    def cookie_summary(cookies):
        # List each cookie missing the flag, or report that the flag is present.
        if cookies:
            return '<br>'.join('Missing: {}'.format(c) for c in sorted(cookies))
        return 'Present'

    # One row per unique host in the report.
    hosts = set(i.find('host').text for i in issues)
    table = []
    for host in hosts:
        http_only_cookies = set()
        secure_cookies = set()
        strict_transport = False
        xframe = False
        for issue in issues:
            if issue.find('host').text != host:
                continue
            name = issue.find('name').text
            if name == 'Strict transport security not enforced':
                strict_transport = True
            elif name == 'Frameable response (potential Clickjacking)':
                xframe = True
            elif name == 'Cookie without HttpOnly flag set':
                http_only_cookies.add(issue.find('issuedetailitem').text.split(' ')[1])
            elif name == 'SSL cookie without secure flag set':
                secure_cookies.add(issue.find('issuedetailitem').text.split(' ')[1])
        # Column order matches table_headers: Host, HttpOnly, HSTS, Secure, X-Frame-Options.
        table.append([host,
                      cookie_summary(http_only_cookies),
                      'Missing' if strict_transport else 'Present',
                      cookie_summary(secure_cookies),
                      'Missing' if xframe else 'Present'])
    return table
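The nested loop above rescans every issue once per host, which is fine for small reports; for larger ones, grouping issues by host in a single pass is a common alternative. A sketch on hypothetical findings (not the script's own data):

```python
from collections import defaultdict

# Hypothetical (host, issue name) pairs as they would come out of a report.
findings = [
    ('https://a.example', 'Strict transport security not enforced'),
    ('https://a.example', 'Frameable response (potential Clickjacking)'),
    ('https://b.example', 'Frameable response (potential Clickjacking)'),
]

by_host = defaultdict(set)
for host, name in findings:
    by_host[host].add(name)  # single pass over all findings

for host in sorted(by_host):
    hsts = 'Missing' if 'Strict transport security not enforced' in by_host[host] else 'Present'
    print(host, hsts)
# https://a.example Missing
# https://b.example Present
```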



filepath = sys.argv[1]

# Parse the report with BeautifulSoup. html.parser lowercases tag names,
# so lookups below use e.g. 'issuedetailitem' rather than 'issueDetailItem'.
with open(filepath, 'r') as report:
    soup = BeautifulSoup(report, features='html.parser')

# Find all the issues (findings) in the XML.
issues = soup.find_all('issue')

# Build the output path next to the input file, with '.html' appended.
out_path = os.path.join(os.path.dirname(os.path.abspath(filepath)),
                        os.path.basename(filepath) + '.html')

# The report is written as an HTML file (no longer a CSV).
with open(out_path, 'w') as outfile:
    print('Writing to: {}'.format(outfile.name))
    for line in make_table(table_headers, make_header_table(issues)):
        outfile.write(line)
    print('Complete')
