<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Explore Security &#187; pentesting</title>
	<atom:link href="http://www.exploresecurity.com/tag/pentesting/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.exploresecurity.com</link>
	<description>IT security tools, techniques and commentary</description>
	<lastBuildDate>Wed, 15 Jun 2022 09:21:02 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
		<item>
		<title>A Thick Spread of Logjam</title>
		<link>http://www.exploresecurity.com/a-thick-spread-of-logjam/</link>
		<comments>http://www.exploresecurity.com/a-thick-spread-of-logjam/#comments</comments>
		<pubDate>Mon, 20 Jul 2015 22:26:59 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[Logjam]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=404</guid>
		<description><![CDATA[This article dissects the recent Logjam paper from a pentesting viewpoint to reveal a number of different factors that affect the impact of the vulnerability. One of those factors includes the use of common primes and I&#8217;ve included a small change to OpenSSL that you can use to look out for this. Two for the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>This article dissects the recent Logjam paper from a pentesting viewpoint to reveal a number of different factors that affect the impact of the vulnerability. One of those factors includes the use of common primes and I&#8217;ve included a small change to OpenSSL that you can use to look out for this.<span id="more-404"></span></p>
<h3>Two for the price of one</h3>
<p>For me, the penny dropped when I realised that the <a href="https://weakdh.org/imperfect-forward-secrecy.pdf">Logjam paper</a> sets out <em>two</em> strategies. The first is a <em>cryptanalytic</em> attack that uses pre-computation to speed up the process of cracking a Diffie-Hellman (DH) key exchange. The second is a <em>protocol</em> attack that allows weaker export-grade versions of DH-based cipher suites to be selected, so long as they are supported by the server. A man-in-the-middle attacker who can make use of the second technique can downgrade vulnerable connections, which helps the first attack by forcing the use of weaker cryptography. But an attacker doesn&#8217;t <em>have</em> to rely on the downgrade trick for an attack on the security of the connection to succeed. It all depends on what the attacker wants to achieve and what position they are in.</p>
<h3>Attack scenarios</h3>
<p>If the attacker wants to modify the data in transit then the attack must be performed in real time. The attacker must be a man-in-the-middle and have access to sufficient computing resources to perform the necessary cryptanalysis in a short space of time. To ease the cryptanalysis, if the attacker can downgrade the connection to using a 512-bit prime, then so much the better. That&#8217;s only possible if the server supports export-grade DH cipher suites. In the opinion of the Logjam authors, any 512-bit prime should be considered vulnerable but a 768-bit, and certainly a 1024-bit, DH prime would require a serious amount of effort to attack.</p>
<p>In contrast, decrypting (but not modifying) the secure traffic from a connection that begins with a DH key exchange can be done <em>passively</em> in slower time: the attacker can capture the traffic and run the cryptanalysis offline. In this case, the attacker must contend with whatever strength of DH prime was selected during the TLS negotiation, which is unlikely to be 512-bit. For passive attacks, cipher suite preference is therefore more important: if the server prefers a common cipher suite that <em>isn&#8217;t</em> based on the standard DH key exchange (e.g. AES256-SHA) then a passive attack is very unlikely to succeed.</p>
<h3>Client-side fixes</h3>
<p>As with a number of previous TLS bugs, browsers have applied plasters to the sores to try to mitigate the impact. Although the downgrade attack exploited a weakness in the TLS <em>protocol</em> rather than an implementation flaw, refusing to accept 512-bit DH primes is a quick and effective solution. But taking that strategy all the way up to 1024-bit primes is dangerous in terms of user experience &#8211; users could start complaining that sites are suddenly inaccessible. So the success of client-side &#8220;patching&#8221; will depend on the vendors, the minimum size of DH prime their browser accepts and the user’s update regime.</p>
<h3>Common primes</h3>
<p>Another factor to consider is the prime itself. The Logjam paper noted that servers tend to re-use it &#8211; and, not only that, but the same primes are in circulation across different implementations. The pre-computation work for the cryptanalysis is based on a single DH prime so it&#8217;s in the attacker&#8217;s interest to do the number crunching for primes that are most widely used. A server that uses a common prime is thus more of a target. But what are these common primes? The Logjam paper itself makes explicit reference to two 512-bit primes. A number of larger primes can be found by inspecting the JavaScript behind the <a href="https://weakdh.org/sysadmin.html">server test page</a> on the Logjam site. You can test for a dozen common primes by adding a bit of code to the file <code>apps/s_cb.c</code> before compiling OpenSSL. Since version 1.0.2 the default output includes a line, when appropriate, beginning “Server Temp Key”, e.g.<br />
<a href="http://www.exploresecurity.com/wp-content/uploads/2015/07/server_temp_key.png"><img class="aligncenter size-full wp-image-405" alt="server_temp_key" src="http://www.exploresecurity.com/wp-content/uploads/2015/07/server_temp_key.png" width="387" height="97" /></a><br />
Insert the following lines after the line that outputs the bit length, which is <code>BIO_printf(out, "DH, %d bits\n", EVP_PKEY_bits(key));</code> (line 519 for the current version 1.0.2d):<br />
</p>
<pre>if (EVP_PKEY_bits(key) &gt; 1024)
   break;
const char *common[12]; // common primes from https://weakdh.org/imperfect-forward-secrecy.pdf, https://weakdh.org/docheck.js
common[0] = "9FDB8B8A004544F0045F1737D0BA2E0B274CDF1A9F588218FB435316A16E374171FD19D8D8F37C39BF863FD60E3E300680A3030C6E4C3757D08F70E6AA871033";
common[1] = "D4BCD52406F69B35994B88DE5DB89682C8157F62D8F33633EE5772F11F05AB22D6B5145B9F241E5ACC31FF090A4BC71148976F76795094E71E7903529F5A824B";
common[2] = "E9E642599D355F37C97FFD3567120B8E25C9CD43E927B3A9670FBEC5D890141922D2C3B3AD2480093799869D1E846AAB49FAB0AD26D2CE6A22219D470BCE7D777D4A21FBE9C270B57F607002F3CEF8393694CF45EE3688C11A8C56AB127A3DAF";
common[3] = "D67DE440CBBBDC1936D693D34AFD0AD50C84D239A45F520BB88174CB98BCE951849F912E639C72FB13B4B4D7177E16D55AC179BA420B2A29FE324A467A635E81FF5901377BEDDCFD33168A461AAD3B72DAE8860078045B07A7DBCA7874087D1510EA9FCC9DDD330507DD62DB88AEAA747DE0F4D6E2BD68B0E7393E0F24218EB3";
common[4] = "BBBC2DCAD84674907C43FCF580E9CFDBD958A3F568B42D4B08EED4EB0FB3504C6C030276E710800C5CCBBAA8922614C5BEECA565A5FDF1D287A2BC049BE6778060E91A92A757E3048F68B076F7D36CC8F29BA5DF81DC2CA725ECE66270CC9A5035D8CECEEF9EA0274A63AB1E58FAFD4988D0F65D146757DA071DF045CFE16B9B";
common[5] = "E6969D3D495BE32C7CF180C3BDD4798E91B7818251BB055E2A2064904A79A770FA15A259CBD523A6A6EF09C43048D5A22F971F3C20129B48000E6EDD061CBC053E371D794E5327DF611EBBBE1BAC9B5C6044CF023D76E05EEA9BAD991B13A63C974E9EF1839EB5DB125136F7262E56A8871538DFD823C6505085E21F0DD5C86B";
common[6] = "C9BBF5F774A8297B0F97CDDA3A3468C7117B6BF799A13D9F1F5DAC487B2241FE95EFB13C2855DFD2F898B3F99188E24EDF326DD68C76CC85537283512D46F1953129C693364D8C71202EABB3EBC85C1DF53907FBD0B7EB490AD0BC99289686800C46AB04BF7CDD9AD425E6FB25592EB6258A0655D75E93B2671746AE349E721B";
common[7] = "CD5C22FAEA0C53C39E602242C088FA0EA31586F472E9B04606AEDFB35F56C4948095F687B388575FA1700DB3D02253025A523AC76E9646F755A12338653AE071CB64F185591C34C6673FAC9B78DC4D71E53F3A5CCA6326F89C5400FBF8272A76367C630E234A905E4E558CDA968A46A136AD3088DD295F934EC36ADB5F69C3F3";
common[8] = "92402435C3A12E44D3730D8E78CADFA78E2F5B51A956BFF4DB8E56523E9695E63E32506CFEB912F2A77D22E71BB54C8680893B82AD1BCF337F7F7796D3FB968181D9BA1F7034ABFB1F97B3104CF3203F663E81990B7E090F6C4C5EE1A0E57EC174D3E84AD9E72E6AC7DA6AEA12DF297C131854FBF21AC4E879C23BBC60B4F753";
common[9] = "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF";
common[10] = "D6C094AD57F5374F68D58C7B096872D945CEE1F82664E0594421E1D5E3C8E98BC3F0A6AF8F92F19E3FEF9337B99B9C93A055D55A96E425734005A68ED47040FDF00A55936EBA4B93F64CBA1A004E4513611C9B217438A703A2060C2038D0CFAAFFBBA48FB9DAC4B2450DC58CB0320A0317E2A31B44A02787C657FB0C0CBEC11D";
common[11] = "B10B8F96A080E01DDE92DE5EAE5D54EC52C99FBCFB06A3C69A6A9DCA52D23B616073E28675A23D189838EF1E2EE652C013ECB4AEA906112324975C3CD49B83BFACCBDD7D90C4BD7098488E9C219A73724EFFD6FAE5644738FAA31A4FF55BCCC0A151AF5F0DC8B4BD45BF37DF365C1A65E68CFDA76D4DA708DF1FB2BC2E4A4371";

char *prime = BN_bn2hex(key-&gt;pkey.dh-&gt;p);
BIO_printf(out, "---&gt; DH prime: %s\n", prime);
int i;
for (i = 0; i &lt; 12; i++) {
    if (!strcmp(prime, common[i])) {
        BIO_puts(out, "---&gt; Warning - common prime!\n");
        break;
    }
}
OPENSSL_free(prime); /* BN_bn2hex allocates the string */</pre>
<p>So you should have something like:<br />
<a href="http://www.exploresecurity.com/wp-content/uploads/2015/07/openssl_code.png"><img class="aligncenter size-full wp-image-406" alt="openssl_code" src="http://www.exploresecurity.com/wp-content/uploads/2015/07/openssl_code.png" width="404" height="410" /></a><br />
Now you’ll get output such as:<br />
<a href="http://www.exploresecurity.com/wp-content/uploads/2015/07/common_prime.png"><img class="aligncenter size-full wp-image-407" alt="common_prime" src="http://www.exploresecurity.com/wp-content/uploads/2015/07/common_prime.png" width="375" height="101" /></a><br />
Remember that you may need to employ the <code>-cipher DH</code> parameter to force OpenSSL to use a DH-based cipher suite. If export-grade as well as stronger DH suites are supported then you&#8217;ll also have to use something like <code>-cipher EXP</code> on a second connection to ensure you test the commonality of both primes.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/a-thick-spread-of-logjam/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>When HTML Encoding Helped XSS</title>
		<link>http://www.exploresecurity.com/when-html-encoding-helped-xss/</link>
		<comments>http://www.exploresecurity.com/when-html-encoding-helped-xss/#comments</comments>
		<pubDate>Fri, 30 Jan 2015 10:19:17 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[session fixation]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=342</guid>
		<description><![CDATA[Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I previously explained and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I <a href="http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/">previously explained</a> and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a working XSS payload!<span id="more-342"></span></p>
<h3>The problem</h3>
<p>The application used a bespoke session management cookie. I&#8217;ll call it MYSESSIONID. On login, it wasn&#8217;t renewed. I couldn&#8217;t push a session cookie onto the victim in a classic session fixation attack. However, I had XSS in an unauthenticated page &#8211; but not the login page. The filtering in place used a combination of removal and encoding. Characters that were stripped out included:</p>
<p><code>+ ; ( ) ? &lt; &gt;</code></p>
<p>Characters that were allowed included:</p>
<p><code>" ' = [ ] / , .</code></p>
<p>So even though MYSESSIONID wasn&#8217;t protected with the <em>HttpOnly</em> flag, I just couldn&#8217;t construct a payload to steal it. Instead I looked to set one of my own. Here&#8217;s a breakdown of the attack:</p>
<h3>1. Get a valid cookie</h3>
<p>The application did not accept arbitrary session management cookies so the attacker sends a request to get a valid one. In this case, simply having no MYSESSIONID wasn&#8217;t enough, the cookie had to be present but an invalid value did the trick:</p>
<p><code>Cookie: MYSESSIONID=aaaaaaaaaaaaaaaaaaa:xx01</code></p>
<p>returned</p>
<p><code>Set-Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01; Path=/; Domain=.example.com</code></p>
<h3>2. The XSS</h3>
<p>The malicious link looked something like this (the highlighted bits are explained below):</p>
<p><code>https://www.example.com/app/folder/page?vuln=foo"<span style="color: red;">%0a</span>document.cookie%3d"MYSESSIONID%3dNDnQrZ6JsMHyJTBCw8n:xx01<span style="color: blue;">:%0d</span><span style="color: red;">path%3d/app/</span></code></p>
<p>When clicked, the XSS flaw wrote the following to the return page inside a JavaScript code block:</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/";</code></p>
<p>The <span style="color: red;">%0a</span> at the front of the XSS payload was used to start a new line and this was sufficient to act as a statement terminator after <code>var a = "foo"</code> (semi-colons were being stripped). But in order to inject a <code>path</code> attribute (discussed below) I did need a semi-colon in the cookie string. By running every character through a Burp Intruder attack, I saw which characters were allowed, which were stripped and which were returned encoded. By inserting <span style="color: blue;">:%0d</span> into the XSS payload, :&amp;#13; was returned &#8211; yes, %0d was encoded but %0a (used above) came back fine! Being inside a string inside a JavaScript block, &amp;#13; wasn&#8217;t seen as an HTML entity by the browser and thus wasn&#8217;t interpreted. This provided the semi-colon needed to create a <code>path</code> attribute.</p>
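<p>To make the trick concrete, here&#8217;s a small Python sketch &#8211; a hypothetical reconstruction of the filter behaviour described above, not the application&#8217;s actual code &#8211; showing how the defensive encoding itself hands back the semi-colon:</p>

```python
# Hypothetical reconstruction of the filter described above (NOT the app's
# actual code): some characters are stripped outright, while a carriage
# return (%0d) is HTML-encoded to "&#13;" - whose trailing ";" survives
# inside the JavaScript string and acts as the cookie-attribute separator.
from urllib.parse import unquote

STRIPPED = set('+;()?<>')

def apply_filter(payload):
    decoded = unquote(payload)
    kept = ''.join(c for c in decoded if c not in STRIPPED)
    return kept.replace('\r', '&#13;')  # the defensive encoding supplies a ";"

out = apply_filter('foo"%0adocument.cookie%3d"MYSESSIONID%3dvalue:%0dpath%3d/app/')
print(out)
```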
<p>The colon at the front was used because it looked like the session cookie was delimited in that way. That &#8220;xx01&#8221; might refer, for example, to an internal server for load-balancing. Anyway, whatever it did, the application tolerated the unusual suffix to the session cookie. So that explains the :%0d appended to the cookie value in the XSS payload. Now for the <code><span style="color: red;">path%3d/app/</span></code>&#8230;</p>
<h3>3. The victim logs in</h3>
<p>So, at this point, the attacker has set the MYSESSIONID cookie on the victim to be <code>NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13</code> via a reflected XSS attack. Now the victim goes to the login page at https://www.example.com/app/login or is bounced there by navigating to a part of the site that enforces authentication. At login two MYSESSIONID cookies are passed up. This is because one had been set earlier in a <code>Set-Cookie</code> response header the first time the victim hit the site, even if that was by visiting the XSS&#8217;ed page. The genuine MYSESSIONID has a <code>path</code> of / and a <code>domain</code> of .example.com. If I had set a cookie by XSS with no attributes my cookie would have had a <code>path</code> of /app/folder/ (to match the path of the page which set the cookie) and a <code>domain</code> of www.example.com (to match the domain of said page). This would mean my cookie would never be sent up to /app/login for authentication, hence the need to set a <code>path</code> as part of the XSS.</p>
<p>Furthermore, when two MYSESSIONID values were sent up, the application took the first value so I had to make sure my cookie was first. By setting a <code>path</code> of /app/, it trumped the real MYSESSIONID for having a better path match to /app/login. Thus it was listed first in the POST request with the credentials and became authenticated:</p>
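<p>The ordering rule being exploited can be sketched roughly in Python (per RFC 6265 a browser should list cookies with longer matching paths first &#8211; this is a simplification and real browsers differ in the details):</p>

```python
# Sketch of the ordering rule exploited above: per RFC 6265 a browser should
# list cookies with longer (more specific) matching paths first, so a cookie
# scoped to /app/ precedes one scoped to / on a request to /app/login.
def cookie_header(cookies, request_path):
    # cookies: list of (path, value) pairs for the same cookie name
    matching = [(path, value) for path, value in cookies
                if request_path.startswith(path)]
    matching.sort(key=lambda pv: len(pv[0]), reverse=True)
    return '; '.join('MYSESSIONID=' + value for path, value in matching)

cookies = [('/', '4GRc4jiKNeQIfsqh2:xx01'),         # genuine cookie
           ('/app/', 'NDnQrZ6JsMHyJTBCw8n:xx01')]   # attacker's cookie
print(cookie_header(cookies, '/app/login'))
```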
<p><code>Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13; MYSESSIONID=4GRc4jiKNeQIfsqh2:xx01</code></p>
<p>In contrast, the <code>domain</code> of a cookie does not govern precedence in a standardised way; it varies between browsers. From memory I think my cookie (with a more specific domain match) was sent up first by IE but second by Chrome and Firefox. It&#8217;s not something you want to rely on. Neither could I overwrite the cookie because for that to happen the name, path and domain must match. That would mean having to change both attributes from their defaults but in this case I could only change one. This is because I&#8217;d need a second semi-colon to set a second attribute and in doing so, using the encoding trick above, the first attribute would be spoilt, e.g. I&#8217;d get</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/<span style="color: red;">&amp;#13</span>;domain=.example.com";</code></p>
<p>Developing this proof-of-concept for this specific injection point was quite fiddly and took some persistence but it was worth it. For all of their filtering &#8211; and because they did not change the session cookie after authentication &#8211; this was a nice practical attack using an unauthenticated XSS. One take-away thought then: be sure to probe the XSS defences in full because you never know what might come back and how it could be of help!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/when-html-encoding-helped-xss/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SS-Hell: the Devil is in the Details</title>
		<link>http://www.exploresecurity.com/ss-hell-the-devil-is-in-the-details/</link>
		<comments>http://www.exploresecurity.com/ss-hell-the-devil-is-in-the-details/#comments</comments>
		<pubDate>Thu, 06 Nov 2014 12:58:05 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[Whitepaper]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SSL]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=474</guid>
		<description><![CDATA[Penetration test reports commonly contain mention of vulnerabilities in SSL/TLS (hereafter referred to as just SSL). In many cases, this is due to system administrators not understanding the details of these services&#8217; configuration and assuming that simply using SSL provides security. The issues identified during penetration tests are usually low in severity, as an average [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Penetration test reports commonly contain mention of vulnerabilities in SSL/TLS (hereafter referred to as just SSL). In many cases, this is due to system administrators not understanding the details of these services&#8217; configuration and assuming that simply <em>using</em> SSL provides security. The issues identified during penetration tests are usually low in severity, as an average attacker would find them difficult to exploit, but they should be taken seriously to ensure that data in transit is properly secured. If something&#8217;s worth doing, it&#8217;s worth doing well. This <a href="https://research.nccgroup.com/wp-content/uploads/2020/07/ss-hell-v11-final.pdf">whitepaper</a>, co-written with <a href="https://twitter.com/wilgeralexander">Will Alexander</a>, discusses how organisations can avoid SSL issues commonly found during penetration tests, ensure that data in transit is properly secured and ultimately instil in users a sense of confidence that their information is adequately protected.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/ss-hell-the-devil-is-in-the-details/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Thoughts on Testing for POODLE</title>
		<link>http://www.exploresecurity.com/thoughts-on-testing-for-poodle/</link>
		<comments>http://www.exploresecurity.com/thoughts-on-testing-for-poodle/#comments</comments>
		<pubDate>Sat, 01 Nov 2014 00:09:21 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[POODLE]]></category>
		<category><![CDATA[SSLv3]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=311</guid>
		<description><![CDATA[I recently did an internal presentation on POODLE &#8211; what the flaw is and how to test for it &#8211; and a version of the slides can be found here. Obviously much has been written about the vulnerability, its mitigations and what the future holds. What follows expands on the testing aspect of the presentation, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I recently did an internal presentation on POODLE &#8211; what the flaw is and how to test for it &#8211; and a version of the slides can be found <a href='http://www.slideshare.net/exploresecurity/ss-lv3-and-poodle'>here</a>. Obviously much <a href='https://www.imperialviolet.org/2014/10/14/poodle.html'>has</a> <a href='http://blog.cryptographyengineering.com/2014/10/attack-of-week-poodle.html'>been</a> <a href='http://blogs.opera.com/security/2014/10/security-changes-opera-25-poodle-attacks/'>written</a> about the vulnerability, its mitigations and what the future holds. What follows expands on the testing aspect of the presentation, with a few pointers on manual checks if you feel you need to verify or clarify &#8211; and possibly even add to &#8211; what the tools are telling you.<span id="more-311"></span></p>
<h3>SSLv3 support with block ciphers (in CBC mode) supported</h3>
<p>All SSL/TLS tools check for SSLv3 support. You can do this manually with:</p>
<p><code>openssl s_client -ssl3 -connect &lt;host&gt;:443</code></p>
<p>This confirms SSLv3 support but obviously it only reports 1 cipher suite. This is where the tools come in. However, remember that POODLE only affects block ciphers in cipher block chaining (CBC) mode (which I&#8217;ll just abbreviate to &#8220;block ciphers&#8221; now, as I believe all the block ciphers that can run under SSLv3 operate in CBC mode). So review the list of supported cipher suites: if the server only supports RC4 ciphers then don&#8217;t report POODLE as an issue (instead report SSLv3, which is still old and creaky, and RC4!).</p>
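<p>As a rough sketch of that triage logic (illustrative only, not a scanner; suite names in OpenSSL notation):</p>

```python
# Rough triage of an SSLv3 scan result (suite names in OpenSSL notation).
# All block ciphers usable under SSLv3 run in CBC mode, so anything that is
# neither RC4 nor NULL is treated as a CBC suite here.
def poodle_findings(sslv3_suites):
    findings = ['SSLv3 supported']
    if any('RC4' in s for s in sslv3_suites):
        findings.append('RC4 supported')
    if any('RC4' not in s and 'NULL' not in s for s in sslv3_suites):
        findings.append('POODLE: CBC suite available under SSLv3')
    return findings

print(poodle_findings(['RC4-SHA', 'RC4-MD5']))       # no POODLE finding
print(poodle_findings(['DES-CBC3-SHA', 'RC4-SHA']))  # POODLE applies
```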
<h3>Server preference</h3>
<p>Even if the server supports block ciphers, it may <em>prefer</em> RC4-based ciphers so the likelihood of exploitation is going to be negligible. I recently wrote up <a href='http://www.exploresecurity.com/testing-for-cipher-suite-preference/'>what to do</a> if you find that your tools disagree over which cipher suite is preferred.</p>
<h3>TLS_FALLBACK_SCSV</h3>
<p>I also recently <a href='http://www.exploresecurity.com/poodle-and-the-tls_fallback_scsv-remedy/'>posted</a> in detail about how the TLS_FALLBACK_SCSV remediation worked. In short it&#8217;s a signal to the server from the client that it is connecting with a lower protocol version than it supports. If the server supports something better, then that should have been negotiated during the earlier connection attempts, so the server can abort the connection as being suspicious.</p>
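<p>A simplified Python sketch of that server-side decision (the version numbers and the second suite value are illustrative; 0x5600 is the genuine TLS_FALLBACK_SCSV signalling value from RFC 7507):</p>

```python
# Simplified sketch of the server-side TLS_FALLBACK_SCSV check (RFC 7507).
# Protocol versions are illustrative integers; 0x5600 is the real SCSV value.
TLS_FALLBACK_SCSV = 0x5600

def server_response(client_version, offered_suites, server_max_version):
    if TLS_FALLBACK_SCSV in offered_suites and client_version < server_max_version:
        # The client says it fell back, yet we support something better:
        # treat the connection as a possible downgrade attack.
        return 'inappropriate_fallback alert'
    return 'continue handshake'

print(server_response(3, [TLS_FALLBACK_SCSV, 0x000A], server_max_version=6))
print(server_response(6, [0x000A], server_max_version=6))
```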
<p>With the release of OpenSSL v1.0.1j it&#8217;s easy to test for TLS_FALLBACK_SCSV support:</p>
<p><code>openssl s_client -ssl3 -fallback_scsv -connect &lt;host&gt;:443</code></p>
<p>This is telling the server that I&#8217;d like to connect using SSLv3 &#8211; but grudgingly. I&#8217;m using <code>-ssl3</code> in the context of POODLE but TLS_FALLBACK_SCSV offers wider protection than this (checking support for it will continue to be worthwhile long after we&#8217;ve forgotten about POODLE). Below you can see the fake cipher suite value advertising the fallback (which Wireshark couldn&#8217;t decode into something meaningful as it didn&#8217;t recognise the new cipher suite value 0x5600 at the time):</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/tls_fallback_scsv.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/tls_fallback_scsv.png" alt="tls_fallback_scsv" width="712" height="98" class="aligncenter size-full wp-image-313" /></a></p>
<p>If the OpenSSL connection succeeds as usual (as shown below &#8211; a cipher suite has been chosen) then the server doesn&#8217;t support TLS_FALLBACK_SCSV.</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/openssl_connects.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/openssl_connects.png" alt="openssl_connects" width="590" height="99" class="aligncenter size-full wp-image-314" /></a></p>
<p>If the connection fails with the new <code>inappropriate_fallback</code> alert then the server does support TLS_FALLBACK_SCSV:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/tls_fallback_alert.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/tls_fallback_alert.png" alt="tls_fallback_alert" width="1182" height="78" class="aligncenter size-full wp-image-315" /></a></p>
<p>Enabling TLS_FALLBACK_SCSV is all very well but it does depend on client support too &#8211; so if the server has SSLv3 enabled with block ciphers supported (and preferred) then it&#8217;s not out of the woods. A few browsers do already support it &#8211; Chrome 33 (Feb 2014), Firefox 35 (Jan 2015), Opera 25 (Oct 2014) &#8211; so it&#8217;s better than nothing, and of course support for it among browsers will only improve. Acknowledging TLS_FALLBACK_SCSV support is therefore worthwhile &#8211; both today and in the future. A client may even feel aggrieved if they&#8217;ve gone to the trouble of enabling TLS_FALLBACK_SCSV but get no credit for it in their pentest report!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/thoughts-on-testing-for-poodle/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Testing for Cipher Suite Preference</title>
		<link>http://www.exploresecurity.com/testing-for-cipher-suite-preference/</link>
		<comments>http://www.exploresecurity.com/testing-for-cipher-suite-preference/#comments</comments>
		<pubDate>Fri, 31 Oct 2014 22:00:01 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[POODLE]]></category>
		<category><![CDATA[SSL]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=296</guid>
		<description><![CDATA[It&#8217;s often important to know which SSL/TLS cipher suite is preferred by a server to alter the risk rating of a particular issue. For POODLE, if the server prefers RC4 ciphers over SSLv3 connections then it&#8217;s very unlikely that a connection will be vulnerable to POODLE. Similarly, if a server prefers block ciphers then reporting [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>It&#8217;s often important to know which SSL/TLS cipher suite is preferred by a server to alter the risk rating of a particular issue. For <a href='http://www.exploresecurity.com/thoughts-on-testing-for-poodle/'>POODLE</a>, if the server prefers RC4 ciphers over SSLv3 connections then it&#8217;s very unlikely that a connection will be vulnerable to POODLE. Similarly, if a server prefers block ciphers then reporting RC4 support should be appropriately adjusted. Occasionally tools conflict over which cipher suite is preferred so I thought I&#8217;d write up how to resolve this manually in the spirit of the <a href='http://www.exploresecurity.com/ssltls-checklist-for-pentesters/'>SSL/TLS manual cheatsheet</a>.<span id="more-296"></span></p>
<h3>How is a cipher suite chosen?</h3>
<p>Quick overview: the connection starts with a Client Hello in which the client advertises which cipher suites it supports in order of preference (most preferred first). This list will be tailored according to any local configuration, as well as to the SSL/TLS protocol version the client is hoping to use, which is also advertised in the Client Hello:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/client_hello.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/client_hello.png" alt="client_hello" width="527" height="289" class="aligncenter size-full wp-image-300" /></a></p>
<p>The protocol version is the highest the client supports &#8211; unless the browser has gone down the fallback route, which is the mechanism <a href='http://www.exploresecurity.com/poodle-and-the-tls_fallback_scsv-remedy/'>abused by POODLE</a> to make the SSLv3 attack more practical. Cipher suites can vary with protocol version simply because older protocols can&#8217;t always meet the needs of newer cipher suites. For example, only TLSv1.2 supports cipher suites that use SHA-256 for message integrity.</p>
<p>In receipt of the Client Hello, the server now has two options: it can either (a) opt for the client&#8217;s most preferred cipher suite that it too supports, or (b) ignore the client&#8217;s preference and opt for the cipher suite nearest the top of its <em>own</em> list that the client supports. For example, say the client has sent up a list of cipher suites which we&#8217;ll just call 1,2,3,4,5,6,7 and the server supports 8,3,4,2,6. In the case of (a) the server&#8217;s order is unimportant and it chooses 2; in the case of (b) the server chooses 3. The choice the server makes is returned in the Server Hello message:</p>
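<p>The two strategies can be sketched in a few lines of Python (illustrative only &#8211; cipher suites are just numbers, as in the example above, not a real negotiation):</p>

```python
# Sketch of the two selection strategies described above (illustrative,
# not OpenSSL's actual implementation). Cipher suites are just numbers.
def choose_cipher(client_prefs, server_prefs, honour_server_order):
    if honour_server_order:
        # (b) first suite in the server's list that the client also offered
        return next(s for s in server_prefs if s in client_prefs)
    # (a) first suite in the client's list that the server also supports
    return next(c for c in client_prefs if c in server_prefs)

client = [1, 2, 3, 4, 5, 6, 7]   # most preferred first
server = [8, 3, 4, 2, 6]

print(choose_cipher(client, server, honour_server_order=False))  # case (a): 2
print(choose_cipher(client, server, honour_server_order=True))   # case (b): 3
```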
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/server_hello.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/server_hello.png" alt="server_hello" width="490" height="147" class="aligncenter size-full wp-image-302" /></a></p>
<p>Something to note in the above example is that, in the case of the server having a preference, you would never find out that cipher suite 8 is in fact the preferred choice because it isn&#8217;t supported by the client and thus it&#8217;s never offered in the Client Hello. Server preference is thus not only dictated by the server: it depends on what the client knows too.</p>
<h3>Conflicting results</h3>
<p>On my last test I had a conflict between SSLyze and SSLscan over which cipher suite was preferred over SSLv3. SSLyze thought it was RC4-SHA (I&#8217;m using the OpenSSL notation here)&#8230;</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/sslyze.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/sslyze.png" alt="sslyze" width="1252" height="145" class="aligncenter size-full wp-image-297" /></a></p>
<p>&#8230;whereas SSLscan went for DES-CBC3-SHA:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/sslscan.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/sslscan-300x45.png" alt="sslscan" width="300" height="45" class="aligncenter size-medium wp-image-298" /></a></p>
<h3>Manually testing for preference</h3>
<p>Resolving the conflict was simple. I ran:</p>
<p><code>openssl s_client -ssl3 -connect &lt;host&gt;:443</code></p>
<p>OpenSSL reported that DES-CBC3-SHA had been chosen. Just to be sure &#8211; which I explain below &#8211; I let the two cipher suites in question compete with one another using the <code>-cipher</code> switch, which allows you to put specific cipher suites in the Client Hello. OpenSSL orders them exactly how you list them according to the scheme set out in <code>man ciphers</code>. So I ran:</p>
<p><code>openssl s_client -ssl3 -cipher DES-CBC3-SHA:RC4-SHA -connect &lt;host&gt;:443</code></p>
<p>and then switched the order of the cipher suites:</p>
<p><code>openssl s_client -ssl3 -cipher RC4-SHA:DES-CBC3-SHA -connect &lt;host&gt;:443</code></p>
<p>In both cases DES-CBC3-SHA was chosen so I was confident that SSLscan was right.</p>
<h3>Why did SSLyze get it wrong this time?</h3>
<p>Up to now I had tried SSLyze versions 0.9 and 1.0dev and both had reported RC4-SHA as the preferred cipher suite. I then tried an earlier 0.6beta version and found it correctly reported DES-CBC3-SHA. Rather than delve into the code I first took the easy option and fired up Wireshark while running one of the later versions of SSLyze. When enumerating the supported cipher suites, I could see that DES-CBC3-SHA was tested individually; however, when it came to checking for preference, DES-CBC3-SHA was left out of the list in the Client Hello. Obviously the server couldn&#8217;t choose it in this case, hence the preference was misreported. I <a href='https://github.com/nabla-c0d3/sslyze/issues/10'>reported</a> this as a bug and Alban Diquet explained that:</p>
<blockquote><p>The reason why DES-CBC3-SHA isn&#8217;t sent within the preference test is that specific servers will not reply at all if the client hello is larger than 255 bytes (due to a bug in a specific brand of load balancers). To reduce the size of the hello, I had to disable some stuff including specific cipher suites.</p></blockquote>
<p>In this case the server only supported 3 cipher suites over SSLv3 so this misidentification could have been avoided. And this got me thinking&#8230;</p>
<h3>Algorithm for testing preference</h3>
<p>For each supported SSL/TLS protocol version, this is my version 0.1 of a method a tool could use to work out cipher suite preference:</p>
<ol>
<li>Determine which cipher suites are supported individually (i.e. repeatedly send a Client Hello with just one cipher suite and see if it&#8217;s accepted).</li>
<li>Once you know which suites are supported, send them all up in one Client Hello and see which one is picked. If you&#8217;re worried about the buggy load balancers mentioned above then use a subset for now.</li>
<li>If the chosen cipher suite is the one that was at the top of the list then there are two alternative explanations: either (a) the server picked the client&#8217;s preferred suite as it has no preference of its own, or (b) the server really does prefer that cipher suite and it just happened to be at the top. (This is why I ran more than one test above.) So repeat the test in step 2, this time putting a different cipher suite at the top of the list. If the same cipher suite is chosen then it&#8217;s a case of (b) and the server definitely has a preference; otherwise, the first cipher suite should be chosen and it&#8217;s a case of (a) where the server is happy to be guided by the client&#8217;s preference<sup>1</sup>.</li>
<li>If the cipher suite list has been cut short to appease buggy load balancers, repeat step 2 with the next set of cipher suites. If a preference has been expressed so far, that cipher suite should be included with the next set to allow it to compete.</li>
<li>If a preference has been found and you really wanted to go the whole hog, you could determine the order in full by starting again at step 2, missing out the cipher suite previously identified as preferred.</li>
</ol>
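<p>Steps 2 and 3 above can be sketched in Python, assuming a hypothetical <code>probe_handshake</code> callback that sends a Client Hello with the given suite order and returns the server&#8217;s selection (a real tool would perform a TLS handshake here):</p>

```python
def detect_preference(supported, probe_handshake):
    """Return (server_has_preference, preferred_suite_or_None).

    probe_handshake(ordered_suites) is a hypothetical helper: it offers the
    suites in the given order and returns the one the server selects.
    """
    if len(supported) < 2:
        return (False, None)
    first = probe_handshake(list(supported))
    # Step 3: re-probe with the previous winner demoted from the top
    reordered = [s for s in supported if s != first] + [first]
    second = probe_handshake(reordered)
    if second == first:
        return (True, first)   # case (b): the server insisted on its choice
    return (False, None)       # case (a): the server followed client order
```

<p>Simulating a server with its own fixed order (say C, A, B) yields <code>(True, 'C')</code>, while one that simply echoes the client&#8217;s top choice yields <code>(False, None)</code>.</p>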
<p>I put some of this to Alban Diquet (as part of the bug report) and he replied &#8220;yes I thought of adding the exact check you described but that&#8217;s pretty much at the bottom of my TODO list&#8221;. I think he was referring to step 3 but, anyway, if you ever have conflicting output from your tools over cipher suite preference, hopefully this posting will help you to resolve the issue.</p>
<p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;</p>
<p><sup>1</sup> Actually, there is a possibility of &#8220;no preference&#8221; being misreported if you consider a server that supports cipher suite &#8220;wildcards&#8221; in such a way that there is no preference within a set of cipher suites that match a wildcard. I don&#8217;t think any popular implementation features this (hence the footnote) but for the sake of completeness imagine a server that prefers AES-* then RC4-*. The test tool sends up AES-SHA, AES-MD5, RC4-SHA, RC4-MD5 and AES-SHA is chosen. As per step 3, the tool then sends up AES-MD5, RC4-SHA, RC4-MD5, AES-SHA. This time AES-MD5 is chosen, giving the illusion of no server preference but in fact the server does have a preference, it&#8217;s just by groups. To cover this, if no server preference has been detected after step 3, repeat step 2 rotating the cipher suite at the top of the list each time; if at any point the cipher suite selected is <em>not</em> the first on the list then the server <em>does</em> have a preference. Admittedly this could add a fair bit of overhead!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/testing-for-cipher-suite-preference/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Testing for TCP/IP Timestamps</title>
		<link>http://www.exploresecurity.com/testing-for-tcp-ip-timestamps/</link>
		<comments>http://www.exploresecurity.com/testing-for-tcp-ip-timestamps/#comments</comments>
		<pubDate>Wed, 08 Oct 2014 22:56:29 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[TCP timestamps]]></category>
		<category><![CDATA[TCP/IP timestamps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=261</guid>
		<description><![CDATA[It always used to be a stock joke in my old workplace that if you were having a tough time finding issues in a pentest then you could always rely on &#8220;TCP/IP timestamps&#8221;. Recently I did a re-test (based on another company&#8217;s report) that included this issue and found that it&#8217;s easy for this to [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>It always used to be a stock joke in my old workplace that if you were having a tough time finding issues in a pentest then you could always rely on &#8220;TCP/IP timestamps&#8221;. Recently I did a re-test (based on another company&#8217;s report) that included this issue and found that it&#8217;s easy for this to be a false positive. I thought I&#8217;d write up this finding &#8211; as much for the journey I took through Nessus, Nmap, hping and Wireshark as for the result itself.<span id="more-261"></span></p>
<h3>What are these timestamps anyway?</h3>
<p>Timestamps are an optional addition to the TCP layer to provide information on round-trip times and to help with sequencing &#8211; see <a href='http://www.ietf.org/rfc/rfc1323.txt'>RFC 1323</a>. The side-effect of supporting TCP timestamps (I&#8217;ll drop the IP now) is that in certain situations the uptime of the server can be estimated. The most likely impact of this is that an attacker could try to determine the host’s patch status as certain updates require a reboot.</p>
<p>The report I was re-testing had TCP timestamps as an issue &#8211; and you could tell it was a straight Nessus finding. I decided that, if timestamps were indeed enabled, I would at least try to find the uptime and make some comment on it.</p>
<h3>Testing with Nessus and Nmap</h3>
<p>I ran a Nessus scan and sure enough it reported &#8220;TCP/IP Timestamps Supported&#8221; but it didn&#8217;t state any estimated uptime. I then ran Nmap with -O and -v but it didn&#8217;t report anything either. In Nmap&#8217;s <a href='http://nmap.org/book/osdetect-usage.html'>help for OS detection</a> it states that:</p>
<blockquote><p>The uptime guess is labeled a &#8220;guess&#8221; because various factors can make it completely inaccurate. Some operating systems do not start the timestamp counter at zero, but initialize it with a random value, making extrapolation to zero meaningless. Even on systems using a simple counter starting at zero, the counter eventually overflows and wraps around. With a 1,000 Hz counter increment rate, the counter resets to zero roughly every 50 days. So a host that has been up for 102 days will appear to have been up only two days. Even with these caveats, the uptime guess is accurate much of the time for most operating systems, so it is <b>printed when available, but only in verbose mode. The uptime guess is omitted if the target gives zeros or no timestamp options in its SYN/ACK packets, or if it does not reply at all. The line is also omitted if Nmap cannot discern the timestamp increment rate or it seems suspicious (like a 30-year uptime).</b></p></blockquote>
<p>I&#8217;ve bolded the important bits: firstly, remember to add -v to display any timestamp information; secondly, there are a number of reasons why Nmap might have omitted it. I came across this <a href='http://maciejkola.wordpress.com/security-tips-2/why-uptime-can-be-dangerous/'>article</a> which talks about adding the -d switch to debug the time calculation. The output from this article was:</p>
<p><code>root@tester# nmap -d -v -O victim.com<br />
Starting Nmap 5.51 ( http://nmap.org ) at 2012-09-28 10:07 EDT<br />
Initiating OS detection (try #1) against 1.2.3.4<br />
OS detection timingRatio() == (1348841228.595 – 1348841228.095) * 1000 / 500 == 1.000<br />
Retrying OS detection (try #2) against 1.2.3.4<br />
OS detection timingRatio() == (1348841231.064 – 1348841230.563) * 1000 / 500 == 1.002<br />
...[and more of the same]...</code></p>
<p>In fact this took me down a dead-end. After some head scratching I realised that those timingRatio() lines don&#8217;t relate to the uptime calculation. Timestamp values are whole numbers and there&#8217;s no time unit attached to them. According to the RFC the &#8220;timestamp clock&#8230;must be at least approximately proportional to real time&#8221; and it recommends &#8220;a timestamp clock frequency in the range 1 ms to 1 sec per tick&#8221;. In the output above 1348841228.595, for example, is in fact an Epoch time (seconds since 1 Jan 1970) and is equal to Fri, 28 Sep 2012 14:07:08 GMT. Since GMT=EDT+4 you can see this time is exactly when the scan was run. A comment in Nmap&#8217;s source code (osscan2.cc) for the function timingRatio() explains the debug line: &#8220;Compute the ratio of amount of time taken between sending 1st TSEQ  probe and 1st ICMP probe compared to the amount of time it should have taken. Ratios far from 1 can cause bogus results&#8221;. So although Nmap&#8217;s online help was giving reasons why the uptime wasn&#8217;t being reported, debugging wasn&#8217;t telling me why.</p>
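<p>As a sanity check on that claim, converting one of the <code>timingRatio()</code> operands (a value taken from the debug output above) shows it really is a Unix epoch time rather than a TCP timestamp tick count:</p>

```python
from datetime import datetime, timezone

# 1348841228.595 is one of the timingRatio() operands from the nmap -d output
t = datetime.fromtimestamp(1348841228.595, tz=timezone.utc)
print(t.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2012-09-28 14:07:08 UTC
```

<p>That lands exactly on the scan&#8217;s start time of 10:07 EDT, confirming these lines have nothing to do with the target&#8217;s timestamp counter.</p>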
<h3>Manual testing: hping and Wireshark</h3>
<p>One thing I did get from the article above was how to fire off a packet to elicit a TCP timestamp response:</p>
<p><code>hping3 www.example.com -p 80 -S --tcp-timestamp -c 1<br />
&nbsp;&nbsp;hping3 &nbsp;&nbsp;a network packet generator and analyser<br />
&nbsp;&nbsp;-p &nbsp;&nbsp;an open port on the target<br />
&nbsp;&nbsp;-S &nbsp;&nbsp;set the SYN flag<br />
&nbsp;&nbsp;--tcp-timestamp &nbsp;&nbsp;add the TCP timestamp option<br />
&nbsp;&nbsp;-c 1 &nbsp;&nbsp;stop after receiving 1 response packet</code></p>
<p>Top tip: if you&#8217;re using hping in a VM, make sure the network interface isn&#8217;t set to NAT on a host that has TCP timestamps disabled (like my Win7 box) &#8211; it will waste at least 20 minutes of your life! Anyway, the response to my hping included:</p>
<p><code>TCP timestamp: tcpts=0</code></p>
<p>Now I had a theory. The server was responding with a TCP timestamp value, which made Nessus report it, but Nmap didn&#8217;t report the uptime because the value was 0. Of course there is a chance that the timestamp <em>could</em> be 0 but certainly not for two consecutive replies. If you do get a non-zero value, by the way, you can run 2 hpings separated by &#8220;sleep&#8221; to calculate the tick rate and then estimate the uptime:</p>
<p><code>hping3 www.example.com -p 80 -S --tcp-timestamp -c 1; sleep 5; hping3 www.example.com -p 80 -S --tcp-timestamp -c 1</code></p>
<p>Taking <code>tcpts[0]</code> to be the first timestamp reply and <code>tcpts[1]</code> to be the second, the uptime in seconds is:</p>
<p><code>tcpts[0] / ( ( tcpts[1] - tcpts[0] ) / 5 )</code></p>
<p>Or in words: subtract the first timestamp value from the second, divide by 5 and then divide that result into the first timestamp.</p>
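<p>Here is that formula as a small function, with illustrative numbers: a host ticking at 100 Hz whose two replies, taken 5 seconds apart, were 360000 and 360500 (the values are made up for the example):</p>

```python
def uptime_seconds(tcpts, interval=5):
    # Tick rate = ticks elapsed / seconds elapsed between the two probes
    tick_rate = (tcpts[1] - tcpts[0]) / interval
    # Uptime = total ticks so far / ticks per second
    return tcpts[0] / tick_rate

print(uptime_seconds([360000, 360500]))  # 3600.0 -> up for about an hour
```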
<p>For completeness I thought I&#8217;d find the packets in Wireshark:<br />
<a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/null_timestamp.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/null_timestamp.png" alt="null_timestamp" width="292" height="352" class="aligncenter size-full wp-image-267" /></a><br />
The timestamp option in a TCP packet contains two values: <code>TSval</code> (the source&#8217;s time) and <code>TSecr</code> (an echo of the time the destination last sent). The best filter I found to look for positive timestamps was <code>ip.src == &lt;IP_of_target&gt; &#038;&#038; tcp.options.timestamp.tsval &#038;&#038; <span style="white-space: nowrap;">!(tcp.options.timestamp.tsval == 0)</span></code>. The second part ensures that a <code>TSval</code> value is there since the third will return TRUE if the field isn&#8217;t there as well as when it&#8217;s non-zero. In this case, the filter returned no packets, as expected.</p>
<h3>Back to Nessus</h3>
<p>The following is a compressed extract of the relevant code from the Nessus plugin <em>tcp_timestamps.nasl</em> (version 1.19, latest at time of writing):</p>
<p><code>function test(seq) { ...<br />
tsval = tcp_extract_timestamp(pkt["options"]);<br />
if (isnull(tsval)) return NULL;<br />
return make_list(ms, tsval);<br />
}<br />
...<br />
v1 = test(seq:1);<br />
sleep(1);	# Bigger sleep values make the test more precise<br />
v2 = test(seq: 2);<br />
dseq = v2[1] - v1[1];<br />
# Disable the uptime computation (unreliable)<br />
if ( TRUE || dseq == 0 || v2[1] < 0)<br />
{<br />
  security_note();<br />
}<br />
[else calculate and print uptime]</code></p>
<p>So Nessus reported the issue just because a <code>TSval</code> field was returned (it wasn&#8217;t NULL), hence the false positive. For Nessus, an individual timestamp of 0 isn&#8217;t a concern (which you could argue is justified) but if the difference between two timestamps is 0 then no uptime is computed. However, you can see that the <code>TRUE</code> that precedes this check effectively disables the uptime calculation completely, as the comment notes.</p>
<h3>Conclusion</h3>
<p>If Nessus reports TCP timestamps, it might not be a valid finding &#8211; and even if it is you won&#8217;t get an uptime; if Nmap doesn&#8217;t report it, there&#8217;s probably a good reason. To be absolutely sure, hping can be used for a definitive test (along with Wireshark if you like to see your packets raw). My final word has to be this: I can&#8217;t believe I&#8217;ve spent this much time on TCP timestamps.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/testing-for-tcp-ip-timestamps/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>The Small Print for OpenSSL legacy_renegotiation</title>
		<link>http://www.exploresecurity.com/the-small-print-for-openssl-legacy_renegotiation/</link>
		<comments>http://www.exploresecurity.com/the-small-print-for-openssl-legacy_renegotiation/#comments</comments>
		<pubDate>Mon, 29 Sep 2014 23:26:37 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SSL]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=245</guid>
		<description><![CDATA[The other day my attention was drawn to a switch in OpenSSL called -legacy_rengotation. I pulled up the built-in help for s_client and, sure enough, there it was. So I trawled back through the release notes and it looked to have been there since version 0.9.8m. I couldn&#8217;t believe that I hadn&#8217;t spotted this before: [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The other day my attention was drawn to a switch in OpenSSL called <code>-legacy_renegotiation</code>. I pulled up the built-in help for <code>s_client</code> and, sure enough, there it was. So I trawled back through the release notes and it looked to have been there since version 0.9.8m. I couldn&#8217;t believe that I hadn&#8217;t spotted this before: it looked like the perfect way to test for insecure renegotiation without the faff of having to recompile OpenSSL or use an older version. But after a bit of testing this proved to be a red herring&#8230;<span id="more-245"></span></p>
<p>The first thing I wanted to do was prove the negative &#8211; that is, if the <code>-legacy_renegotiation</code> switch did what it seemed to promise, then without it renegotiation should fail. Using OpenSSL 1.0.1i I connected to a server that was missing the secure renegotiation patch and ran the test (more information <a href="http://www.exploresecurity.com/ssltls-checklist-for-pentesters/">here</a>):</p>
<p><code># openssl s_client -connect insecure.example.com:443<br />
...<br />
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA<br />
Server public key is 2048 bit<br />
<span style="color: #ff0000;">Secure Renegotiation IS NOT supported</span><br />
Compression: NONE<br />
...<br />
HEAD / HTTP/1.0<br />
R<br />
RENEGOTIATING<br />
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Primary Certification Authority<br />
verify error:num=20:unable to get local issuer certificate<br />
<span style="color: #ff0000;">verify return:0</span></code></p>
<p>It worked. Wait a minute, that shouldn&#8217;t have happened! So I tried OpenSSL 1.0.1e and then another vulnerable server &#8211; and it always connected. After some digging around I found an <a href="https://www.openssl.org/docs/ssl/SSL_CTX_set_options.html#Patched_OpenSSL_client_and_unpat">article</a> on the OpenSSL site. It stated that:</p>
<blockquote><p>If the option SSL_OP_LEGACY_SERVER_CONNECT or SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION is set then initial connections and renegotiation between patched OpenSSL clients and unpatched servers succeeds. If neither option is set then initial connections to unpatched servers will fail.</p>
<p>The option SSL_OP_LEGACY_SERVER_CONNECT is currently set by default even though it has security implications: otherwise it would be impossible to connect to unpatched servers (i.e. all of them initially) and this is clearly not acceptable. Renegotiation is permitted because this does not add any additional security issues: during an attack clients do not see any renegotiations anyway.</p></blockquote>
<p>There was the small print. So as far as <code>s_client</code> is concerned <code>-legacy_renegotiation</code> makes no difference by default because it will renegotiate with insecure servers anyway. To double-check that <code>-legacy_renegotiation</code> and <code>SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION</code> were in fact related, I took a quick look at the source code for <code>s_client.c</code> and the following lines shone out:</p>
<p><code>else if (strcmp(*argv,"-legacy_renegotiation") == 0)<br />
&nbsp;&nbsp;off|=SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION;<br />
else if (strcmp(*argv,"-legacy_server_connect") == 0)<br />
&nbsp;&nbsp;{ off|=SSL_OP_LEGACY_SERVER_CONNECT; }<br />
else if (strcmp(*argv,"-no_legacy_server_connect") == 0)<br />
&nbsp;&nbsp;{ clr|=SSL_OP_LEGACY_SERVER_CONNECT; }</code></p>
<p>As expected, the first line sees <code>-legacy_renegotiation</code> controlling <code>SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION</code>, which we now know has no effect if <code>SSL_OP_LEGACY_SERVER_CONNECT</code> is set. The code also suggests that <code>SSL_OP_LEGACY_SERVER_CONNECT</code> can be controlled with switches, which aren&#8217;t listed in the built-in help. Using the switch <code>-no_legacy_server_connect</code>, as the OpenSSL doc states, stops you from connecting to the server at all:</p>
<p><code># openssl s_client -no_legacy_server_connect -connect insecure.example.com:443<br />
CONNECTED(00000003)<br />
<span style="color: #ff0000;">140264951338664:error:1412F152:SSL routines:SSL_PARSE_SERVERHELLO_TLSEXT:unsafe legacy renegotiation disabled:t1_lib.c:1732:</span><br />
140264951338664:error:140920E3:SSL routines:SSL3_GET_SERVER_HELLO:parse tlsext:s3_clnt.c:1053:<br />
---<br />
no peer certificate available<br />
---<br />
No client certificate CA names sent<br />
---<br />
SSL handshake has read 63 bytes and written 7 bytes<br />
---<br />
<span style="color: #ff0000;">New, (NONE), Cipher is (NONE)</span></code></p>
<p>If you skimmed over the OpenSSL quote above, you may now be thinking <i>&#8220;why is it so black and white; why can&#8217;t I connect to an unpatched server but have s_client refuse renegotiation?&#8221;</i> As the OpenSSL doc notes &#8211; and if you think back to the attack <a href="http://www.g-sec.lu/practicaltls.pdf">details</a> &#8211; the victim client doesn&#8217;t actually initiate a renegotiation, it&#8217;s all the attacker&#8217;s doing. OpenSSL isn&#8217;t leaving you vulnerable by letting you renegotiate to unpatched servers, it&#8217;s the very act of connecting to them that leaves you exposed. That&#8217;s where the <code>-no_legacy_server_connect</code> switch comes in: it gives you the option of terminating connections to unpatched servers if you don&#8217;t want to take any risks (and you can understand why they&#8217;ve not made that the default). From a pentest viewpoint, <code>&#8209;legacy_renegotiation</code> should be avoided when testing for insecure renegotiation.</p>
<p>I pinged Ivan Ristic (of <a href="https://www.ssllabs.com/">SSL Labs</a> fame) about this for a sanity check, since he was nice enough to get in touch following the release of my <a href="http://www.exploresecurity.com/ssltls-checklist-for-pentesters/">cheatsheet</a>. (Quick trailer: it turns out that Ivan is planning to release some of the manual testing aspects from his book <a href="https://www.feistyduck.com/books/bulletproof-ssl-and-tls/">Bulletproof SSL and TLS</a> as freeware in the near future.) He agreed that <code>-legacy_renegotiation</code> was something of a red herring as far as manual testing using OpenSSL <code>s_client</code> was concerned &#8211; and I think that&#8217;s now going to make it into the next version of his book!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/the-small-print-for-openssl-legacy_renegotiation/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SQL Injection in Search Fields</title>
		<link>http://www.exploresecurity.com/sql-injection-in-search-fields/</link>
		<comments>http://www.exploresecurity.com/sql-injection-in-search-fields/#comments</comments>
		<pubDate>Thu, 25 Sep 2014 22:07:40 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL injection]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=234</guid>
		<description><![CDATA[A quick posting about a fun SQL injection I cracked last week (of course, it&#8217;s only when you&#8217;ve cracked them that they&#8217;re fun!). A colleague had found the classic sign of a problem &#8211; add a single quote and you get an error &#8211; but was having no luck doing anything more. I was getting [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A quick posting about a fun SQL injection I cracked last week (of course, it&#8217;s only when you&#8217;ve cracked them that they&#8217;re fun!). A colleague had found the classic sign of a problem &#8211; add a single quote and you get an error &#8211; but was having no luck doing anything more. I was getting nowhere with my test so I thought I&#8217;d take a look for a change of scene. The input field was in a search box so, for example, <code>search=keyword'</code> returned an error but <code>search=keyword''</code> was fine. Anything more exciting than that, however, such as <code>search=keyword' and '1'='1</code>, didn&#8217;t seem to work as expected: in this case, an error was returned instead of the same set of results that the normal <code>search=keyword</code> produced.<span id="more-234"></span></p>
<p>The first thing I did was to try to terminate the query as simply as possible with no funny business. So in went <code>search=keyword'--</code> but back came an error. It turned out that the injection point was inside a doubly nested query as <code>search=keyword'))--</code> worked, producing the same results as <code>search=keyword</code>. After a bit of faffing about it occurred to me that spaces might be the issue. So I tried <code>search=keyword'and'1'='1</code> (no spaces in there) and it worked! No error was returned &#8211; but it didn&#8217;t produce the same results as <code>search=keyword</code>, it returned no results at all. What produced the same results as <code>search=keyword</code> was <code>search=keyword'or'1'='1</code>. Okay, park that for now. I had found the main problem &#8211; and it was immediately clear what was going on.</p>
<p>With a developer&#8217;s hat on, what would you do if a user ran a search with multiple keywords? The obvious answer would be to split up the search terms with space as a delimiter, run a query on each one and then return all the results together. If that was true then <code>search=keyword' and '1'='1</code> was running a database query against three terms: <code>keyword'</code>, <code>and</code>, <code>'1'='1</code>. The first of these would fail (just like <code>search=keyword'</code> did), as would the last if it got that far. So next I tried <code>search=keyword'/**/and/**/'1'='1</code> using the inline SQL comment characters and got the same result. Again, using AND returned no results but using OR was like a normal query with <code>search=keyword</code>. I had seen this kind of behaviour once before but I couldn&#8217;t remember what the context was, which is why I&#8217;ve written it down this time!</p>
<h3>AND vs OR</h3>
<p>In general, AND within a SQL statement (and thus in SQL injection too) is restrictive, narrowing the result set, whereas OR is inclusive, widening the result set. But, as with all SQL injection, it all depends on the underlying query. So what could be happening here?</p>
<p>Again, with the developer hat on, what else might you do with a user&#8217;s search terms? Well, it would be nice if you searched a little more widely, using them as stubs. In fact some of the SQL errors were giving this away (thanks, guys): <em>Incorrect syntax near &#8216;%&#8217;</em>. The % character is, of course, a wildcard used with LIKE. So when I searched for <code>keyword</code>, somewhere in the resulting query was <code>LIKE '%keyword%'</code>. This perfectly explains the AND vs OR behaviour&#8230;</p>
<p>When I injected <code>search=keyword'and'1'='1</code> the resulting query included <code>LIKE '%keyword'and'1'='1%'</code>. So the AND clause I&#8217;d added was always evaluating to FALSE and hence no results were returned. Whereas injecting <code>search=keyword'or'1'='1</code> produced <code>LIKE '%keyword'or'1'='1%'</code>. Even though one half of the OR clause was evaluating to FALSE, overall it returned TRUE when I got a positive hit on the keyword.</p>
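<p>The whole effect is easy to reproduce in SQLite with a hypothetical one-column table, assuming (as inferred above) that the application builds the pattern by naive string concatenation:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
conn.executemany("INSERT INTO items VALUES (?)",
                 [("contains keyword",), ("something else",)])

def search(term):
    # Vulnerable: the user's term is concatenated straight into the pattern
    query = "SELECT name FROM items WHERE name LIKE '%" + term + "%'"
    return [row[0] for row in conn.execute(query)]

print(search("keyword"))              # ['contains keyword']
# ... LIKE '%keyword' AND '1'='1%'  -> the AND clause is always FALSE
print(search("keyword'and'1'='1"))    # []
# ... LIKE '%keyword' OR '1'='1%'   -> TRUE whenever the keyword matches
print(search("keyword'or'1'='1"))     # ['contains keyword']
```

<p>The injected <code>and</code>/<code>or</code> binds the trailing <code>%'</code> into a separate string comparison, exactly as described above (note the row has to <em>end</em> with the keyword once the closing wildcard is hijacked).</p>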
<p>Since the injection point was inside a doubly nested query and this was a black box test, I had no idea what the real query was, but this certainly made sense. I tried a few more injections to test the theory just for the hell of it:</p>
<ol>
<li>When I terminated the statement, AND and OR did their &#8220;usual&#8221; thing. Which is to say that <code>search=keyword'/**/and/**/1=1))--</code> produced the same result as <code>search=keyword</code> whereas <code>keyword'/**/or/**/1=1))--</code> produced lots of results. This is because I was now commenting out the final % along with the rest of the statement.</li>
<li>When I injected <code>search=keyword'and'1%'='1</code> I got the same results as if there had been no injection. This was the real proof. Now the resulting query would have included <code>LIKE '%keyword'and'1%'='1%'</code> so my AND clause evaluated to TRUE when I got a positive hit on the keyword.</li>
<li>Finally, for what it was worth, <code>search=word'and'1%'='1</code> produced the same result, showing that a % preceded the injection point.</li>
</ol>
<h3>sqlmap</h3>
<p>One of the things that makes a great tool is the ability to customise it for a particular attack scenario. And sqlmap offers that in abundance. In this case a &#8220;tamper&#8221; script, which transforms the payloads in some way, worked a treat. One of the built-in tamper scripts is &#8220;space2comment&#8221; &#8211; bingo! In fact running sqlmap with this script allowed it to find the injection point. Without the script, though, sqlmap would have been stuck because, to quote the <a href='https://github.com/sqlmapproject/sqlmap/wiki/Usage#tamper-injection-data'>wiki page</a>, &#8220;sqlmap itself does no obfuscation of the payload sent, except for strings between single quotes replaced by their CHAR()-alike representation&#8221;.</p>
<p>All this was a good reminder that, when things are getting tough, thinking like a developer can help to turn near-misses into exploitable flaws. Having said that, I&#8217;ve seen code in the past that I could never have guessed, when it was clear the developer wasn&#8217;t thinking at all!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/sql-injection-in-search-fields/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>SSL/TLS Checklist for Pentesters</title>
		<link>http://www.exploresecurity.com/ssltls-checklist-for-pentesters/</link>
		<comments>http://www.exploresecurity.com/ssltls-checklist-for-pentesters/#comments</comments>
		<pubDate>Mon, 07 Jul 2014 21:58:26 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SSL]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=201</guid>
		<description><![CDATA[I gave a presentation at BSides MCR 2014 on the subject of SSL and TLS checks from a pentesting viewpoint. The idea was to focus on the pitfalls of testing, why the tools may let you down and how to check for issues manually (as much as possible), often using OpenSSL. The slides can be [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I gave a presentation at BSides MCR 2014 on the subject of SSL and TLS checks from a pentesting viewpoint. The idea was to focus on the pitfalls of testing, why the tools may let you down and how to check for issues manually (as much as possible), often using OpenSSL.<span id="more-201"></span></p>
<p>The slides can be found <a href="https://www.slideshare.net/exploresecurity/ssltls-checklist-for-pentesters-bsides-mcr-2014">here</a>.</p>
<h3>Cheatsheet</h3>
<p>UPDATE 7th September 2014: I&#8217;ve now written a table that pulls together the manual checks discussed in the presentation &#8211; plus a few more (which will appear in any future presentations). Since tables are a pain in WordPress and I don&#8217;t want to risk a plugin at this time of night, you can find a (lazy) HTML table <a href="http://www.exploresecurity.com/wp-content/uploads/custom/SSL_manual_cheatsheet.html">here</a>.</p>
<p>For updates on content and future presentations <a href="https://twitter.com/exploresecurity">follow me</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/ssltls-checklist-for-pentesters/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
		</item>
		<item>
		<title>Three Cheers for DirBuster</title>
		<link>http://www.exploresecurity.com/three-cheers-for-dirbuster/</link>
		<comments>http://www.exploresecurity.com/three-cheers-for-dirbuster/#comments</comments>
		<pubDate>Fri, 04 Jul 2014 09:29:23 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[DirBuster]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=183</guid>
		<description><![CDATA[Not exactly wizard stuff today, more like back to basics perhaps &#8211; but sometimes they&#8217;re worth revisiting. I&#8217;ve had some good DirBuster finds three tests in a row so I thought I&#8217;d write them up as a case study. It&#8217;s a reminder that there&#8217;s some very low-hanging fruit out there that may not always get [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Not exactly wizard stuff today, more like back to basics perhaps &#8211; but sometimes they&#8217;re worth revisiting. I&#8217;ve had some good DirBuster finds three tests in a row so I thought I&#8217;d write them up as a case study. It&#8217;s a reminder that there&#8217;s some very low-hanging fruit out there that may not always get picked. I&#8217;ve also put together a walk-through for many of DirBuster&#8217;s features and I aim to show that, as with many tools, a few minutes of manual work can produce a faster set of more meaningful results.<span id="more-183"></span></p>
<p>If you know what DirBuster is then you can skip this paragraph. If you don&#8217;t, then DirBuster is designed to brute-force directory and file names on web servers, the point being to find content to which there are no links. It&#8217;s an <a href="https://www.owasp.org/index.php/About_OWASP">OWASP</a> project and you can find it <a href="https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project">here</a>. While you can run it in a pure brute-force mode, you&#8217;ll most likely be using a dictionary to maximise your chances of finding something in the time available. DirBuster comes with a set of dictionaries that were generated by crawling the internet for real directory and file names.</p>
<h2>Cheer number 1</h2>
<p>On a test of a web portal DirBuster found pages at <code>/users/</code> and <code>/organisations/</code>. The portal was a closed system used by the owner to exchange financial information with many other organisations in (what was supposed to be) an isolated way. Sorry to be vague but you understand why! Navigating to <code>/users/</code> opened up a whole user management area, with full names, email addresses, roles, last login etc. At <code>/organisations/</code> there was an organisation management area, from where you could access the same user details from other organisations. Oops. While unauthorised data access was possible, attempts to execute administrative functions failed &#8211; but the fact that these functions were exposed was useful in itself because there was no CSRF protection. Moreover it was simple to target an administrator (of any organisation) because you could look them up from the user listings. The only saving grace was that you had to be authenticated &#8211; a point I&#8217;ll return to later.</p>
<h2>Cheer number 2</h2>
<p>On a public website for a high-street company, DirBuster found the page <code>/staff/</code>. This revealed a staff discount page where you could go through and order stuff at significant discounts, meaning lost revenue to the client. Of course, this sort of thing has a habit of getting out on to discount sites and the like. The page was available unauthenticated (although since anyone could register for an account, that&#8217;s by the bye).</p>
<p>Cheer number 2½: DirBuster also found a page that had a special offer for readers of a particular publication. Not as important this one since it was obviously there for the taking but it clearly wasn&#8217;t designed to be available to all.</p>
<h2>Cheer number 3</h2>
<p>On a test of a web portal, while authenticated, DirBuster found a positive response from <code>/admin</code>. This turned out to be an authorisation flaw and a short time later, after some fuzzing of user IDs, I had some 2,300 usernames and email addresses as well as plaintext passwords for about a third of those accounts. This portal was used by many different organisations &#8211; and a user from one of them could log in to another user&#8217;s account from another organisation. Oops.</p>
<p>In fact I had a fourth cheer yesterday, where I found a page that allowed me to self-register unauthenticated on (what was supposed to be) a <em>closed</em> site! But &#8220;four cheers for DirBuster&#8221; sounds a bit naff.</p>
<h2>Walk-through</h2>
<p>The rest (and majority) of this article is a walk-through of the main DirBuster configuration options. <em><span style="text-decoration: underline;">Note</span> that I&#8217;m describing a general case in what follows and obviously there may be times when you need to do things differently. That&#8217;s an important part of pentesting: adapting your test to suit the target</em>. Having said that, let&#8217;s take a look at the starting screen (of version 1.0 RC1, on which this article is based):</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-main-screen.png"><img class="aligncenter size-full wp-image-188" alt="DirBuster main screen" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-main-screen.png" width="769" height="589" /></a></p>
<h3>Target URL</h3>
<p>For the &#8220;Target URL&#8221; consider HTTP vs HTTPS. HTTP is obviously faster but a website will often redirect some or all requests to the HTTPS equivalent whether the page is actually there or not, which will spoil your results. You can enable &#8220;Follow Redirects&#8221; from the Options menu but that&#8217;s a considerable overhead if it&#8217;s happening with every request. If the redirect happens only when the page exists then an HTTP-based scan should be speedier. My personal preference is that if the site is happy delivering its pages over HTTPS, which is normal, I&#8217;ll go for HTTPS. Despite the overhead slowing down the request rate, it does tend to rule out excessive redirects since it would be unusual for an HTTPS request to be redirected to an HTTP equivalent. Redirects may also confuse the &#8220;fail case&#8221;, which DirBuster uses to decide whether or not a guess is correct, which could lead to false negatives as well as false positives. More on this later.</p>
<p>A similar situation may arise with the <em>domain</em> in that <span style="text-decoration: underline;">https://site.com/page</span> may always redirect to <span style="text-decoration: underline;">https://<strong>www</strong>.site.com/page</span> so use <span style="text-decoration: underline;">https://www.site.com:443</span> as your base URL.</p>
<h3>Work Method</h3>
<p>The default &#8220;Auto Switch&#8221; mode is probably best for the majority of cases. DirBuster will first try to see if it can get sensible results from HEAD requests, the reason being that the responses will be smaller. Even though it makes a GET request on 200 responses, this will save time when the 404 message (or equivalent) is relatively large. On the site I was looking at when writing this bit, the full HTML 404 response was about 19kB bigger than the disembodied 404 set of headers you&#8217;d get with HEAD. A crude bit of testing showed this took on average twice as long to arrive and be processed, adding 200ms to the response time. Given that you&#8217;re getting 404s most of the time this could mean a saving, even with the small dictionary, of over 1.4 gigabytes or 4 hours of waiting!</p>
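The back-of-the-envelope arithmetic behind that claim, using the lowercase small list (81,629 entries) and the rough ~19kB and ~200ms differences I measured (your numbers will vary per site):

```python
# Rough saving from HEAD vs GET when nearly every guess is a 404.
# Figures are from the measurements above: ~19kB smaller responses,
# ~200ms quicker per request, lowercase small list of 81,629 entries.
ENTRIES = 81629          # directory-list-lowercase-2.3-small.txt
BYTES_SAVED = 19 * 1024  # ~19kB smaller 404 response via HEAD
SECS_SAVED = 0.2         # ~200ms quicker per request

total_gb = ENTRIES * BYTES_SAVED / 2**30
total_hours = ENTRIES * SECS_SAVED / 3600
print(f"~{total_gb:.1f} GB and ~{total_hours:.1f} hours saved")
# ~1.5 GB and ~4.5 hours saved
```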
<h3>Number Of Threads</h3>
<p>Running DirBuster with a high number of threads can slow down the target server, which may not go down too well if you&#8217;re testing a live site. You&#8217;ll probably find the default (10) to be a little over-enthusiastic, especially as you&#8217;ll be running other tests simultaneously. If you examine the number of threads in the DirBuster process (javaw.exe) while it&#8217;s running, you&#8217;ll see it jump up by more than the number you set in this field. I haven&#8217;t looked at the source code but I&#8217;m assuming that DirBuster is indeed honouring this field. I imagine that the &#8220;number of threads&#8221; refers to &#8220;Workers&#8221; that handle the actual requests and responses over the network while the other threads, for example, manage different queues depending on what you tick at the bottom of the screen.</p>
<p><span style="line-height: 1.714285714; font-size: 1rem;">As an aside, I&#8217;ve noticed that when you run a number of scans without re-starting DirBuster, the number of threads at rest tends to increase. I&#8217;m not sure if this is an issue that could degrade performance but just bear it in mind. </span><span style="line-height: 1.714285714; font-size: 1rem;">(I did try to contact the project lead, James Fisher, to ask about threading but I got no reply. And it&#8217;s not that big a deal to warrant rummaging through the source code!)</span></p>
<p>I have DirBuster running on another monitor so I can keep an eye on the requests per second and any sudden scrolling, which usually means errors! Bear in mind that, say, 20 requests per second over HTTP<strong>S</strong> will be working the server harder than 20 requests per second over HTTP. A nice feature is that once the scan is running, you can dynamically change the number of threads.</p>
<h3>Dictionary</h3>
<p>Assuming you opt for &#8220;List based brute force&#8221; you&#8217;ll now need to choose a dictionary &#8211; and for this you need to know whether or not your directories are case sensitive. Although you can often guess this from the server in use, e.g. IIS isn&#8217;t case sensitive, it&#8217;s always best to check. So test a page that you know to exist, i.e. does <code>/page</code> return the same as <code>/Page</code>? Even when the server is case-sensitive, a look over the site map in your web proxy may show that all the pages you&#8217;ve requested are in fact lower case. But don&#8217;t go thinking that using the case-sensitive lists will take all that much longer. Clicking &#8220;List Info&#8221; brings up some statistics on the dictionaries, a portion of which is shown below:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-dict-list-info.png"><img class="aligncenter size-full wp-image-187" alt="DirBuster dict list info" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-dict-list-info.png" width="531" height="354" /></a></p>
<p>You can see that the case-sensitive lists are nowhere near even twice the size of the lowercase versions, which you might have imagined as a minimum. That&#8217;s because the lists are based on <em>real</em> names found by crawling the internet. The file &#8220;directory-list-2.3-small.txt&#8221; has 87,650 entries while the lowercase version has 81,629 entries so it&#8217;s only 6,021 entries longer (about 7% bigger). For the medium-sized lists the numbers are 220,546 vs 207,629 so the case-sensitive version is 12,917 entries longer (about 6% bigger). So using the case-sensitive lists may not involve as big a hit as you might expect. (You can also see from the List Info what the actual difference is between big, medium and little: the entries were found on at least 1, 2 and 3 hosts respectively.)</p>
<p>Before you even start your attack you could consider putting together a small dictionary of a few directories and files you&#8217;ve found, together with some gibberish entries, to use on a test run. If you don&#8217;t see the results you expect, review your configuration bearing in mind some of the points from this article. A short test run might save you hours of wasted effort.</p>
<h3>Starting options</h3>
<p>The &#8220;Standard start point&#8221; will assume directories end with / and files end with whatever you configure underneath. The &#8220;URL Fuzz&#8221; option allows you to insert the dictionary entries into the URL in a non-standard way. A good illustration is to discuss why there&#8217;s an Apache user enumeration list included in the set of dictionaries (apache-user-enum-2.0.txt). This is because if the <code>userdir</code> module is enabled (more on this <a href="http://httpd.apache.org/docs/2.4/mod/mod_userdir.html">here</a>) you can go hunting for usernames based on the fact that the user &#8220;bob&#8221; will have a folder mapped to <span style="text-decoration: underline;">http://site.com/~bob/</span>. So in this example the URL to fuzz would be <code>/~{dir}/</code> where <code>{dir}</code> is a placeholder for the words in the chosen dictionary.</p>
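The substitution itself is trivial; a sketch of what DirBuster does with a URL fuzz pattern and a wordlist:

```python
# Sketch of DirBuster's "URL Fuzz" mode: each word in the chosen
# dictionary replaces the {dir} placeholder in the pattern.
def fuzz_urls(pattern, words):
    return [pattern.replace("{dir}", w) for w in words]

for url in fuzz_urls("/~{dir}/", ["root", "bob", "test"]):
    print(url)
# /~root/
# /~bob/
# /~test/
```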
<p>The remaining options are self-explanatory but there are still a few things to consider. Obviously the more options you tick the longer the scan will take. So look first at the style of URL the website uses. For example, you might find that requests to <code>/page</code> produce redirects to <code>/page/</code> or that both of these return the same response. Either way, don&#8217;t run &#8220;Brute Force Dirs&#8221; together with &#8220;Brute Force Files&#8221; + &#8220;Use Blank Extension&#8221; because you&#8217;re doing twice the amount of work to get the same result. Conversely if you spot that there doesn&#8217;t seem to be much content in directories, i.e. none of the pages end with a / character, then don&#8217;t run &#8220;Brute Force Dirs&#8221;; rely on &#8220;Brute Force Files&#8221; instead.</p>
<p>If you enable the &#8220;Be Recursive&#8221; option, remember that DirBuster&#8217;s multi-threaded approach means that all those queues of work will be competing for a limited set of Workers. It&#8217;s easy to get into a situation where the Workers are looking in sub-folders of no real interest, slowing down the search for better candidates. In a time-limited test you could try looking at just the root content first by disabling this option. Where you go from there can be both manual and automated &#8211; and there&#8217;s always the option to create a custom dictionary for further scans based on the results of the first scan.</p>
<h3>Options Menu</h3>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-options.png"><img class="aligncenter size-full wp-image-189" alt="DirBuster options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-options.png" width="309" height="246" /></a></p>
<p>I&#8217;ve already mentioned &#8220;Follow Redirects&#8221; &#8211; in general, tick this only if you have to because it has the capacity to slow down the scan. Without this ticked, you&#8217;ll see 301 and 302 responses in the final results and you can just manually target the ones of interest later.</p>
<p>Choosing &#8220;Debug Mode&#8221; will only make a difference if you&#8217;re launching DirBuster from a command window that remains open in the background:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-debug.png"><img class="aligncenter size-full wp-image-186" alt="DirBuster debug" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-debug.png" width="736" height="264" /></a></p>
<p>The references to Worker[n] are to the threads doing the networking so for n threads that you set you&#8217;ll see Workers from [0] to [n-1].</p>
<p>The option &#8220;Parse HTML&#8221;, which is on by default, instructs DirBuster to read the HTML of files that it discovers, looking for files and folders it then doesn&#8217;t have to guess. These can be found, for example, in the <code>href</code> attributes of <code>&lt;a&gt;</code> tags. You might decide this is overkill since DirBuster will quickly begin to download a lot of stuff you&#8217;ll see elsewhere during testing e.g. in Burp&#8217;s Proxy and Site Map. Overall this may add an overhead for results you simply don&#8217;t need &#8211; at least not from this tool on the first scan. There&#8217;s another possible benefit to disabling this when running authenticated scans, which we&#8217;ll come to momentarily.</p>
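The kind of link extraction &#8220;Parse HTML&#8221; performs can be sketched with the standard library: pull <code>href</code> and <code>src</code> attributes out of each discovered page so those paths never need to be guessed. (This is my own simplified sketch, not DirBuster's actual parser.)

```python
# Sketch of the link extraction "Parse HTML" performs: collect
# href/src attributes so found paths need not be brute-forced.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)

collector = LinkCollector()
collector.feed('<a href="/admin/">Admin</a><img src="/img/logo.png">')
print(collector.links)
# ['/admin/', '/img/logo.png']
```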
<h3>Advanced Options</h3>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-advanced-options.png"><img class="aligncenter size-full wp-image-185" alt="DirBuster advanced options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-advanced-options.png" width="620" height="626" /></a></p>
<p>I&#8217;ll skip the first two tabs, which are self-explanatory, and start with the tab that&#8217;s active in the screenshot above&#8230;</p>
<h4>Http Options</h4>
<p>First, DirBuster allows you to add custom headers to your requests so you could, for example, add an authenticated session management cookie. <em>Whoa! Did you say run an automated scanning tool authenticated?</em> Yes I did. After getting a feel for the site you may be comfortable doing this &#8211; it can pull out some interesting finds (as shown by the case studies at the start of this article). Anything you find authenticated that you didn&#8217;t find unauthenticated is really worth a look. Although the risk of side effects is much lower than running a full-on active web application scanner authenticated across a site, of course I have to say that it&#8217;s not without risk! I disable &#8220;Parse HTML&#8221; and &#8220;Be Recursive&#8221; as a safety measure.</p>
<p>Underneath is the &#8220;Http User Agent&#8221; and you can see the default looks nothing like a real User-Agent string. If you&#8217;re getting odd results from DirBuster that you&#8217;re not seeing in Burp, you could try changing that option, e.g. to &#8220;Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0&#8221;.</p>
<p>Lastly, the option to use a proxy is useful for troubleshooting &#8211; as well as learning! You could also take advantage of your upstream proxy&#8217;s features to handle more complex cases (adding an overhead, of course).</p>
<h4><span style="line-height: 1.714285714; font-size: 1rem;">Scan Options</span></h4>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Scan-Options.png"><img class="aligncenter size-full wp-image-190" alt="DirBuster Advanced Options - Scan Options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Scan-Options.png" width="549" height="163" /></a></p>
<p>Here lies the all-important &#8220;Fail Case String&#8221;, which by default is &#8220;thereIsNoWayThat-You-CanBeThere&#8221;. The response from this page is used to determine whether or not a guessed page/directory is there so it&#8217;s critical for the success of the scan. DirBuster will request this often in fact &#8211; for every file type in every directory that it finds. So starting from / with all the scan options enabled (directories, files, recursive and blank), having found <code>/admin/users/</code>, for example, DirBuster will request:</p>
<p><code>/admin/users/thereIsNoWayThat-You-CanBeThere/<br />
/admin/users/thereIsNoWayThat-You-CanBeThere<br />
/admin/users/thereIsNoWayThat-You-CanBeThere.php</code></p>
<p>If you&#8217;re getting strange results from DirBuster, consider changing this string. It may even be worth getting into the habit of manually testing the fail case string as a directory and page before you start a lengthy scan.</p>
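At heart the decision is a comparison against the fail-case response. Here's a sketch of that logic &#8211; my own simplification (status code plus body length) of whatever matching DirBuster actually does internally:

```python
# Sketch of the exists/not-found decision: a guess "exists" if its
# response differs from the fail-case response for that directory
# and extension.  Comparing status code plus body length is a
# simplification of DirBuster's real matching.
def looks_like_hit(guess, fail_case):
    guess_status, guess_body = guess
    fail_status, fail_body = fail_case
    if guess_status != fail_status:
        return True
    return len(guess_body) != len(fail_body)

fail = (200, "Oops, nothing here" * 50)   # a soft-404 fail case
print(looks_like_hit((200, "<html>admin panel</html>"), fail))  # True
print(looks_like_hit((200, "Oops, nothing here" * 50), fail))   # False
```

This also shows why a soft-404 site (one that returns 200 for missing pages) stands or falls on the fail-case string: if the fail-case response isn't representative, every comparison after it is suspect.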
<h4>DirBuster Options</h4>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Options.png"><img class="aligncenter size-full wp-image-192" alt="DirBuster Advanced Options - Options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Options.png" width="593" height="219" /></a>The last tab serves as a reminder that most of the Options and Advanced Options discussed above get reset when you re-start DirBuster. Apart from the options listed in this tab &#8211; the default number of threads, dictionary and file extensions, which are pre-populated on a fresh start &#8211; only the proxy settings persist. Although you&#8217;ll lose many of your settings on restart, being forced to reconsider them maybe isn&#8217;t such a bad thing.</p>
<h2>And finally</h2>
<p>It&#8217;s worth starting DirBuster relatively early on in the test because it can take a while to complete, and obviously you want some time left over to explore anything interesting it finds. Keep an eye on the results while it&#8217;s running to make sure you&#8217;re getting something sensible &#8211; and that you&#8217;re not causing a slew of 500 errors. Version 1.0 RC1 will pause automatically after 20 <em>consecutive</em> errors but that&#8217;s <em>client-side</em> errors, not 500 responses. Equally if you&#8217;re getting mostly redirects, try to alter your parameters or, as a last resort, enable the &#8220;Follow Redirects&#8221; option.</p>
<p>Despite &#8211; or because of &#8211; your efforts to optimise your scan, you can often get a large number of hits. On the reporting side, the CSV option is useful because you get the Location, Response Code and Content Length on one line so you can quickly begin to process this and weed out the cruft.</p>
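Weeding out the cruft from that CSV only takes a few lines. A sketch, assuming column headers matching the fields named above (&#8220;Location&#8221;, &#8220;Response Code&#8221;, &#8220;Content Length&#8221; &#8211; check your own export's header row, as the exact names may differ):

```python
# Sketch of post-processing a DirBuster CSV export: drop 404s and
# redirects, then sort what's left by response code.  The column
# names used here are assumptions - verify them against your export.
import csv, io

report = io.StringIO(
    "Location,Response Code,Content Length\n"
    "/index.html,200,5120\n"
    "/old/,301,0\n"
    "/admin/,200,2048\n"
    "/missing,404,19456\n"
)

rows = [r for r in csv.DictReader(report)
        if r["Response Code"] not in ("301", "302", "404")]
for row in sorted(rows, key=lambda r: r["Response Code"]):
    print(row["Location"], row["Response Code"])
# /index.html 200
# /admin/ 200
```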
<p>Finally, note that you can invoke a command line interface by running DirBuster in headless mode. Check out the options with <code>java -jar &lt;DirBuster_jar_file&gt; -h</code>. The parameters don&#8217;t comprehensively match the GUI options, though, so if you need a command-line scanner of this type and DirBuster isn&#8217;t up to the job, try <code>dirb</code> (on Kali).</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/three-cheers-for-dirbuster/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
	</channel>
</rss>
