<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Explore Security &#187; penetration testing</title>
	<atom:link href="http://www.exploresecurity.com/tag/penetration-testing-2/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.exploresecurity.com</link>
	<description>IT security tools, techniques and commentary</description>
	<lastBuildDate>Wed, 15 Jun 2022 09:21:02 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
		<item>
		<title>The Case of the Customer Data and the Third-Party Analytics Site</title>
		<link>http://www.exploresecurity.com/the-case-of-the-customer-data-and-the-third-party-analytics-site/</link>
		<comments>http://www.exploresecurity.com/the-case-of-the-customer-data-and-the-third-party-analytics-site/#comments</comments>
		<pubDate>Fri, 17 May 2019 15:32:32 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Data Privacy]]></category>
		<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[GDPR]]></category>
		<category><![CDATA[penetration testing]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=460</guid>
		<description><![CDATA[I recently had an article published on the NCC Group blog about a pentest during which it became clear that customer personal data was being sent to a third party. It&#8217;s common for websites to use metrics/analytics services to collect usage data on how users are interacting with the sites. I thought it would be [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I recently had an article <a href="https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2019/may/the-case-of-the-customer-data-and-the-third-party-analytics-site/">published</a> on the NCC Group blog about a pentest during which it became clear that customer personal data was being sent to a third party. It&#8217;s common for websites to use metrics/analytics services to collect usage data on how users are interacting with the sites. I thought it would be useful to highlight the danger of sensitive data being accidentally sent to third parties without due care and attention paid to GDPR. Have a read!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/the-case-of-the-customer-data-and-the-third-party-analytics-site/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Missing XSS flaws with innerHTML and jQuery</title>
		<link>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/</link>
		<comments>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/#comments</comments>
		<pubDate>Wed, 28 Mar 2018 15:11:42 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[jQuery]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=440</guid>
		<description><![CDATA[I thought a couple of gotchas when testing for XSS could be a useful post. Not new, especially, but I haven&#8217;t posted for a while. It shows how you might think you&#8217;ve covered an area of a web application and declared it free from XSS but in fact there might be a bug left behind. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I thought a couple of gotchas when testing for XSS would make a useful post. Nothing especially new, but I haven&#8217;t posted for a while. It shows how you might think you&#8217;ve covered an area of a web application and declared it free from XSS when in fact a bug has been left behind.<span id="more-440"></span></p>
<p>These days a typical web app pulls in data from background XHR requests, the responses come back in a data format like JSON, and the data is parsed client-side and inserted into the DOM of the page. This means, of course, that the encoding of the data in the response is not sufficient by itself to know whether there’s a potential XSS risk – it depends how the data is finally encoded when inserted into the page. But consider the following:</p>
<ol>
<li>You enter <code>&lt;script&gt;alert(1)&lt;/script&gt;</code> hoping for a quick and dirty win – nothing</li>
<li>You suspect input validation so you enter something that you hope will pass validation but test encoding, e.g. <code>xss &lt; &gt; test</code></li>
<li>You inspect the page using the browser dev tools and see it has been inserted as <code>xss &amp;lt; &amp;gt; test</code></li>
<li>So the encoding looks good, and you move on&#8230;</li>
</ol>
<p>But wait! Even dangerous DOM manipulation can perform encoding in a text context. For example, <code>innerHTML</code> will observe the <a href="https://w3c.github.io/DOM-Parsing/#xml-serializing-a-text-node">rules</a> and, in a text context, will encode <code>&lt;</code> as <code>&amp;lt;</code> and <code>&gt;</code> as <code>&amp;gt;</code> (and <code>&amp;</code> as <code>&amp;amp;</code> in fact), otherwise those characters are inserted/parsed literally. So the encoding seen in step 3 could be explained by this, meaning that the potential for XSS still exists. It’s precisely because jQuery uses <code>innerHTML</code> as part of functions like <a href="http://api.jquery.com/append/"><em>append()</em></a> that the documentation is full of caveats about not using data from &#8220;untrusted sources&#8221;.</p>
<p>Let’s assume the developers haven’t thought about XSS at all. So why did step 1 fail? This is because the <a href="https://www.w3.org/TR/2008/WD-html5-20080610/dom.html#innerhtml0">spec</a> says &#8220;script elements inserted using <code>innerHTML</code> do not execute when they are inserted&#8221;. If the first test case had been <code>&lt;img src=x onerror=alert(1)&gt;</code> the XSS would have been revealed in step 1.</p>
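<p>The text-context serialization rule can be sketched in a few lines of Python (a standalone illustration, not browser or jQuery code): only <code>&amp;</code>, <code>&lt;</code> and <code>&gt;</code> are escaped, which is exactly the encoding observed in step 3 even if the application performs no output encoding of its own.</p>

```python
def serialize_text_node(data: str) -> str:
    """Escape a string the way the DOM serializes a text node:
    '&' first, then '<' and '>'. Quotes are left alone, so this
    output is only 'safe' in a text context."""
    return (data.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;"))

print(serialize_text_node("xss < > test"))  # xss &lt; &gt; test
```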
<p>Hope that’s of some use!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Testing for POODLE_TLS Manually</title>
		<link>http://www.exploresecurity.com/testing-for-poodle_tls-manually/</link>
		<comments>http://www.exploresecurity.com/testing-for-poodle_tls-manually/#comments</comments>
		<pubDate>Fri, 13 Mar 2015 12:25:24 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[Tools]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[POODLE]]></category>
		<category><![CDATA[TLS]]></category>
		<category><![CDATA[tlslite]]></category>
		<category><![CDATA[tool]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=362</guid>
		<description><![CDATA[Testing for the original POODLE vulnerability was easy because it was an inherent problem with SSLv3, so if you find SSLv3 enabled then you&#8217;ve found POODLE (although other factors such as cipher suite preference have a role to play &#8211; see my previous post). Like Heartbleed, though, testing for POODLE over TLS is conceptually easy [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Testing for the original POODLE vulnerability was easy because it was an inherent problem with SSLv3, so if you find SSLv3 enabled then you&#8217;ve found POODLE (although other factors such as cipher suite preference have a role to play &#8211; see my previous <a title="Thoughts on Testing for POODLE" href="http://www.exploresecurity.com/thoughts-on-testing-for-poodle/">post</a>). Like Heartbleed, though, testing for POODLE <em>over TLS</em> is conceptually easy but it falls within a class of flaws that requires bespoke tools, because a stock build of <code>openssl</code>, for example, won&#8217;t do what you want it to do. This article discusses how the Python <em>tlslite</em> library can be used to test for POODLE_TLS &#8211; and so much more.<span id="more-362"></span></p>
<h3>What is <em>tlslite</em>?</h3>
<p>From the <a href="https://github.com/trevp/tlslite">source</a>: &#8220;TLS Lite is an open source python library that implements SSL and TLS&#8221;. I&#8217;d seen references to it in the original BEAST <a href="http://vnhacker.blogspot.co.uk/2011/09/beast.html">post</a> written by Thai Duong and an article on <a href="https://vivaldi.net/blogs/entry/what-is-tls-testing-tlsprober-net">TLS Prober</a> by Yngve Pettersen. This gave me some confidence that <em>tlslite</em> would be a good starting point. Obviously it&#8217;s not going to be fast but that doesn&#8217;t matter. With an SSL/TLS implementation in a high-level language, it would be much easier to make the changes required for the sorts of tests I wanted to run, and I thought POODLE_TLS would be a good one to try first.</p>
<p>TLS Prober is in fact where I wanted to be heading. It works on a modified version of <em>tlslite</em> to test for various SSL/TLS bugs. However, the public source code hasn&#8217;t been updated since Yngve left Opera in 2013 and thus wouldn&#8217;t cover POODLE_TLS. While I could have added that capability, I decided to ignore TLS Prober (for now) and start afresh with the latest <em>tlslite</em> &#8211; mainly as it would be a good learning experience.</p>
<h3>How to test for POODLE_TLS</h3>
<p>I&#8217;m not going to re-hash theory that&#8217;s already <a href="https://www.imperialviolet.org/2014/10/14/poodle.html">covered</a> <a href="https://www.imperialviolet.org/2014/12/08/poodleagain.html">elsewhere</a>. Suffice to say that implementations of TLS that are faithful to the RFC shouldn&#8217;t be vulnerable to POODLE because the spec states what the contents of the padding bytes should be. Therefore, the way to test for POODLE_TLS is to ignore that rule and see if the connection is terminated by the server. This isn&#8217;t the same as performing a full attack but like all testing you have to compromise between accuracy and aggressiveness. I think this test is a good indication. After some rummaging through the source code and a bit of debugging, I found what I wanted.</p>
<h3>Changes to <em>tlslite</em></h3>
<p>It seemed a bit crazy to fork the <a href="https://github.com/trevp/tlslite">original project</a> as my changes were tiny. I also thought that working through the changes here may be helpful to anyone else who wants to do the same sort of thing.</p>
<p>So to begin with I needed to signal to <em>tlslite</em> that I wanted to send TLS messages with invalid padding. You get things going with <em>tlslite</em> through the <code>TLSConnection</code> class so I changed how that was instantiated. <code>TLSConnection</code> inherits from <code>TLSRecordLayer</code>, which is where the padding code lives, so that needed changing too. Within the &#8220;tlslite&#8221; folder I made the following changes (obviously line numbers will be version dependent so I&#8217;ve added the original code too; my version was 0.4.8):</p>
<p><strong>tlsconnection.py</strong><br />
Line 52 was:<br />
<code>def __init__(self, sock):</code><br />
Now:<br />
<code>def __init__(self, sock, check_poodle_tls=False):</code><br />
# now I can signal whether or not I want to perform the test<br />
# if you already have <em>tlslite</em>, you can change it safely because <code>check_poodle_tls</code> defaults to <code>False</code>, so it&#8217;s backward-compatible with any existing code that makes use of <em>tlslite</em></p>
<p>Line 61 was:<br />
<code>TLSRecordLayer.__init__(self, sock)</code><br />
Now:<br />
<code>TLSRecordLayer.__init__(self, sock, check_poodle_tls)</code><br />
# I need to pass that signal on to the parent</p>
<p><strong>tlsrecordlayer.py</strong><br />
Line 102 was:<br />
<code>def __init__(self, sock):</code><br />
Now:<br />
<code>def __init__(self, sock, check_poodle_tls):</code></p>
<p>After line 103 <code>self.sock = sock</code> added new line:<br />
<code>self.check_poodle_tls = check_poodle_tls</code></p>
<p>After line 600 <code>paddingBytes = bytearray([paddingLength] * (paddingLength+1))</code> added new lines:<br />
<code>if self.check_poodle_tls == True:<br />
<span style="padding-left: 30px;">paddingBytes = bytearray(x ^ 42 for x in paddingBytes[0:-1])</span><br />
<span style="padding-left: 30px;">paddingBytes.append(paddingLength)</span></code><br />
# change all but the last of the padding bytes to be invalid (just XOR with 42, the answer to everything)<br />
# make the last byte of padding valid = the number of padding bytes</p>
<p>And that&#8217;s it! Remember, as it&#8217;s Python, indentation matters and the new code needs to be properly aligned.</p>
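<p>Pulled out of <em>tlslite</em>, the padding tweak amounts to the following (a self-contained sketch of the change above, not the library&#8217;s actual code path):</p>

```python
def poodle_tls_padding(padding_length: int) -> bytearray:
    """Build TLS CBC padding where every filler byte is invalid.
    Correct padding is (padding_length + 1) copies of padding_length;
    here all but the final length byte are XORed with 42, so an
    RFC-faithful implementation should reject the record."""
    padding = bytearray([padding_length] * (padding_length + 1))
    corrupted = bytearray(b ^ 42 for b in padding[:-1])
    corrupted.append(padding_length)  # the length byte itself stays valid
    return corrupted

print(list(poodle_tls_padding(3)))  # [41, 41, 41, 3]
```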
<h3>POODLE_TLS test script</h3>
<p>I then created the test script (available <a href="https://github.com/exploresecurity/test_poodle_tls">here</a>), which attempts a normal TLS connection first before testing for POODLE using the invalid padding trick. Place the script within the modified <em>tlslite</em> and run it as <code>test_poodle_tls.py &lt;hostname&gt;</code>. Remember, it only tests for POODLE <em>over TLS</em>, <u>not</u> SSLv3.</p>
<p>I&#8217;ve noticed that sometimes the normal connection fails and one of the reasons for this is that the server does not support any of the small number of cipher suites offered by <em>tlslite</em>. In this case no conclusion can be drawn &#8211; and the script catches that.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/testing-for-poodle_tls-manually/feed/</wfw:commentRss>
		<slash:comments>9</slash:comments>
		</item>
		<item>
		<title>When HTML Encoding Helped XSS</title>
		<link>http://www.exploresecurity.com/when-html-encoding-helped-xss/</link>
		<comments>http://www.exploresecurity.com/when-html-encoding-helped-xss/#comments</comments>
		<pubDate>Fri, 30 Jan 2015 10:19:17 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[session fixation]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=342</guid>
		<description><![CDATA[Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I previously explained and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I <a href="http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/">previously explained</a> and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a working XSS payload!<span id="more-342"></span></p>
<h3>The problem</h3>
<p>The application used a bespoke session management cookie. I&#8217;ll call it MYSESSIONID. On login, it wasn&#8217;t renewed. I couldn&#8217;t push a session cookie onto the victim in a classic session fixation attack. However, I had XSS in an unauthenticated page &#8211; but not the login page. The filtering in place used a combination of removal and encoding. Characters that were stripped out included:</p>
<p><code>+ ; ( ) ? &lt; &gt;</code></p>
<p>Characters that were allowed included:</p>
<p><code>" ' = [ ] / , .</code></p>
<p>So even though MYSESSIONID wasn&#8217;t protected with the <em>HttpOnly</em> flag, I just couldn&#8217;t construct a payload to steal it. Instead I looked to set one of my own. Here&#8217;s a breakdown of the attack:</p>
<h3>1. Get a valid cookie</h3>
<p>The application did not accept arbitrary session management cookies so the attacker sends a request to get a valid one. In this case, simply having no MYSESSIONID wasn&#8217;t enough, the cookie had to be present but an invalid value did the trick:</p>
<p><code>Cookie: MYSESSIONID=aaaaaaaaaaaaaaaaaaa:xx01</code></p>
<p>returned</p>
<p><code>Set-Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01; Path=/; Domain=.example.com</code></p>
<h3>2. The XSS</h3>
<p>The malicious link looked something like this (the highlighted bits are explained below):</p>
<p><code>https://www.example.com/app/folder/page?vuln=foo"<span style="color: red;">%0a</span>document.cookie%3d"MYSESSIONID%3dNDnQrZ6JsMHyJTBCw8n:xx01<span style="color: blue;">:%0d</span><span style="color: red;">path%3d/app/</span></code></p>
<p>When clicked, the XSS flaw wrote the following to the return page inside a JavaScript code block:</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/";</code></p>
<p>The <span style="color: red;">%0a</span> at the front of the XSS payload was used to start a new line and this was sufficient to act as a statement terminator after <code>var a = "foo"</code> (semi-colons were being stripped). But in order to inject a <code>path</code> attribute (discussed below) I did need a semi-colon in the cookie string. By running every character through a Burp Intruder attack, I saw which characters were allowed, which were stripped and which were returned encoded. By inserting <span style="color: blue;">:%0d</span> into the XSS payload :&amp;#13; was returned &#8211; yes, %0d was encoded but %0a (used above) came back fine! Being inside a string inside a JavaScript block &amp;#13; wasn&#8217;t seen as an HTML entity by the browser and thus wasn&#8217;t interpreted. This provided the semi-colon needed to create a <code>path</code> attribute.</p>
<p>The colon at the front was used because it looked like the session cookie was delimited in that way. That &#8220;xx01&#8243; might refer, for example, to an internal server for load-balancing. Anyway, whatever it did, the application tolerated the unusual suffix to the session cookie. So that explains the :%0d appended to the cookie value in the XSS payload. Now for the <code><span style="color: red;">path%3d/app/</span></code>&#8230;</p>
<h3>3. The victim logs in</h3>
<p>So, at this point, the attacker has set the MYSESSIONID cookie on the victim to be <code>NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13</code> via a reflected XSS attack. Now the victim goes to the login page at https://www.example.com/app/login or is bounced there by navigating to a part of the site that enforces authentication. At login two MYSESSIONID cookies are passed up. This is because one had been set earlier in a <code>Set-Cookie</code> response header the first time the victim hit the site, even if that was by visiting the XSS&#8217;ed page. The genuine MYSESSIONID has a <code>path</code> of / and a <code>domain</code> of .example.com. If I had set a cookie by XSS with no attributes my cookie would have had a <code>path</code> of /app/folder/ (to match the path of the page which set the cookie) and a <code>domain</code> of www.example.com (to match the domain of said page). This would mean my cookie would never be sent up to /app/login for authentication, hence the need to set a <code>path</code> as part of the XSS.</p>
<p>Furthermore, when two MYSESSIONID values were sent up, the application took the first value so I had to make sure my cookie was first. By setting a <code>path</code> of /app/, it trumped the real MYSESSIONID for having a better path match to /app/login. Thus it was listed first in the POST request with the credentials and became authenticated:</p>
<p><code>Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13; MYSESSIONID=4GRc4jiKNeQIfsqh2:xx01</code></p>
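<p>The ordering behaviour being exploited can be sketched as follows (cookie values are taken from the example above; the sort rule follows RFC 6265, which says cookies with longer <code>path</code> attributes are listed first):</p>

```python
def order_cookies(cookies):
    """Order cookies for the Cookie request header: per RFC 6265,
    cookies with longer path attributes come first, so a cookie
    scoped to /app/ precedes one scoped to / when requesting
    /app/login."""
    return sorted(cookies, key=lambda c: len(c["path"]), reverse=True)

cookies = [
    {"name": "MYSESSIONID", "value": "4GRc4jiKNeQIfsqh2:xx01", "path": "/"},        # genuine
    {"name": "MYSESSIONID", "value": "NDnQrZ6JsMHyJTBCw8n:xx01", "path": "/app/"},  # set via XSS
]
header = "; ".join(f"{c['name']}={c['value']}" for c in order_cookies(cookies))
# the attacker's cookie is listed first, so the application authenticates it
```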
<p>In contrast, the <code>domain</code> of a cookie does not govern precedence in a standardised way; it varies between browsers. From memory I think my cookie (with a more specific domain match) was sent up first by IE but second by Chrome and Firefox. It&#8217;s not something you want to rely on. Neither could I overwrite the cookie because for that to happen the name, path and domain must match. That would mean having to change both attributes from their defaults but in this case I could only change one. This is because I&#8217;d need a second semi-colon to set a second attribute and in doing so, using the encoding trick above, the first attribute would be spoilt, e.g. I&#8217;d get</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/<span style="color: red;">&amp;#13</span>;domain=.example.com";</code></p>
<p>Developing this proof-of-concept for this specific injection point was quite fiddly and took some persistence but it was worth it. For all of their filtering &#8211; and because they did not change the session cookie after authentication &#8211; this was a nice practical attack using an unauthenticated XSS. One take-away thought then: be sure to probe the XSS defences in full because you never know what might come back and how it could be of help!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/when-html-encoding-helped-xss/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SS-Hell: the Devil is in the Details</title>
		<link>http://www.exploresecurity.com/ss-hell-the-devil-is-in-the-details/</link>
		<comments>http://www.exploresecurity.com/ss-hell-the-devil-is-in-the-details/#comments</comments>
		<pubDate>Thu, 06 Nov 2014 12:58:05 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[Whitepaper]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SSL]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=474</guid>
		<description><![CDATA[Penetration test reports commonly contain mention of vulnerabilities in SSL/TLS (hereafter referred to as just SSL). In many cases, this is due to system administrators not understanding the details of these services&#8217; configuration and assuming that simply using SSL provides security. The issues identified during penetration tests are usually low in severity, as an average [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Penetration test reports commonly contain mention of vulnerabilities in SSL/TLS (hereafter referred to as just SSL). In many cases, this is due to system administrators not understanding the details of these services&#8217; configuration and assuming that simply <em>using</em> SSL provides security. The issues identified during penetration tests are usually low in severity, as an average attacker would find them difficult to exploit, but they should be taken seriously to ensure that data in transit is properly secured. If something&#8217;s worth doing, it&#8217;s worth doing well. This <a href="https://research.nccgroup.com/wp-content/uploads/2020/07/ss-hell-v11-final.pdf">whitepaper</a>, co-written with <a href="https://twitter.com/wilgeralexander">Will Alexander</a>, discusses how organisations can avoid SSL issues commonly found during penetration tests, ensure that data in transit is properly secured and ultimately instil in users a sense of confidence that their information is adequately protected.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/ss-hell-the-devil-is-in-the-details/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Thoughts on Testing for POODLE</title>
		<link>http://www.exploresecurity.com/thoughts-on-testing-for-poodle/</link>
		<comments>http://www.exploresecurity.com/thoughts-on-testing-for-poodle/#comments</comments>
		<pubDate>Sat, 01 Nov 2014 00:09:21 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[POODLE]]></category>
		<category><![CDATA[SSLv3]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=311</guid>
		<description><![CDATA[I recently did an internal presentation on POODLE &#8211; what the flaw is and how to test for it &#8211; and a version of the slides can be found here. Obviously much has been written about the vulnerability, its mitigations and what the future holds. What follows expands on the testing aspect of the presentation, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I recently did an internal presentation on POODLE &#8211; what the flaw is and how to test for it &#8211; and a version of the slides can be found <a href='http://www.slideshare.net/exploresecurity/ss-lv3-and-poodle'>here</a>. Obviously much <a href='https://www.imperialviolet.org/2014/10/14/poodle.html'>has</a> <a href='http://blog.cryptographyengineering.com/2014/10/attack-of-week-poodle.html'>been</a> <a href='http://blogs.opera.com/security/2014/10/security-changes-opera-25-poodle-attacks/'>written</a> about the vulnerability, its mitigations and what the future holds. What follows expands on the testing aspect of the presentation, with a few pointers on manual checks if you feel you need to verify or clarify &#8211; and possibly even add to &#8211; what the tools are telling you.<span id="more-311"></span></p>
<h3>SSLv3 support with block ciphers (in CBC mode) supported</h3>
<p>All SSL/TLS tools check for SSLv3 support. You can do this manually with:</p>
<p><code>openssl s_client -ssl3 -connect &lt;host&gt;:443</code></p>
<p>This confirms SSLv3 support but obviously it only reports one cipher suite. This is where the tools come in. However, remember that POODLE only affects block ciphers in cipher block chaining (CBC) mode (which I&#8217;ll just abbreviate to &#8220;block ciphers&#8221; now, as I believe all the block ciphers that can run under SSLv3 operate in CBC mode). So review the list of supported cipher suites: if the server only supports RC4 ciphers then don&#8217;t report POODLE as an issue (instead report SSLv3, which is still old and creaky, and RC4!).</p>
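<p>Triaging a scanner&#8217;s cipher suite list can be reduced to a simple split (the suite names below are illustrative OpenSSL-style names, and the classification is deliberately simplified &#8211; under SSLv3, anything that isn&#8217;t RC4 or NULL is a CBC block cipher):</p>

```python
def split_sslv3_suites(suites):
    """Split an SSLv3 cipher suite list into POODLE-relevant CBC
    block ciphers and RC4 stream ciphers. Under SSLv3 every block
    cipher runs in CBC mode, so non-RC4, non-NULL suites are CBC."""
    cbc = [s for s in suites if "RC4" not in s and "NULL" not in s]
    rc4 = [s for s in suites if "RC4" in s]
    return cbc, rc4

cbc, rc4 = split_sslv3_suites(["AES128-SHA", "DES-CBC3-SHA", "RC4-SHA", "RC4-MD5"])
# report POODLE only if cbc is non-empty; report RC4 separately if rc4 is
```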
<h3>Server preference</h3>
<p>Even if the server supports block ciphers, it may <em>prefer</em> RC4-based ciphers, so the likelihood of exploitation is going to be negligible. I recently wrote up <a href='http://www.exploresecurity.com/testing-for-cipher-suite-preference/'>what to do</a> if you find that your tools disagree over which cipher suite is preferred.</p>
<h3>TLS_FALLBACK_SCSV</h3>
<p>I also recently <a href='http://www.exploresecurity.com/poodle-and-the-tls_fallback_scsv-remedy/'>posted</a> in detail about how the TLS_FALLBACK_SCSV remediation worked. In short it&#8217;s a signal to the server from the client that it is connecting with a lower protocol version than it supports. If the server supports something better, then that should have been negotiated during the earlier connection attempts, so the server can abort the connection as being suspicious.</p>
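<p>Server-side, the decision boils down to something like this (a simplified sketch of the rule later standardised in RFC 7507, using (major, minor) version tuples such as (3, 0) for SSLv3 and (3, 3) for TLSv1.2):</p>

```python
def handle_client_hello(client_version, server_max_version, scsv_present):
    """If the client signals TLS_FALLBACK_SCSV while offering a lower
    protocol version than the server's best, the connection looks like
    a downgrade attack and is rejected with inappropriate_fallback."""
    if scsv_present and client_version < server_max_version:
        return "alert: inappropriate_fallback"
    return "continue handshake"

# an SSLv3 fallback connection to a TLSv1.2-capable server is refused
print(handle_client_hello((3, 0), (3, 3), scsv_present=True))
```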
<p>With the release of OpenSSL v1.0.1j it&#8217;s easy to test for TLS_FALLBACK_SCSV support:</p>
<p><code>openssl s_client -ssl3 -fallback_scsv -connect &lt;host&gt;:443</code></p>
<p>This is telling the server that I&#8217;d like to connect using SSLv3 &#8211; but grudgingly. I&#8217;m using <code>-ssl3</code> in the context of POODLE but TLS_FALLBACK_SCSV offers wider protection than this (checking support for it will continue to be worthwhile long after we&#8217;ve forgotten about POODLE). Below you can see the fake cipher suite value advertising the fallback (which Wireshark couldn&#8217;t decode into something meaningful as it didn&#8217;t recognise the new cipher suite value 0x5600 at the time):</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/tls_fallback_scsv.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/tls_fallback_scsv.png" alt="tls_fallback_scsv" width="712" height="98" class="aligncenter size-full wp-image-313" /></a></p>
<p>If the OpenSSL connection succeeds as usual (as shown below &#8211; a cipher suite has been chosen) then the server doesn&#8217;t support TLS_FALLBACK_SCSV.</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/openssl_connects.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/openssl_connects.png" alt="openssl_connects" width="590" height="99" class="aligncenter size-full wp-image-314" /></a></p>
<p>If the connection fails with the new <code>inappropriate_fallback</code> alert then the server does support TLS_FALLBACK_SCSV:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/tls_fallback_alert.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/tls_fallback_alert.png" alt="tls_fallback_alert" width="1182" height="78" class="aligncenter size-full wp-image-315" /></a></p>
<p>Enabling TLS_FALLBACK_SCSV is all very well but it does depend on client support too &#8211; so if the server has SSLv3 enabled with block ciphers supported (and preferred) then it&#8217;s not out of the woods. A few browsers do already support it &#8211; Chrome 33 (Feb 2014), Firefox 35 (Jan 2015), Opera 25 (Oct 2014) &#8211; so it&#8217;s better than nothing, and of course support for it among browsers will only improve. Acknowledging TLS_FALLBACK_SCSV support is therefore worthwhile &#8211; both today and in the future. A client may even feel aggrieved if they&#8217;ve gone to the trouble of enabling TLS_FALLBACK_SCSV but get no credit for it in their pentest report!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/thoughts-on-testing-for-poodle/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Testing for Cipher Suite Preference</title>
		<link>http://www.exploresecurity.com/testing-for-cipher-suite-preference/</link>
		<comments>http://www.exploresecurity.com/testing-for-cipher-suite-preference/#comments</comments>
		<pubDate>Fri, 31 Oct 2014 22:00:01 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[POODLE]]></category>
		<category><![CDATA[SSL]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=296</guid>
		<description><![CDATA[It&#8217;s often important to know which SSL/TLS cipher suite is preferred by a server to alter the risk rating of a particular issue. For POODLE, if the server prefers RC4 ciphers over SSLv3 connections then it&#8217;s very unlikely that a connection will be vulnerable to POODLE. Similarly, if a server prefers block ciphers then reporting [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>It&#8217;s often important to know which SSL/TLS cipher suite is preferred by a server to alter the risk rating of a particular issue. For <a href='http://www.exploresecurity.com/thoughts-on-testing-for-poodle/'>POODLE</a>, if the server prefers RC4 ciphers over SSLv3 connections then it&#8217;s very unlikely that a connection will be vulnerable to POODLE. Similarly, if a server prefers block ciphers then reporting RC4 support should be appropriately adjusted. Occasionally tools conflict over which cipher suite is preferred so I thought I&#8217;d write up how to resolve this manually in the spirit of the <a href='http://www.exploresecurity.com/ssltls-checklist-for-pentesters/'>SSL/TLS manual cheatsheet</a>.<span id="more-296"></span></p>
<h3>How is a cipher suite chosen?</h3>
<p>Quick overview: the connection starts with a Client Hello in which the client advertises which cipher suites it supports in order of preference (most preferred first). This list will be tailored according to any local configuration, as well as to the SSL/TLS protocol version the client is hoping to use, which is also advertised in the Client Hello:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/client_hello.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/client_hello.png" alt="client_hello" width="527" height="289" class="aligncenter size-full wp-image-300" /></a></p>
<p>The protocol version is the highest the client supports &#8211; unless the browser has gone down the fallback route, which is the mechanism <a href='http://www.exploresecurity.com/poodle-and-the-tls_fallback_scsv-remedy/'>abused by POODLE</a> to make the SSLv3 attack more practical. Cipher suites can vary with protocol version simply because older protocols can&#8217;t always meet the needs of newer cipher suites. For example, only TLSv1.2 supports cipher suites that use SHA-256 for message integrity.</p>
<p>In receipt of the Client Hello, the server now has two options: it can either (a) opt for the client&#8217;s most preferred cipher suite that it too supports, or (b) ignore the client&#8217;s preference and opt for the cipher suite nearest the top of its <em>own</em> list that the client supports. For example, say the client has sent up a list of cipher suites which we&#8217;ll just call 1,2,3,4,5,6,7 and the server supports 8,3,4,2,6. In the case of (a) the server&#8217;s order is unimportant and it chooses 2; in the case of (b) the server chooses 3. The choice the server makes is returned in the Server Hello message:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/server_hello.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/server_hello.png" alt="server_hello" width="490" height="147" class="aligncenter size-full wp-image-302" /></a></p>
<p>Something to note in the above example is that, in the case of the server having a preference, you would never find out that cipher suite 8 is in fact the preferred choice because it isn&#8217;t supported by the client and thus it&#8217;s never offered in the Client Hello. Server preference is thus not only dictated by the server: it depends on what the client knows too.</p>
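<p>As a rough sketch, the two server behaviours can be expressed in a few lines of Python (the numbers stand in for real cipher suite identifiers, as in the example above):</p>

```python
def server_choice(client_order, server_order, server_has_preference):
    """Pick the cipher suite a server would return in its Server Hello."""
    common = [c for c in client_order if c in server_order]
    if not common:
        return None  # handshake failure: no shared suite
    if server_has_preference:
        # (b) first entry in the *server's* list that the client offered
        return next(c for c in server_order if c in client_order)
    # (a) honour the client's ordering
    return common[0]

client = [1, 2, 3, 4, 5, 6, 7]
server = [8, 3, 4, 2, 6]
print(server_choice(client, server, server_has_preference=False))  # 2
print(server_choice(client, server, server_has_preference=True))   # 3
```

<p>Note that suite 8 never appears in either answer, for exactly the reason given above: the client never offers it.</p>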
<h3>Conflicting results</h3>
<p>On my last test I had a conflict between SSLyze and SSLscan over which cipher suite was preferred over SSLv3. SSLyze thought it was RC4-SHA (I&#8217;m using the OpenSSL notation here)&#8230;</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/sslyze.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/sslyze.png" alt="sslyze" width="1252" height="145" class="aligncenter size-full wp-image-297" /></a></p>
<p>&#8230;whereas SSLscan went for DES-CBC3-SHA:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/sslscan.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/sslscan-300x45.png" alt="sslscan" width="300" height="45" class="aligncenter size-medium wp-image-298" /></a></p>
<h3>Manually testing for preference</h3>
<p>To resolve this was simple. I ran:</p>
<p><code>openssl s_client -ssl3 -connect &lt;host&gt;:443</code></p>
<p>OpenSSL reported that DES-CBC3-SHA had been chosen. Just to be sure &#8211; which I explain below &#8211; I let the two cipher suites in question compete with one another using the <code>-cipher</code> switch, which allows you to put specific cipher suites in the Client Hello. OpenSSL orders them exactly how you list them according to the scheme set out in <code>man ciphers</code>. So I ran:</p>
<p><code>openssl s_client -ssl3 -cipher DES-CBC3-SHA:RC4-SHA -connect &lt;host&gt;:443</code></p>
<p>and then switched the order of the cipher suites:</p>
<p><code>openssl s_client -ssl3 -cipher RC4-SHA:DES-CBC3-SHA -connect &lt;host&gt;:443</code>.</p>
<p>In both cases DES-CBC3-SHA was chosen so I was confident that SSLscan was right.</p>
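<p>If you end up doing this a lot, the swap test is easy to script. Here&#8217;s a rough Python sketch &#8211; the host and suite names below are placeholders, and the parsing relies only on the <code>Cipher is</code> line that <code>s_client</code> prints:</p>

```python
import re
import subprocess

def parse_chosen(s_client_output):
    """Pull the negotiated suite out of s_client's 'Cipher is' line."""
    match = re.search(r"Cipher is (\S+)", s_client_output)
    if not match or match.group(1) == "(NONE)":
        return None
    return match.group(1)

def chosen_cipher(host, ciphers, protocol="-ssl3"):
    """Offer an explicit cipher ordering and return the server's choice."""
    cmd = ["openssl", "s_client", protocol,
           "-cipher", ":".join(ciphers), "-connect", host + ":443"]
    result = subprocess.run(cmd, input="", capture_output=True, text=True)
    return parse_chosen(result.stdout)

# Run twice with the order swapped; the same answer both times
# means the server has a preference, e.g.:
# chosen_cipher("host.example.com", ["DES-CBC3-SHA", "RC4-SHA"])
# chosen_cipher("host.example.com", ["RC4-SHA", "DES-CBC3-SHA"])
```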
<h3>Why did SSLyze get it wrong this time?</h3>
<p>Up to now I had tried SSLyze versions 0.9 and 1.0dev and both had reported RC4-SHA as the preferred cipher suite. I then tried an earlier 0.6beta version and found it correctly reported DES-CBC3-SHA. Rather than delve into the code I first took the easy option and fired up Wireshark while running one of the later versions of SSLyze. When enumerating the supported cipher suites, I could see that DES-CBC3-SHA was tested individually; however, when it came to checking for preference, DES-CBC3-SHA was left out of the list in the Client Hello. Obviously the server couldn&#8217;t choose it in this case, hence the preference was misreported. I <a href='https://github.com/nabla-c0d3/sslyze/issues/10'>reported</a> this as a bug and Alban Diquet explained that:</p>
<blockquote><p>The reason why DES-CBC3-SHA isn&#8217;t sent within the preference test is that specific servers will not reply at all if the client hello is larger than 255 bytes (due to a bug in a specific brand of load balancers). To reduce the size of the hello, I had to disable some stuff including specific cipher suites.</p></blockquote>
<p>In this case the server only supported 3 cipher suites over SSLv3 so this misidentification could have been avoided. And this got me thinking&#8230;</p>
<h3>Algorithm for testing preference</h3>
<p>For each supported SSL/TLS protocol version, this is my version 0.1 of a method a tool could use to work out cipher suite preference:</p>
<ol>
<li>Determine which cipher suites are supported individually (i.e. repeatedly send a Client Hello with just one cipher suite and see if it&#8217;s accepted).</li>
<li>Once you know which suites are supported, send them all up in one Client Hello and see which one is picked. If you&#8217;re worried about the buggy load balancers mentioned above then use a subset for now.</li>
<li>If the chosen cipher suite is the one that was at the top of the list then there are two alternative explanations: either (a) the server picked the client&#8217;s preferred suite as it has no preference of its own, or (b) the server really does prefer that cipher suite and it just happened to be at the top. (This is why I ran more than one test above.) So repeat the test in step 2, this time changing the most preferred cipher suite at the top of the order. If the same cipher suite is chosen then it&#8217;s a case of (b) and the server definitely has a preference; otherwise, the first cipher suite should be chosen and it&#8217;s a case of (a) where the server is happy to be guided by the client&#8217;s preference<sup>1</sup>.</li>
<li>If the cipher suite list has been cut short to appease buggy load balancers, repeat step 2 with the next set of cipher suites. If a preference has been expressed so far, that cipher suite should be included with the next set to allow it to compete.</li>
<li>If a preference has been found and you really wanted to go the whole hog, you could determine the order in full by starting again at step 2, missing out the cipher suite previously identified as preferred.</li>
</ol>
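<p>Steps 2 and 3 translate into something like the following Python sketch. The <code>handshake()</code> callback here is hypothetical &#8211; imagine it wrapping <code>openssl s_client -cipher</code> and returning whichever suite the Server Hello names:</p>

```python
def detect_preference(supported, handshake):
    """Decide whether a server enforces its own cipher suite order.

    `supported` is the list of suites found in step 1; `handshake(order)`
    is assumed to offer exactly that ordering in a Client Hello and
    return the suite the server selects.
    """
    first = handshake(supported)  # step 2
    if first != supported[0]:
        # The client's top choice was ignored: the server has its own order.
        return ("server preference", first)
    # Step 3: demote the winner to the bottom of the list and try again.
    second = handshake(supported[1:] + [first])
    if second == first:
        return ("server preference", first)   # case (b)
    return ("client preference", None)        # case (a)
```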
<p>I put some of this to Alban Diquet (as part of the bug report) and he replied &#8220;yes I thought of adding the exact check you described but that&#8217;s pretty much at the bottom of my TODO list&#8221;. I think he was referring to step 3 but, anyway, if you ever have conflicting output from your tools over cipher suite preference, hopefully this posting will help you to resolve the issue.</p>
<p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;</p>
<p><sup>1</sup> Actually, there is a possibility of &#8220;no preference&#8221; being misreported if you consider a server that supports cipher suite &#8220;wildcards&#8221; in such a way that there is no preference within a set of cipher suites that match a wildcard. I don&#8217;t think any popular implementation features this (hence the footnote) but for the sake of completeness imagine a server that prefers AES-* then RC4-*. The test tool sends up AES-SHA, AES-MD5, RC4-SHA, RC4-MD5 and AES-SHA is chosen. As per step 3, the tool then sends up AES-MD5, RC4-SHA, RC4-MD5, AES-SHA. This time AES-MD5 is chosen, giving the illusion of no server preference, but in fact the server does have a preference &#8211; it&#8217;s just by groups. To cover this, if no server preference has been detected after step 3, repeat step 2, rotating the cipher suite at the top of the list each time; if at any point the cipher suite selected is <em>not</em> the first on the list then the server <em>does</em> have a preference. Admittedly this could add a fair bit of overhead!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/testing-for-cipher-suite-preference/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Testing for TCP/IP Timestamps</title>
		<link>http://www.exploresecurity.com/testing-for-tcp-ip-timestamps/</link>
		<comments>http://www.exploresecurity.com/testing-for-tcp-ip-timestamps/#comments</comments>
		<pubDate>Wed, 08 Oct 2014 22:56:29 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[TCP timestamps]]></category>
		<category><![CDATA[TCP/IP timestamps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=261</guid>
		<description><![CDATA[It always used to be a stock joke in my old workplace that if you were having a tough time finding issues in a pentest then you could always rely on &#8220;TCP/IP timestamps&#8221;. Recently I did a re-test (based on another company&#8217;s report) that included this issue and found that it&#8217;s easy for this to [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>It always used to be a stock joke in my old workplace that if you were having a tough time finding issues in a pentest then you could always rely on &#8220;TCP/IP timestamps&#8221;. Recently I did a re-test (based on another company&#8217;s report) that included this issue and found that it&#8217;s easy for this to be a false positive. I thought I&#8217;d write up this finding &#8211; as much for the journey I took through Nessus, Nmap, hping and Wireshark as for the result itself.<span id="more-261"></span></p>
<h3>What are these timestamps anyway?</h3>
<p>Timestamps are an optional addition to the TCP layer to provide information on round-trip times and to help with sequencing &#8211; see <a href='http://www.ietf.org/rfc/rfc1323.txt'>RFC 1323</a>. The side-effect of supporting TCP timestamps (I&#8217;ll drop the IP now) is that in certain situations the uptime of the server can be estimated. The most likely impact of this is that an attacker could try to determine the host&#8217;s patch status as certain updates require a reboot.</p>
<p>The report I was re-testing had TCP timestamps as an issue &#8211; and you could tell it was a straight Nessus finding. I decided that, if timestamps were indeed enabled, I would at least try to find the uptime and make some comment on it.</p>
<h3>Testing with Nessus and Nmap</h3>
<p>I ran a Nessus scan and sure enough it reported &#8220;TCP/IP Timestamps Supported&#8221; but it didn&#8217;t state any estimated uptime. I then ran Nmap with -O and -v but it didn&#8217;t report anything either. In Nmap&#8217;s <a href='http://nmap.org/book/osdetect-usage.html'>help for OS detection</a> it states that:</p>
<blockquote><p>The uptime guess is labeled a &#8220;guess&#8221; because various factors can make it completely inaccurate. Some operating systems do not start the timestamp counter at zero, but initialize it with a random value, making extrapolation to zero meaningless. Even on systems using a simple counter starting at zero, the counter eventually overflows and wraps around. With a 1,000 Hz counter increment rate, the counter resets to zero roughly every 50 days. So a host that has been up for 102 days will appear to have been up only two days. Even with these caveats, the uptime guess is accurate much of the time for most operating systems, so it is <b>printed when available, but only in verbose mode. The uptime guess is omitted if the target gives zeros or no timestamp options in its SYN/ACK packets, or if it does not reply at all. The line is also omitted if Nmap cannot discern the timestamp increment rate or it seems suspicious (like a 30-year uptime).</b></p></blockquote>
<p>I&#8217;ve bolded the important bits: firstly, remember to add -v to display any timestamp information; secondly, there are a number of reasons why Nmap might have omitted it. I came across this <a href='http://maciejkola.wordpress.com/security-tips-2/why-uptime-can-be-dangerous/'>article</a> which talks about adding the -d switch to debug the time calculation. The output from this article was:</p>
<p><code>root@tester# nmap -d -v -O victim.com<br />
Starting Nmap 5.51 ( http://nmap.org ) at 2012-09-28 10:07 EDT<br />
Initiating OS detection (try #1) against 1.2.3.4<br />
OS detection timingRatio() == (1348841228.595 - 1348841228.095) * 1000 / 500 == 1.000<br />
Retrying OS detection (try #2) against 1.2.3.4<br />
OS detection timingRatio() == (1348841231.064 - 1348841230.563) * 1000 / 500 == 1.002<br />
...[and more of the same]...</code></p>
<p>In fact this took me down a dead-end. After some head scratching I realised that those timingRatio() lines don&#8217;t relate to the uptime calculation. Timestamp values are whole numbers and there&#8217;s no time unit attached to them. According to the RFC the &#8220;timestamp clock&#8230;must be at least approximately proportional to real time&#8221; and it recommends &#8220;a timestamp clock frequency in the range 1 ms to 1 sec per tick&#8221;. In the output above 1348841228.595, for example, is in fact an Epoch time (seconds since 1 Jan 1970) and is equal to Fri, 28 Sep 2012 14:07:08 GMT. Since GMT=EDT+4 you can see this time is exactly when the scan was run. A comment in Nmap&#8217;s source code (osscan2.cc) for the function timingRatio() explains the debug line: &#8220;Compute the ratio of amount of time taken between sending 1st TSEQ  probe and 1st ICMP probe compared to the amount of time it should have taken. Ratios far from 1 can cause bogus results&#8221;. So although Nmap&#8217;s online help was giving reasons why the uptime wasn&#8217;t being reported, debugging wasn&#8217;t telling me why.</p>
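<p>A quick conversion confirms this &#8211; reading 1348841228 as Epoch seconds gives exactly the moment the scan ran, not a timestamp tick count:</p>

```python
from datetime import datetime, timezone

# 1348841228.595 from the timingRatio() debug line, read as Epoch seconds
ts = datetime.fromtimestamp(1348841228, tz=timezone.utc)
print(ts.strftime("%a, %d %b %Y %H:%M:%S GMT"))  # Fri, 28 Sep 2012 14:07:08 GMT
```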
<h3>Manual testing: hping and Wireshark</h3>
<p>One thing I did get from the article above was how to fire off a packet to elicit a TCP timestamp response:</p>
<p><code>hping3 www.example.com -p 80 -S --tcp-timestamp -c 1<br />
&nbsp;&nbsp;hping3 &nbsp;&nbsp;a network packet generator and analyser<br />
&nbsp;&nbsp;-p &nbsp;&nbsp;an open port on the target<br />
&nbsp;&nbsp;-S &nbsp;&nbsp;set the SYN flag<br />
&nbsp;&nbsp;--tcp-timestamp &nbsp;&nbsp;add the TCP timestamp option<br />
&nbsp;&nbsp;-c 1 &nbsp;&nbsp;stop after receiving 1 response packet</code></p>
<p>Top tip: if you&#8217;re using hping in a VM, make sure the network interface isn&#8217;t set to NAT on a host that has TCP timestamps disabled (like my Win7 box) &#8211; it will waste at least 20 minutes of your life! Anyway, the response to my hping included:</p>
<p><code>TCP timestamp: tcpts=0</code></p>
<p>Now I had a theory. The server was responding with a TCP timestamp value, which made Nessus report it, but Nmap didn&#8217;t report the uptime because the value was 0. Of course there is a chance that the timestamp <em>could</em> be 0 but certainly not for two consecutive replies. If you do get a non-zero value, by the way, you can run 2 hpings separated by &#8220;sleep&#8221; to calculate the tick rate and then estimate the uptime:</p>
<p><code>hping3 www.example.com -p 80 -S --tcp-timestamp -c 1; sleep 5; hping3 www.example.com -p 80 -S --tcp-timestamp -c 1</code></p>
<p>Taking <code>tcpts[0]</code> to be the first timestamp reply and <code>tcpts[1]</code> to be the second, the uptime in seconds is:</p>
<p><code>tcpts[0] / ( ( tcpts[1] - tcpts[0] ) / 5 )</code></p>
<p>Or in words: subtract the first timestamp value from the second, divide by 5 and then divide that result into the first timestamp.</p>
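<p>The same arithmetic as a Python sketch, with made-up timestamp replies from a hypothetical host whose timestamp clock ticks at 100&nbsp;Hz:</p>

```python
def estimate_uptime(ts0, ts1, interval=5):
    """Estimate uptime in seconds from two TCP timestamp values
    sampled `interval` seconds apart."""
    ticks_per_second = (ts1 - ts0) / interval
    return ts0 / ticks_per_second

# hypothetical replies: 500 ticks apart over 5 seconds = 100 Hz clock
print(estimate_uptime(8640000, 8640500))  # 86400.0, i.e. up for one day
```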
<p>For completeness I thought I&#8217;d find the packets in Wireshark:<br />
<a href="http://www.exploresecurity.com/wp-content/uploads/2014/10/null_timestamp.png"><img src="http://www.exploresecurity.com/wp-content/uploads/2014/10/null_timestamp.png" alt="null_timestamp" width="292" height="352" class="aligncenter size-full wp-image-267" /></a><br />
The timestamp option in a TCP packet contains two values: <code>TSval</code> (the source&#8217;s time) and <code>TSecr</code> (an echo of the time the destination last sent). The best filter I found to look for positive timestamps was <code>ip.src == &lt;IP_of_target&gt; &#038;&#038; tcp.options.timestamp.tsval &#038;&#038; <span style="white-space: nowrap;">!(tcp.options.timestamp.tsval == 0)</span></code>. The second part ensures that a <code>TSval</code> value is there since the third will return TRUE if the field isn&#8217;t there as well as when it&#8217;s non-zero. In this case, the filter returned no packets, as expected.</p>
<h3>Back to Nessus</h3>
<p>The following is a compressed extract of the relevant code from the Nessus plugin <em>tcp_timestamps.nasl</em> (version 1.19, latest at time of writing):</p>
<p><code>function test(seq) { ...<br />
tsval = tcp_extract_timestamp(pkt["options"]);<br />
if (isnull(tsval)) return NULL;<br />
return make_list(ms, tsval);<br />
}<br />
...<br />
v1 = test(seq:1);<br />
sleep(1);	# Bigger sleep values make the test more precise<br />
v2 = test(seq: 2);<br />
dseq = v2[1] - v1[1];<br />
# Disable the uptime computation (unreliable)<br />
if ( TRUE || dseq == 0 || v2[1] < 0)<br />
{<br />
  security_note();<br />
}<br />
[else calculate and print uptime]</code></p>
<p>So Nessus reported the issue just because a <code>TSval</code> field was returned (it wasn&#8217;t NULL), hence the false positive. For Nessus, an individual timestamp of 0 isn&#8217;t a concern (which you could argue is justified) but if the difference between two timestamps is 0 then no uptime is computed. However, you can see that the <code>TRUE</code> that precedes this check effectively disables the uptime calculation completely, as the comment notes.</p>
<h3>Conclusion</h3>
<p>If Nessus reports TCP timestamps, it might not be a valid finding &#8211; and even if it is you won&#8217;t get an uptime; if Nmap doesn&#8217;t report it, there&#8217;s probably a good reason. To be absolutely sure, hping can be used for a definitive test (along with Wireshark if you like to see your packets raw). My final word has to be this: I can&#8217;t believe I&#8217;ve spent this much time on TCP timestamps.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/testing-for-tcp-ip-timestamps/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>The Small Print for OpenSSL legacy_renegotiation</title>
		<link>http://www.exploresecurity.com/the-small-print-for-openssl-legacy_renegotiation/</link>
		<comments>http://www.exploresecurity.com/the-small-print-for-openssl-legacy_renegotiation/#comments</comments>
		<pubDate>Mon, 29 Sep 2014 23:26:37 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[SSL/TLS]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SSL]]></category>
		<category><![CDATA[TLS]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=245</guid>
		<description><![CDATA[The other day my attention was drawn to a switch in OpenSSL called -legacy_renegotiation. I pulled up the built-in help for s_client and, sure enough, there it was. So I trawled back through the release notes and it looked to have been there since version 0.9.8m. I couldn&#8217;t believe that I hadn&#8217;t spotted this before: [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The other day my attention was drawn to a switch in OpenSSL called <code>-legacy_renegotiation</code>. I pulled up the built-in help for <code>s_client</code> and, sure enough, there it was. So I trawled back through the release notes and it looked to have been there since version 0.9.8m. I couldn&#8217;t believe that I hadn&#8217;t spotted this before: it looked like the perfect way to test for insecure renegotiation without the faff of having to recompile OpenSSL or use an older version. But after a bit of testing this proved to be a red herring&#8230;<span id="more-245"></span></p>
<p>The first thing I wanted to do was prove the negative &#8211; that is, if the <code>-legacy_renegotiation</code> switch did what it seemed to promise, then without it renegotiation should fail. Using OpenSSL 1.0.1i I connected to a server that was missing the secure renegotiation patch and ran the test (more information <a href="http://www.exploresecurity.com/ssltls-checklist-for-pentesters/">here</a>):</p>
<p><code># openssl s_client -connect insecure.example.com:443<br />
...<br />
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA<br />
Server public key is 2048 bit<br />
<span style="color: #ff0000;">Secure Renegotiation IS NOT supported</span><br />
Compression: NONE<br />
...<br />
HEAD / HTTP/1.0<br />
R<br />
RENEGOTIATING<br />
depth=2 C = US, O = GeoTrust Inc., CN = GeoTrust Primary Certification Authority<br />
verify error:num=20:unable to get local issuer certificate<br />
<span style="color: #ff0000;">verify return:0</span></code></p>
<p>It worked. Wait a minute, that shouldn&#8217;t have happened! So I tried OpenSSL 1.0.1e and then another vulnerable server &#8211; and it always connected. After some digging around I found an <a href="https://www.openssl.org/docs/ssl/SSL_CTX_set_options.html#Patched_OpenSSL_client_and_unpat">article</a> on the OpenSSL site. It stated that:</p>
<blockquote><p>If the option SSL_OP_LEGACY_SERVER_CONNECT or SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION is set then initial connections and renegotiation between patched OpenSSL clients and unpatched servers succeeds. If neither option is set then initial connections to unpatched servers will fail.</p>
<p>The option SSL_OP_LEGACY_SERVER_CONNECT is currently set by default even though it has security implications: otherwise it would be impossible to connect to unpatched servers (i.e. all of them initially) and this is clearly not acceptable. Renegotiation is permitted because this does not add any additional security issues: during an attack clients do not see any renegotiations anyway.</p></blockquote>
<p>There was the small print. So as far as <code>s_client</code> is concerned <code>-legacy_renegotiation</code> makes no difference by default because it will renegotiate with insecure servers anyway. To double-check that <code>-legacy_renegotiation</code> and <code>SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION</code> were in fact related, I took a quick look at the source code for <code>s_client.c</code> and the following lines shone out:</p>
<p><code>else if (strcmp(*argv,"-legacy_renegotiation") == 0)<br />
&nbsp;&nbsp;off|=SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION;<br />
else if (strcmp(*argv,"-legacy_server_connect") == 0)<br />
&nbsp;&nbsp;{ off|=SSL_OP_LEGACY_SERVER_CONNECT; }<br />
else if (strcmp(*argv,"-no_legacy_server_connect") == 0)<br />
&nbsp;&nbsp;{ clr|=SSL_OP_LEGACY_SERVER_CONNECT; }</code></p>
<p>As expected, the first line sees <code>-legacy_renegotiation</code> controlling <code>SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION</code>, which we now know has no effect if <code>SSL_OP_LEGACY_SERVER_CONNECT</code> is set. The code also suggests that <code>SSL_OP_LEGACY_SERVER_CONNECT</code> can be controlled with switches, which aren&#8217;t listed in the built-in help. Using the switch <code>-no_legacy_server_connect</code>, as the OpenSSL doc states, stops you from connecting to the server at all:</p>
<p><code># openssl s_client -no_legacy_server_connect -connect insecure.example.com:443<br />
CONNECTED(00000003)<br />
<span style="color: #ff0000;">140264951338664:error:1412F152:SSL routines:SSL_PARSE_SERVERHELLO_TLSEXT:unsafe legacy renegotiation disabled:t1_lib.c:1732:</span><br />
140264951338664:error:140920E3:SSL routines:SSL3_GET_SERVER_HELLO:parse tlsext:s3_clnt.c:1053:<br />
---<br />
no peer certificate available<br />
---<br />
No client certificate CA names sent<br />
---<br />
SSL handshake has read 63 bytes and written 7 bytes<br />
---<br />
<span style="color: #ff0000;">New, (NONE), Cipher is (NONE)</span></code></p>
<p>If you skimmed over the OpenSSL quote above, you may now be thinking <i>&#8220;why is it so black and white; why can&#8217;t I connect to an unpatched server but s_client refuse renegotiation?&#8221;</i> As the OpenSSL doc notes &#8211; and if you think back to the attack <a href="http://www.g-sec.lu/practicaltls.pdf">details</a> &#8211; the victim client doesn&#8217;t actually initiate a renegotiation, it&#8217;s all the attacker&#8217;s doing. OpenSSL isn&#8217;t leaving you vulnerable by letting you renegotiate to unpatched servers, it&#8217;s the very act of connecting to them that leaves you exposed. That&#8217;s where the <code>-no_legacy_server_connect</code> switch comes in: it gives you the option of terminating connections to unpatched servers if you don&#8217;t want to take any risks (and you can understand why they&#8217;ve not made that the default). From a pentest viewpoint, <code>&#8209;legacy_renegotiation</code> should be avoided when testing for insecure renegotiation.</p>
<p>I pinged Ivan Ristic (of <a href="https://www.ssllabs.com/">SSL Labs</a> fame) about this for a sanity check, since he was nice enough to get in touch following the release of my <a href="http://www.exploresecurity.com/ssltls-checklist-for-pentesters/">cheatsheet</a>. (Quick trailer: it turns out that Ivan is planning to release some of the manual testing aspects from his book <a href="https://www.feistyduck.com/books/bulletproof-ssl-and-tls/">Bulletproof SSL and TLS</a> as freeware in the near future.) He agreed that <code>-legacy_renegotiation</code> was something of a red herring as far as manual testing using OpenSSL <code>s_client</code> was concerned &#8211; and I think that&#8217;s now going to make it into the next version of his book!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/the-small-print-for-openssl-legacy_renegotiation/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SQL Injection in Search Fields</title>
		<link>http://www.exploresecurity.com/sql-injection-in-search-fields/</link>
		<comments>http://www.exploresecurity.com/sql-injection-in-search-fields/#comments</comments>
		<pubDate>Thu, 25 Sep 2014 22:07:40 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL injection]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=234</guid>
		<description><![CDATA[A quick posting about a fun SQL injection I cracked last week (of course, it&#8217;s only when you&#8217;ve cracked them that they&#8217;re fun!). A colleague had found the classic sign of a problem &#8211; add a single quote and you get an error &#8211; but was having no luck doing anything more. I was getting [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A quick posting about a fun SQL injection I cracked last week (of course, it&#8217;s only when you&#8217;ve cracked them that they&#8217;re fun!). A colleague had found the classic sign of a problem &#8211; add a single quote and you get an error &#8211; but was having no luck doing anything more. I was getting nowhere with my test so I thought I&#8217;d take a look for a change of scene. The input field was in a search box so, for example, <code>search=keyword'</code> returned an error but <code>search=keyword''</code> was fine. Anything more exciting than that, however, such as <code>search=keyword' and '1'='1</code>, didn&#8217;t seem to work as expected: in this case, an error was returned instead of the same set of results that the normal <code>search=keyword</code> produced.<span id="more-234"></span></p>
<p>The first thing I did was to try to terminate the query as simply as possible with no funny business. So in went <code>search=keyword'--</code> but back came an error. It turned out that the injection point was inside a doubly nested query as <code>search=keyword'))--</code> worked, producing the same results as <code>search=keyword</code>. After a bit of faffing about it occurred to me that spaces might be the issue. So I tried <code>search=keyword'and'1'='1</code> (no spaces in there) and it worked! No error was returned &#8211; but it didn&#8217;t produce the same results as <code>search=keyword</code>, it returned no results at all. What produced the same results as <code>search=keyword</code> was <code>search=keyword'or'1'='1</code>. Okay, park that for now. I had found the main problem &#8211; and it was immediately clear what was going on.</p>
<p>With a developer&#8217;s hat on, what would you do if a user ran a search with multiple keywords? The obvious answer would be to split up the search terms with space as a delimiter, run a query on each one and then return all the results together. If that was true then <code>search=keyword' and '1'='1</code> was running a database query against three terms: <code>keyword'</code>, <code>and</code>, <code>'1'='1</code>. The first of these would fail (just like <code>search=keyword'</code> did), as would the last if it got that far. So next I tried <code>search=keyword'/**/and/**/'1'='1</code> using the inline SQL comment characters and got the same result. Again, using AND returned no results but using OR was like a normal query with <code>search=keyword</code>. I had seen this kind of behaviour once before but I couldn&#8217;t remember what the context was, which is why I&#8217;ve written it down this time!</p>
<h3>AND vs OR</h3>
<p>In general, AND within a SQL statement (and thus in SQL injection too) is restrictive, narrowing the result set, whereas OR is inclusive, widening the result set. But, as with all SQL injection, it all depends on the underlying query. So what could be happening here?</p>
<p>Again, with the developer hat on, what else might you do with a user&#8217;s search terms? Well, it would be nice if you searched a little more widely, using them as stubs. In fact some of the SQL errors were giving this away (thanks, guys): <em>Incorrect syntax near &#8216;%&#8217;</em>. The % character is, of course, a wildcard used with LIKE. So when I searched for <code>keyword</code>, somewhere in the resulting query was <code>LIKE '%keyword%'</code>. This perfectly explains the AND vs OR behaviour&#8230;</p>
<p>When I injected <code>search=keyword'and'1'='1</code> the resulting query included <code>LIKE '%keyword'and'1'='1%'</code>. So the AND clause I&#8217;d added was always evaluating to FALSE and hence no results were returned. Whereas injecting <code>search=keyword'or'1'='1</code> produced <code>LIKE '%keyword'or'1'='1%'</code>. Even though one half of the OR clause was evaluating to FALSE, overall it returned TRUE when I got a positive hit on the keyword.</p>
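<p>The behaviour is easy to reproduce with an in-memory SQLite database. This is only a simulation of the hypothesised query &#8211; the real back end was a mystery &#8211; but it shows why AND starved the results while OR behaved like a plain search:</p>

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (name TEXT)")
db.execute("INSERT INTO items VALUES ('my keyword'), ('other')")

def search(term):
    # naive string concatenation, as the vulnerable site appeared to do
    sql = "SELECT name FROM items WHERE name LIKE '%" + term + "%'"
    return [row[0] for row in db.execute(sql)]

print(search("keyword"))            # ['my keyword']
print(search("keyword'and'1'='1"))  # [] -> the AND clause is always FALSE
print(search("keyword'or'1'='1"))   # ['my keyword'] -> OR rescues the hit
```

<p>In the injected cases the trailing <code>%'</code> attaches itself to the final <code>'1</code>, so the string comparison <code>'1'='1%'</code> is always FALSE and everything hinges on the <code>LIKE '%keyword'</code> half of the clause.</p>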
<p>Since the injection point was inside a doubly nested query and this was a black box test, I had no idea what the real query was, but this certainly made sense. I tried a few more injections to test the theory just for the hell of it:</p>
<ol>
<li>When I terminated the statement, AND and OR did their &#8220;usual&#8221; thing. Which is to say that <code>search=keyword'/**/and/**/1=1))--</code> produced the same result as <code>search=keyword</code> whereas <code>keyword'/**/or/**/1=1))--</code> produced lots of results. This is because I was now commenting out the final % along with the rest of the statement.</li>
<li>When I injected <code>search=keyword'and'1%'='1</code> I got the same results as if there had been no injection. This was the real proof. Now the resulting query would have included <code>LIKE '%keyword'and'1%'='1%'</code> so my AND clause evaluated to TRUE when I got a positive hit on the keyword.</li>
<li>Finally, for what it was worth, <code>search=word'and'1%'='1</code> produced the same result, showing that a % preceded the injection point.</li>
</ol>
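<p>The second and third checks above can be confirmed with the same kind of local sketch (again using sqlite3 and a hypothetical table, purely for illustration): supplying the trailing <code>%</code> yourself makes the injected comparison TRUE on both sides, so the query behaves as if nothing had been injected.</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT)")  # illustrative table
conn.execute("INSERT INTO products VALUES ('keyword')")

def search(term):
    # Same vulnerable LIKE '%...%' concatenation as in the article
    sql = f"SELECT name FROM products WHERE name LIKE '%{term}%'"
    return conn.execute(sql).fetchall()

# Becomes: name LIKE '%keyword' AND '1%'='1%'  -- the AND clause is now
# TRUE, so results are identical to a plain search for 'keyword'
print(search("keyword'and'1%'='1"))  # -> [('keyword',)]
# A stub still matches because the pattern's leading % survives:
# name LIKE '%word' AND '1%'='1%'
print(search("word'and'1%'='1"))     # -> [('keyword',)]
```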
<h3>sqlmap</h3>
<p>One of the things that makes a great tool is the ability to customise it for a particular attack scenario. And sqlmap offers that in abundance. In this case a &#8220;tamper&#8221; script, which transforms the payloads in some way, worked a treat. One of the built-in tamper scripts is &#8220;space2comment&#8221; &#8211; bingo! In fact running sqlmap with this script allowed it to find the injection point. Without the script, though, sqlmap would have been stuck because, to quote the <a href='https://github.com/sqlmapproject/sqlmap/wiki/Usage#tamper-injection-data'>wiki page</a>, &#8220;sqlmap itself does no obfuscation of the payload sent, except for strings between single quotes replaced by their CHAR()-alike representation&#8221;.</p>
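<p>A tamper script is just a Python module exposing a <code>tamper()</code> function that rewrites each payload before it is sent. A minimal sketch of the space-to-comment idea follows; the real built-in <code>space2comment</code> is more careful (for instance around spaces inside quoted strings), so treat this as an illustration rather than sqlmap&#8217;s actual code:</p>

```python
# Minimal sketch of a sqlmap-style tamper script: replace each space in
# the payload with an inline SQL comment so the filter never sees one.
def tamper(payload, **kwargs):
    """Return the payload with every space swapped for /**/."""
    return payload.replace(" ", "/**/") if payload else payload

print(tamper("keyword' and '1'='1"))  # keyword'/**/and/**/'1'='1
```

<p>Dropped into sqlmap&#8217;s tamper directory and enabled with <code>--tamper</code>, a script like this lets every payload in sqlmap&#8217;s arsenal through the same space filter that was blocking the manual probes.</p>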
<p>All this was a good reminder that, when things are getting tough, thinking like a developer can help to turn near-misses into exploitable flaws. Having said that, I&#8217;ve seen code in the past that I could never have guessed, when it was clear the developer wasn&#8217;t thinking at all!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/sql-injection-in-search-fields/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
	</channel>
</rss>
