I thought a couple of gotchas that come up when testing for XSS might make a useful post. Nothing especially new, but I haven’t posted for a while. It shows how you might think you’ve covered an area of a web application and declared it free from XSS when, in fact, there’s a bug left behind.
These days a typical web app pulls in data via background XHR requests; the responses come back in a format like JSON, and the data is parsed client-side and inserted into the DOM of the page. This means, of course, that the encoding of the data in the response is not by itself sufficient to judge whether there’s a potential XSS risk – it depends on how the data is finally encoded when it is inserted into the page. But consider the following:
- You enter `<script>alert(1)</script>`, hoping for a quick and dirty win – nothing.
- You suspect input validation, so you enter something that you hope will pass validation but test the encoding, e.g. `xss < > test`.
- You inspect the page using the browser dev tools and see it has been inserted as `xss &lt; &gt; test`.
- So the encoding looks good, and you move on.
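For reference, correct server-side output encoding would turn the probe into exactly what step 3 appears to show. A minimal sketch in Python, with the standard library’s `html.escape` standing in for whatever encoder the app might (or might not) be using:

```python
from html import escape

# What proper HTML output encoding of the probe string looks like.
# (html.escape stands in here for the app's server-side encoder.)
probe = "xss < > test"
encoded = escape(probe)
print(encoded)  # xss &lt; &gt; test
```

The gotcha is that seeing `&lt;` and `&gt;` in the dev tools does not prove this encoding ever happened on the server.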
But wait! Even dangerous DOM manipulation can perform encoding in a text context. For example, `innerHTML` will observe the parsing rules and, in a text context, will encode `<` and `>` (and `&`, in fact); otherwise those characters are parsed as markup. So the encoding seen in step 3 could be explained by this, meaning that the potential for XSS still exists. It’s precisely because jQuery uses `innerHTML` as part of functions like `append()` that the documentation is full of caveats about not using data from “untrusted sources”.
Let’s assume the developers haven’t thought about XSS at all. So why did step 1 fail? Because the spec says that “script elements inserted using `innerHTML` do not execute when they are inserted”. If the first test case had been `<img src=x onerror=alert(1)>`, the XSS would have been revealed in step 1.
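The parsing behaviour behind all three payloads can be sketched with Python’s standard-library `html.parser` – an approximation of the browser’s HTML parser, not the real thing: `xss < > test` contains no valid tag open, so it survives as plain text; `<img src=x onerror=alert(1)>` becomes a real element carrying an event handler; and `<script>alert(1)</script>` parses fine too – it failed in step 1 only because of the `innerHTML` execution rule quoted above.

```python
from html.parser import HTMLParser

class EventCollector(HTMLParser):
    """Record start tags and text, roughly mirroring what the browser's
    parser builds when a string is assigned via innerHTML."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.tags = []   # (tagname, {attr: value}) pairs
        self.text = []   # text-node fragments

    def handle_starttag(self, tag, attrs):
        self.tags.append((tag, dict(attrs)))

    def handle_data(self, data):
        self.text.append(data)

def parse(markup):
    collector = EventCollector()
    collector.feed(markup)
    collector.close()
    return collector

# '<' followed by a space can't open a tag, so the whole probe
# survives as text -- no elements are created, nothing can execute.
inert = parse("xss < > test")
print(inert.tags, "".join(inert.text))  # [] xss < > test

# A well-formed tag becomes a real element with an onerror
# attribute -- under innerHTML, this is the payload that fires.
live = parse("<img src=x onerror=alert(1)>")
print(live.tags)

# The script payload also parses into a perfectly good script
# element; it failed in step 1 only because innerHTML never
# executes scripts it inserts.
script = parse("<script>alert(1)</script>")
print(script.tags)
```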
Hope that’s of some use!