Sla.ckers.org
Q and A for any cross-site scripting information. Feel free to ask away.
Current Page: 1 of 2
XSS attacks filtered output
Posted by: mlemos
Date: July 10, 2009 11:47PM

I am working on an HTML parser that sanitizes potentially malformed HTML. On top of it I added an HTML filter that discards dangerous HTML that may be used in XSS attacks.

I am using xssAttacks.xml to test the filter. It is a great feed of XSS vectors, as it lets me get an idea of how much work I still need to do. Congratulations to the authors and maintainers. It is really a brilliant job.

Suggestions of other useful feeds of XSS vectors would be welcome too.

Anyway, my question is whether there are any sources of recommended output for filters that try to neutralize these XSS attack vectors.

I would like to compare the output of my filter with the recommended output, to help me verify whether I understood and effectively neutralized the attacks.

Once again, congratulations. Keep up the good work.

Re: XSS attacks filtered output
Posted by: PaPPy
Date: July 11, 2009 07:36AM

why reinvent the wheel?
we have great programs under constant development by people on these forums

who are finding new XSS vectors that aren't getting added to the list you talk about

http://www.xssed.com/archive/author=PaPPy/

Re: XSS attacks filtered output
Posted by: mlemos
Date: July 11, 2009 09:19PM

PaPPy, thanks for the response.

I have analyzed several solutions but none satisfied my needs. Actually, the existence of multiple solutions reflects the fact that each addresses the needs of different people.

Anyway, I looked at the XSSED site that you mention. I only see a list of tested sites. Maybe I misunderstood what you said, but that is not what I am looking for.

I have already developed a capable HTML parser. On top of it I developed several types of filter components, including an XSS filter among others, which can be used together or separately.

Maybe I did not look properly, but what I did not find was a solution with XSS unit tests, i.e. one that performs tests against a list of XSS attack vectors (like xssAttacks.xml or others) and compares the current results with expected results.

This is important to ensure that each new version still filters known attacks as correctly as before and was not broken by recent changes.
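For what it's worth, the kind of harness I mean can be sketched in a few lines. Everything here is hypothetical: `runRegression`, `toyFilter`, and the inline cases just stand in for a real filter under test and for vector/expected pairs that would normally be loaded from files such as xssAttacks.xml plus a companion expected-output file.

```javascript
// Minimal regression-harness sketch (hypothetical names throughout).
// `filterXss` stands in for whatever filter is under test; `cases` stands
// in for vector/expected pairs loaded from external data files.
function runRegression(filterXss, cases) {
  const failures = [];
  for (const { name, vector, expected } of cases) {
    const actual = filterXss(vector);
    if (actual !== expected) {
      failures.push({ name, expected, actual });
    }
  }
  return failures; // an empty array means no regressions
}

// Toy filter and two toy cases, for illustration only.
const toyFilter = (html) => html.replace(/<script[\s\S]*?<\/script>/gi, "");
const cases = [
  { name: "plain bold", vector: "<b>hi</b>", expected: "<b>hi</b>" },
  { name: "script tag", vector: "<script>alert(1)</script>x", expected: "x" },
];
```

A harness like this is what would catch the "broken by recent changes" case automatically instead of relying on eyeballing the output.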

I was hoping to find a solution that ships data with the expected results of an XSS filter, so I could compare them with my own solution's output. If you know anything along these lines, I would really appreciate it if you could let me know.

Re: XSS attacks filtered output
Posted by: sirdarckcat
Date: July 12, 2009 02:41AM

You could use their HTML parser: http://www.htmlpurifier.com/
Besides antisamy and htmlpurifier, I don't know of anyone else on the forum doing an HTML parser (I'm doing one in JS, licensed under BSD, if you are interested).

--------------------------------
http://sirdarckcat.blogspot.com/ http://www.sirdarckcat.net/ http://foro.elhacker.net/ http://twitter.com/sirdarckcat




Re: XSS attacks filtered output
Posted by: mlemos
Date: July 13, 2009 04:59AM

I already tried htmlpurifier. It seems very extensive. It actually comes with a script to show the results of filtering the XSS vectors from xssAttacks.xml.

However, it does not come with a test suite that compares those filtering results against expected results.

I looked at its filtering results and some seem odd. Maybe it is a bug or I am misunderstanding something.

I have not tried antisamy. Quickly browsing its Java code, it seems its expected test results are hardcoded in the code itself. That is not very useful for testing whether the filter does what is expected.

If you have functional JavaScript code that provides expected results for the xssAttacks.xml vectors (or others) as a separate file, that would be useful.

Re: XSS attacks filtered output
Posted by: rvdh
Date: July 13, 2009 10:13AM

First, why would you ever need to purify data that was submitted? I don't understand all this fuss. I've been making web pages, CMSes, and web apps for ages, and I never needed to do that. If the client wanted some styling, BBCode was the ultimate answer.

Re: XSS attacks filtered output
Posted by: rvdh
Date: July 13, 2009 10:17AM

And btw, it's a mission set up to fail if you try, just glance over the stuff .mario, Gareth, SDC and others have come up with. No filter ever was or will be secure against this. The only solution from my experience (to give some context: I've built at least 3 to 5 sites a month for nearly 10 years) is to simply convert all the stuff to its special chars or entities. Or use BBCode and call it a day.
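As a concrete illustration of the convert-everything-to-entities approach rvdh describes (my own minimal sketch, not his code), the whole "filter" collapses to one replacement:

```javascript
// Minimal sketch of the "convert everything to entities" approach:
// no parsing at all, every HTML metacharacter is neutralized, so no
// submitted markup can ever execute.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  }[c]));
}
```

The appeal of this approach is exactly that there is no whitelist or blacklist to get wrong; the cost is that no formatting survives either, which is where BBCode comes in.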

Re: XSS attacks filtered output
Posted by: mlemos
Date: July 13, 2009 05:52PM

rvdh, you are thinking of plain-text content submission.

Using a Web-based HTML editor, users get a friendlier interface to format text, like in the word-processing programs they are used to.

The problem is that if you allow HTML to be submitted, crackers may build scripts that pretend to be real users and submit HTML containing XSS attacks. That is why XSS filters are necessary.

It is not impossible to filter XSS out of tags, even if you have to restrict the types of tags and attribute values that are accepted.

Re: XSS attacks filtered output
Posted by: rvdh
Date: July 14, 2009 03:15AM

Then you've got other problems to worry about if they already have access to an HTML editor, and you've got a new problem because restricting markup limits the actual WYSIWYG functionality for the admin or user, which means the WYSIWYG editor becomes practically useless. And what about an image or iframe? Those are far too common to be entered into a WYSIWYG editor; how are you going to filter that? If you restrict most HTML anyway, it's far easier to switch to a BBCode-style parser combined with converting everything to its special chars, and completely mitigate ANY XSS attack, instead of being overly clever about it, still being vulnerable, and making something totally void of usability.

Re: XSS attacks filtered output
Posted by: mlemos
Date: July 15, 2009 03:58PM

An HTML editor is mainly a DIV tag with the CONTENTEDITABLE attribute set to true. Most browsers support that nowadays.

Using BBCode is like making people walk to some place when they could drive there much faster and more comfortably. People can have car accidents, but that should not be a reason to avoid using cars.

Imagine Google Docs using BBCode. It would be such an unusable solution that nobody would care.

I agree that using XSS filters is not a trivial solution, especially since most exploits abuse vulnerabilities of certain browsers and do not work on others.

That is why it is important to test XSS filters against all known attack vectors like those from xssAttacks.xml, even though they may not cover all the possibilities.

However, we have to move on and provide better Web based interfaces to the users. HTML editors are the way to go.

My filter already discards JavaScript, CSS, iframes, and images that contain URLs using protocol schemes that are not whitelisted.

It is a lot of work to cover all the dangerous cases, but I think in the end it will be worth it for the functionality you will provide to the users, which you never get using BBCode.
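A scheme whitelist like the one mentioned above might look roughly like this sketch (hypothetical function names; a real filter must also decode HTML entities before this check, since attacks hide `javascript:` behind entity encodings as well):

```javascript
// Sketch of URL scheme whitelisting for src/href values. Note the
// normalization step: real attacks hide "javascript:" behind whitespace
// and control characters that browsers ignore inside scheme names, so
// checking the raw string is not enough.
const SAFE_SCHEMES = ["http:", "https:", "mailto:"];

function isSafeUrl(url) {
  // Strip whitespace/control characters (rough approximation of what
  // browsers tolerate), then lowercase for comparison.
  const normalized = url.replace(/[\s\u0000-\u001f]+/g, "").toLowerCase();
  const colon = normalized.indexOf(":");
  if (colon === -1) return true; // relative URL, no scheme at all
  return SAFE_SCHEMES.includes(normalized.slice(0, colon + 1));
}
```

The key design choice is whitelisting the scheme rather than blacklisting `javascript:`, so unknown or exotic schemes (`vbscript:`, `data:`, etc.) are rejected by default.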

Re: XSS attacks filtered output
Posted by: sirdarckcat
Date: July 16, 2009 02:35AM

The solution is a script that transforms HTML to BBCODE :) It's possible to make, I've done it.. =P


http://foro.elhacker.net/cake.js

The code that does the magic is:
String.prototype.toBB=function(){

So you only need to do..
"<html><body><img src=123><b>asdf</b></body></html>".toBB() and you get 
"[ img ]https://your.site.com/123[ /img ][ b ]asdf[ /b ]"

the parser is not perfect, but it's pretty good.. imho haha :)

so, you can filter bbcode on the server, and have the user see a WYSIWYG!

Greetz!!


Re: XSS attacks filtered output
Posted by: rvdh
Date: July 16, 2009 06:07AM

mlemos Wrote:
-------------------------------------------------------
> Imagine Google Docs using BBCode. It would be such
> an unusable solution that nobody would care.


Well, with a WYSIWYG you can use JavaScript to format the content and render it while they type; the user will never see that it is BBCODE, because the conversion is handled when they hit submit. For the rest it will be exactly the same as any WYSIWYG editor. So I meant a BBCODE-type system, not the actual [ b ] [ / b ] of course.

@sirdarckcat

Yah exactly, that's what I more or less meant.




Re: XSS attacks filtered output
Posted by: mlemos
Date: July 19, 2009 05:32AM

I do not think that the BBCode emulation solution that you are proposing would solve anything.

You would only be doing on the browser side, in JavaScript, what the server-side application must do anyway to parse and sanitize input. That always has to be done, because a cracker can always forge HTTP requests to emulate the submission of whatever would come from the browser.

Sanitizing BBCode or HTML is practically the same thing; you just use a different markup syntax. I do not see any advantage in swapping HTML for BBCode. On the contrary, it would make the client-side code for the editor even more complicated.

Re: XSS attacks filtered output
Posted by: rvdh
Date: July 19, 2009 09:19AM

mlemos Wrote:
-------------------------------------------------------
> I do not think that the BBCode emulation solution
> that you are proposing would solve anything.

What wouldn't solve it?

> You would only do at the browser side in
> Javascript what the server side application must
> do to parse and sanitize input, which always has
> to be done because a cracker can always forge HTTP
> requests to emulate submissions of whatever comes
> from the browser.

DOH! Of course it's only for the user's usability, instead of typing [] all the time. So when I click the BOLD icon it wraps [ b ] around the source, but renders it bold through JavaScript for the user's pleasure, and this of course happens on the page where it is output as well. It's simply based upon a whitelist.

> Sanitizing BBCode or HTML is practically the same
> thing. You just use a different markup syntax. I
> do not see any advantage in switching HTML for
> BBCode, on the contrary, it would make the client
> side code for the editor even more complicated.

No it isn't. Purifying HTML is based upon blacklisting, and that will fail; maybe not now, but somewhere in the future it will. I mean, how many times must this be said? Just take a look at the other voluminous threads. Why are they so voluminous? Because folks don't accept that a machine or program cannot function like a human or a whitelist can. You are dealing in the breadth of a Turing test; unless you invent true AI it cannot be solved.

But what am I doing here, go ahead, and good luck.

Re: XSS attacks filtered output
Posted by: Anonymous User
Date: July 19, 2009 10:42AM

@mlemos: You said sanitizing BBCode and HTML would be practically the same. I had to laugh a little bit when I read this - and concluded I might add my 2¢ to this thread - even if I managed to keep myself from doing that for the last n days :)

The thing is: when talking about sanitizing markup we are not talking about HTML. We are talking about a vast array of XML and XML-ish dialects with a huge potential for doing damage to any of the involved parties - be it the server, the user agent, the user, the operating system the user is running, etc.

There are some solutions that do a good job - for example the mentioned HTMLPurifier written by AmbushCommander, AntiSamy, or even htmLawed. If you really feel like building another system, consider having a look at the existing stuff first. Those are really powerful and ultra-complex tools. Re-inventing the wheel would be a total waste of time in my opinion and, believe me, this job will eat your soul for breakfast. You would need expertise in XML, character encodings, control chars and endianness, browser quirks and forgotten features, proprietary pre-implementations, browser extensions, CSS, JavaScript, the DOM, the whole array of server-side components, etc.

Anything XMLish running in the browser is evil, horrifyingly complex and full of surprises - usually not the good kind.

So - good luck from me too :) Feel free to post a link to a demo we can play with if you build the system anyway.

Re: XSS attacks filtered output
Posted by: mlemos
Date: July 19, 2009 06:46PM

Mario, rvdh, maybe I was not clear, but what I meant is that converting HTML to BBCode with JavaScript on the browser side will not avoid the fact that you still have to parse and sanitize the BBCode that is submitted to the server.

If a hacker submits BBCode mixed with malicious HTML, your BBCode parser still has to be able to parse and detect the invalid markup to filter it out or discard it altogether.

BBCode is just an emulation of a subset of HTML. If you just want to accept a subset of HTML, just discard everything else that you do not accept.

I never talked about blacklisting tags. What I am saying is that you just whitelist the markup that you want to accept, and reject the rest even if it may be valid or XSS-safe.

But to do this, you do not need to convert it to BBCode first. Just deal with the HTML directly and avoid the extra conversion work, be it on the client side or the server side.

As for other solutions, I have tried some, but they have the inconveniences that I already mentioned above.

Rest assured that I am aware that parsing and filtering HTML is not a trivial task. Actually, I did not come here to tell you I am considering writing my own HTML filter; I decided to do it a long time ago and it is mostly done.

For now it comprises an HTML parser, a CSS parser, a DTD parser, and an XSS filter. It has more than 5,000 lines of PHP code so far, but it does not yet handle all the cases in xssAttacks.xml.

So there is still work to do before I consider it ready for you to test it. But I appreciate your offer to test it.

Anyway, I mainly came here to ask whether anybody knows of a test suite that provides the expected results of an XSS filter run against the XSS vectors listed in xssAttacks.xml or any other source of XSS vectors.

I tried, for instance, running htmlPurifier against xssAttacks.xml, but it shows some odd results that vary from version to version. I think it lacks a test suite to make sure that changes to the filter engine do not change the results.

I also tried AntiSamy, but its quality-control testing methods are a bit weak. The test suite code embeds the test cases, and to verify that it works it just checks whether certain sequences are still in the output. There is no source of expected results outside the test suite code.

I did not try htmLawed yet. I will do it ASAP.

Anyway, please do not get me wrong. I did not come here to promote any competition between HTML filters. I just have needs that other solutions did not address, so I built my own in a way that suits me.

I was just hoping for honest cooperation that helps me improve my solution, and eventually an exchange of ideas that may help others improve their own filters. So I appreciate the time and patience you take to provide feedback and help us all build better solutions.

Re: XSS attacks filtered output
Posted by: sirdarckcat
Date: July 19, 2009 09:44PM

Quote

If a hacker submits BBCode mixed with malicious HTML, your BBCode parser still has to be able to parse and detect the invalid markup to filter it out or discard it altogether.
False: you receive BBCode, you HTML-entity-fy it, and then parse the BBCode to HTML (checking for javascript: on image tags and URLs, etc.). You never parse any HTML; HTML is always disabled.
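A minimal sketch of this order of operations (my illustration, not the cake.js code): entity-encode first, so HTML is dead, then translate a tiny BBCode subset back to markup, vetting URLs on the way out.

```javascript
// Sketch of the order of operations described here: entity-encode
// everything first (raw HTML can no longer execute), then translate a
// tiny BBCode subset back into markup, checking URLs as they are emitted.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) =>
    ({ "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" }[c]));
}

function bbcodeToHtml(input) {
  let out = escapeHtml(input); // HTML is disabled from this point on
  out = out.replace(/\[b\]([\s\S]*?)\[\/b\]/g, "<b>$1</b>");
  // [img] URLs are only honored for whitelisted schemes; otherwise the
  // BBCode is left as literal (already-escaped) text.
  out = out.replace(/\[img\]([\s\S]*?)\[\/img\]/g, (m, url) =>
    /^https?:\/\//i.test(url) ? `<img src="${url}">` : m);
  return out;
}
```

Because the escaping happens before the BBCode pass, a `"` inside an [img] URL has already become `&quot;` and cannot break out of the generated `src` attribute.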

Quote

BBCode is just an emulation of a subset of HTML. If you just want to accept a subset of HTML, just discard everything else that you do not accept.
so you will write a full HTML parser on the server to discover what's valid and what's invalid? Isn't that the same as HTMLPurifier?

anyway, if you want to reinvent the wheel that's ok, sometimes the new wheel is better than the last one

Quote

I never talked about blacklisting tags. What I am saying is that you just whitelist the markup that you want to accept, and reject the rest even if it may be valid or XSS-safe.
I will throw in a PHPIDS 0day here just to let you know how not-easy this is

http://demo.phpids.org/?test=%3Cb+%22%3Cscript%3Ealert%281%29%3C%2Fscript%3E%22%3Ehola%3C%2Fb%3E&html=on

that uses a whitelist, and to make it clear, that works against one of the most-pentested filters out there.. but well, who knows.. maybe you are better than us :)

so well, testcases..

I will be uploading html test cases to code.google.com/p/googlecaja/downloads in case you are interested..

Greetz!!


Re: XSS attacks filtered output
Posted by: mlemos
Date: July 20, 2009 01:57AM

Quote

Quote

If a hacker submits BBCode mixed with malicious HTML, your BBCode parser still has to be able to parse and detect the invalid markup to filter it out or discard it altogether.

False: you receive BBCode, you HTML-entity-fy it, and then parse the BBCode to HTML (checking for javascript: on image tags and URLs, etc.). You never parse any HTML; HTML is always disabled.

Admittedly, I have not thought much about BBCode, because it is not an option for my purposes. I needed an HTML parser for several purposes, including displaying HTML e-mail messages. That is something for which BBCode cannot be used, because you have no control over the messages people send.


Quote

Quote

BBCode is just an emulation of a subset of HTML. If you just want to accept a subset of HTML, just discard everything else that you do not accept.

so you will write a full HTML parser on the server to discover what's valid and what's invalid? Isn't that the same as HTMLPurifier?

anyway, if you want to reinvent the wheel that's ok, sometimes the new wheel is better than the last one

As I mentioned before, HTMLPurifier is quite extensive, but it does not address all my needs. I am sure its author is very skilled and experienced in XSS, but I needed a solution that HTMLPurifier was not meant for, so this is nothing against the quality of the work done to build HTMLPurifier.

For instance, I need to parse the HTML that users of a templating application submit, and detect and warn about malformed HTML. So I wrote an HTML parser and an XSS filter on top of it. That solution provides the necessary information, including the line and column numbers of the errors. Even if the HTML is well-formed, I need to check whether it has certain tags and validate their values according to application-specific semantics.

My HTML parser generates a stream (array) of tokens (tags, data, entities, etc.). It pipes the tokens to the XSS filter and, if everything is all right, the application-level filter then validates the semantics of the tags.
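To illustrate the pipeline just described, here is a rough sketch using a hypothetical token shape `{ type, name, attrs, text }` (the real parser's token format is surely different; the whitelist here is purely illustrative):

```javascript
// Sketch of a token-stream XSS stage: the parser emits tokens, this stage
// drops any tag outside a whitelist and strips non-whitelisted attributes,
// and later stages can inspect whatever survives.
const ALLOWED_TAGS = { b: ["class"], i: [], p: ["class"] }; // illustrative

function xssFilterTokens(tokens) {
  return tokens.filter((t) => {
    if (t.type === "text") return true; // plain data always passes
    const allowedAttrs = ALLOWED_TAGS[t.name];
    if (allowedAttrs === undefined) return false; // unknown tag: drop it
    if (t.attrs) {
      // Keep only whitelisted attributes rather than dropping the tag.
      for (const k of Object.keys(t.attrs)) {
        if (!allowedAttrs.includes(k)) delete t.attrs[k];
      }
    }
    return true;
  });
}
```

Working on tokens rather than strings is what makes it possible to report line/column positions and apply application-level semantic checks downstream.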

HTMLPurifier just returns a filtered string. It also adds missing close tags and lowercases the tag names, which is something I have not discovered how to disable. That is OK if you want to make the filtered HTML XHTML-compliant, but if you want to tell your users where they may have done something wrong, altering the original HTML makes that confusing, because the positions of the errors may have been changed by HTMLPurifier.

Other than that, I tried the latest version of HTMLPurifier against the xssAttacks.xml vectors and got some odd results. For instance, HTMLPurifier changes this:

<IMG SRC="http://www.thesiteyouareon.com/somecommand.php?somevariables=maliciouscode">

Into this:

<img src="http://www.thesiteyouareon.com/somecommand.php?somevariables=maliciouscode" alt="somecommand.php?somevariables=maliciouscode" />

Maybe I am missing something here. First, I do not quite see why this is considered an XSS vector. Also, adding an alt attribute with an image description is a good idea for SEO, but using part of the image URL as the alt text seems odd. Maybe somebody could clarify.


Quote

Quote

I never talked about blacklisting tags. What I am saying is that you just whitelist the markup that you want to accept, and reject the rest even if it may be valid or XSS-safe.

I will throw a PHPIDS 0day here just to let you know how not-easy is this

that uses a whitelist, and to make it clear, that works on one of the most-pentested filters out there.. but well, who knows.. maybe you are better than us :)

I never said it was easy. I even mentioned it was not trivial.

I am not claiming that my solution is capable of detecting more types of attacks. I am sure you are more experienced, because you have been looking into these things for longer than I have.

What I mean is that, even though I need to use my own solution, which in part replicates existing ones, that does not mean we could not cooperate, so that every solution could be improved with knowledge shared among more people working on similar tools.

Quote

so well, testcases..

I will be uploading html test cases to code.google.com/p/googlecaja/downloads in case you are interested..

Thanks. Wouldn't you like to build a sort of database of XSS vectors, like those you find in those test case files, eventually evolving http://ha.ckers.org/xssAttacks.xml ?

http://ha.ckers.org/xssAttacks.xml is useful to evaluate XSS filters, but I think it misses two things:

1) I am sure there are many types of XSS vectors not listed there.

2) It does not contain the output of a good XSS filter for each case, so we could compare the results of each filtering solution and see how each is doing.

I don't know if this already exists, but what do you think about a cooperative effort to improve http://ha.ckers.org/xssAttacks.xml on those two points, starting with the test cases you found that are not listed there?

Re: XSS attacks filtered output
Posted by: sirdarckcat
Date: July 20, 2009 02:15AM

> <img src="http://www.thesiteyouareon.com/somecommand.php?somevariables=maliciouscode" alt="somecommand.php?somevariables=maliciouscode" />

HTML requires an alt attribute on all image tags (accessibility)

About the BBCode/HTML issue: I know BBCode is not an almighty solution. rvdh is the one who suggested its use, since it's way easier to parse, but sometimes you need to deal with HTML.. (the same goes for lowercase/uppercase).

The reason we are suggesting you use htmlpurifier/antisamy/etc. is that we can review their internals, and we can be sure they are good (maybe not perfect sometimes, but in general very good).

tra.ckers.org is aiming to be some sort of XSS database, since the existing ha.ckers.org/xss | wasc | xssdb | etc. are either dead projects or irrelevant nowadays.

I'm using code.google.com's tracker for the time being, to test whether it would be a good way of organizing this.

Putting the output for existing vectors is, I think, not relevant for such a project, but well..

Greetz!!


Re: XSS attacks filtered output
Posted by: rvdh
Date: July 20, 2009 09:54AM

Yeah, I may have sounded a bit harsh, but that's the way I talk hehe.

I'm a bit in favor of whitelisting in any case, if you can. It sounds tempting to write a blacklist filter or something like it, but one also has to understand that if it fails to filter one single vector, it is 100% vulnerable. Since the set of possible vectors is unknown, a filter based upon blacklisting will never be 100% secure, thus failing your objective to secure it. All it seems to do is impede usability while leaving it open to future attacks, or attacks unknown to you/us.

So if the filter fails to catch all possible vectors, it's like being "a bit pregnant".

Re: XSS attacks filtered output
Posted by: mlemos
Date: July 21, 2009 04:10AM

Quote

> <img src="http://www.thesiteyouareon.com/somecommand.php?somevariables=maliciouscode" alt="somecommand.php?somevariables=maliciouscode" />

HTML requires an alt attribute on all image tags (accessibility)

Yes, it will silence the validators that complain about missing alt attributes, but pasting the URI in there does not really make it more accessible. Imagine a screen reader trying to read that alt text aloud; blind people will not get it. Maybe a text like "Missing image description" would be more helpful.

As far as security is concerned, that does not matter. What makes it more intriguing is the original description of the attack in xssAttacks.xml.

Quote

This works when the webpage where this is injected (like a web-board) is behind password protection and that password protection works with other commands on the same domain. This can be used to delete users, add users (if the user who visits the page is an administrator), send credentials elsewhere, etc... This is one of the lesser used but more useful XSS vectors.

I think they meant CSRF, not XSS. Still, what is an HTML filter supposed to do about it? Nothing, I think.


Quote

The reason we are suggesting you use htmlpurifier/antisamy/etc. is that we can review their internals, and we can be sure they are good (maybe not perfect sometimes, but in general very good).

If you would like to review the internals of my HTML parsing and filtering engine, I would appreciate it. I can make a mirror of my CVS repository available on a public server, although there is still some work to do.

It is going to be Open Source anyway; that will just take more time, as it needs proper documentation.


Quote

tra.ckers.org is aiming to be some sort of XSS database, since the existing ha.ckers.org/xss | wasc | xssdb | etc. are either dead projects or irrelevant nowadays.

What about tra.ckers.org? I could not find anything like an XSS database there. Can you point me to the exact URL?


Quote

I'm using code.google.com's tracker for the time being, to test whether it would be a good way of organizing this.

Putting the output for existing vectors is, I think, not relevant for such a project, but well..

I am not sure what you mean. What I mean is that if you have the expected output of a good filter for the known vectors, it will help the developers of all filters to compare against their own implementations, and eventually propose better solutions.

Re: XSS attacks filtered output
Posted by: mlemos
Date: July 21, 2009 04:20AM

Quote

I'm a bit in favor of whitelisting in any case, if you can. It sounds tempting to write a blacklist filter or something like it, but one also has to understand that if it fails to filter one single vector, it is 100% vulnerable. Since the set of possible vectors is unknown, a filter based upon blacklisting will never be 100% secure, thus failing your objective to secure it. All it seems to do is impede usability while leaving it open to future attacks, or attacks unknown to you/us.

So if the filter fails to catch all possible vectors, it's like being "a bit pregnant".

I absolutely agree. My approach is to parse and rewrite, discarding what is not known. It takes more work to develop and will probably be slower, but it will be safer, which I find more important since I do not want to take chances.

The greatest challenge to that approach is finding a good source of accepted tags, attributes, entities, and CSS properties. For tags, attributes, and entities you can use a DTD. For CSS properties there are listings in the CSS specification, but that requires manual work.

In any case, a problem arises when you want to allow safe proprietary tags and CSS properties. If you know everything you need, that is OK. The problem is when you want to allow your site's users to use properties that you do not yet know they may need.

Re: XSS attacks filtered output
Posted by: Anonymous User
Date: July 21, 2009 06:58AM

@mlemos: Regarding allowed tags and CSS properties/values, you have to make sure the user agent is considered too. Some browser versions execute background-image: javascript:alert(1), some don't. The same of course goes for the allowed attributes. If you need an (almost) complete list of tags/attributes/CSS properties, ping me.

Re: XSS attacks filtered output
Posted by: mlemos
Date: July 21, 2009 07:01PM

Mario, if there is a browser that may handle that as a valid URL, despite it not being a standard way to specify a URL property value, it should always be filtered before publishing filtered untrusted HTML/CSS, as we never know which browser the user may have.

The question is what the logic should be to detect and filter that and other CSS properties that take URLs. Shall I analyze all property values for patterns like that, or shall I build a list of known properties that take URLs? What do you suggest?

Re: XSS attacks filtered output
Posted by: Anonymous User
Date: July 22, 2009 05:28AM

I think you won't get around a well-maintained whitelist. That is what I meant when I said it eats souls :) Either the tool has to be very strict and just forbid a lot of property-value combinations (the agnostic approach), or it has to know about the common browser peculiarities and handle them.

Re: XSS attacks filtered output
Posted by: sirdarckcat
Date: July 22, 2009 10:44AM

some browsers allow url() on all css properties to make requests, some execute js in all of them, some don't require the string 'url', some others only load the js URI if the property requires a url, some others accept urls as content, some others allow funny things in fonts haha.. it's amazing how many things you can do in css..

And even if you manage to parse CSS correctly (which is way more complicated than it seems.. IMHO more complicated than JS or HTML.. and I can say I have experience parsing all three of them), completely valid CSS is dangerous (check out "CSS: The Sexy Assassin" for an example of XSS attacks that use just CSS, no JavaScript).

The conclusion is.. well.. there's no conclusion haha..

Greetz!!


Re: XSS attacks filtered output
Posted by: mlemos
Date: July 22, 2009 03:28PM

Mario, I already said I agree with you. Whitelisting is the way to go. It is a lot of work to develop a safe solution, one that will "eat your soul" to develop and maintain, to use your words. But it is work that must be done.

We are not going anywhere just moaning that it is a difficult job, and it will probably always be impossible to avoid 0-day exploit discoveries, just like with anti-virus software.

All Web-mail systems that claim to be safe (if there really is such a thing) need a good filtering solution. If they can do it, and they are not gods, then mere mortals like us can do it too, given the necessary amount of time and skill.

I am already using whitelists for everything. Knowing every discovered attack caused by browser peculiarities is harder, because there seems to be no single central source of information that lists all known vectors, against which every developer working on a filtering solution could evaluate its efficiency.

This is what I was asking for your collaboration on. http://ha.ckers.org/xssAttacks.xml is a good start but, as was pointed out, it is not up to date. Is anybody interested in building a more up-to-date source of XSS vectors?

Re: XSS attacks filtered output
Posted by: mlemos
Date: July 22, 2009 03:56PM

sirdarckcat, maybe a drastic solution is the way to go, like dropping all CSS property values that start with a non-whitelisted URL scheme name followed by a colon. I am just not sure whether this might drop many valid property values that are not really URLs but happen to start with something that looks like javascript: or another dangerous scheme.
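A sketch of that idea (hypothetical function; whitespace and control characters are stripped first because browsers ignore them inside schemes, and `url()` and `expression()` get special treatment since several browsers accept URLs or script in many property values):

```javascript
// Sketch of a "drop CSS values carrying a suspicious scheme" check.
// This looks for scheme-like tokens anywhere in the value, not only at
// the start, since url(javascript:...) and expression(...) can appear
// deep inside an otherwise ordinary declaration.
function isSafeCssValue(value) {
  // Normalize: strip whitespace/control chars/backslash escapes browsers ignore.
  const v = value.replace(/[\s\u0000-\u001f\\]+/g, "").toLowerCase();
  if (v.includes("expression(")) return false;       // old IE dynamic properties
  const m = v.match(/url\(["']?([a-z][a-z0-9+.-]*):/);
  if (m) return ["http", "https"].includes(m[1]);    // whitelist url() schemes
  // A bare dangerous scheme outside url() is also rejected.
  return !/(?:javascript|vbscript|data):/.test(v);
}
```

This addresses the concern above about false positives: ordinary values like `red` or `rgb(1,2,3)` contain no scheme-like token and pass untouched, while only scheme-bearing values are inspected.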

As for parsing CSS correctly, I have developed a PHP class for just that purpose. If you would like to evaluate that and the other classes of my solution, I can mirror my project's CVS repository on a public server so everybody can try it. Just let me know if you are interested.

Re: XSS attacks filtered output
Posted by: Anonymous User
Date: July 22, 2009 05:49PM

Did someone say vectors?

https://trac.php-ids.org/index.fcgi/browser/trunk/tests/IDS/MonitorTest.php

Re: XSS attacks filtered output
Posted by: arshan
Date: August 03, 2009 08:52PM

ronald:
>And btw, it's a mission set up to fail if you try, just glance over the stuff .mario,
>Gareth, SDC and others have come up with. No filter ever was or will be secure
>against this,

It depends on what you mean by "fail". If you mean that you can't write a perfectly secure filter that will never be vulnerable for even one day, then of course you are bound to fail; if that is your expectation, then computers probably aren't for you. User-generated content (not BBCode) is the future (or rather, the present), and I'd rather enable users and do the best I can to make it secure; that's just my take on it.

mlemos:
>I also tried AntiSamy but its quality control testing methods are a bit weak. The
>test suite code embeds the test cases. To verify it works it just checks if some
>sequences are still in output. There is no source of expected results that is
>outside of the test suite code.

While this is fair critique, it would only be a problem if code changes caused usability regressions. I am just as interested in receiving "usability 0days", where good HTML is rejected, but those cases don't come up that much, especially as regressions. We have a community of people who use it and who will let us know the second it doesn't do what they want. And I wouldn't have it any other way. =)

If you're going to write one, let me just tell you, it's hard to do securely, and even harder to do in a usable way.

FTR, I wish there were a better way than hardcoding security tests that check to "avoid allowing badness"; if you can think of one, let me know. Usability and security are two different things, and I test them differently.

Btw, I have high hopes for tra.ckers.org - hopefully it will represent vectors in an object model. This would make automating the fuzzing of variants very easy, i.e. <script>alert(/1/)</script>, then "><script>alert(/1/)</script>, etc. If it's going to be just a list of strings, I encourage you to think bigger!
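The object-model idea can be sketched trivially: store each payload once and derive the context-escaping variants mechanically. The prefix list here is purely illustrative, not any tra.ckers.org format.

```javascript
// Sketch of deriving context-escaping fuzzing variants from one payload:
// each prefix models breaking out of a different injection context
// (plain body, double-quoted attribute, single-quoted attribute,
// textarea content, HTML comment).
const CONTEXT_PREFIXES = ["", '">', "'>", "</textarea>", "-->"];

function fuzzVariants(payload) {
  return CONTEXT_PREFIXES.map((prefix) => prefix + payload);
}
```

With vectors stored as objects instead of strings, the same payload can be recombined with new contexts, encodings, or mutations without hand-maintaining every combination.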


