Offensive Thinking

Internet Thoughtcrime

Fuzzing and Enumerating with Wfuzz

Posted: 2010-01-04

Wfuzz. What a neat tool. It’s an HTTP fuzzer designed for fuzzing web applications, although in my opinion the term “fuzzing” implies too narrow a scope for this tool. I always think of finding application bugs when I hear “fuzzing”, but wfuzz is also a great program for enumerating files and the like. I guess the line between “fuzzing” and “enumerating” can be kind of blurry at times.

One usage possibility besides the obvious enumeration of commonly deployed applications is to quickly build your own wordlist for the web application you’re currently (pen)testing and use wfuzz to check for these files. Most of the time you do not have a file listing of the remote installation directory, but people tend to forget to remove many of the standard files that ship with today’s web applications, which may give you useful hints: for example a Changelog file revealing the exact version of the application. There’s also the chance that example scripts or extensions (think of frameworks like Joomla!) are still available, and these are surprisingly often vulnerable. Search OSVDB for “example” and you’ll see what I mean.

So what you can do is the following: First, download and unpack (and possibly even install) the webapp locally. Change into the resulting (installation) directory and get a list of all files and directories, e.g. by running:

        ruby -e 'puts Dir["**/**"]' > dict.txt
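If you don’t have Ruby handy, a plain find invocation produces an equivalent listing; the sed just strips the leading “./” that find adds to every path:

```shell
# List all files and directories below the current one,
# one per line, without the leading "./" prefix.
find . -mindepth 1 | sed 's|^\./||' > dict.txt
```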

Now, run wfuzz with the newly created wordlist against the remote website. The following command line is a basic wfuzz invocation with coloured output, the wordlist dict.txt, 5 threads, suppression of HTTP 404 responses and HTML output on stderr, which is redirected to a file to be viewed later:

        ./wfuzz.py -c \
                   -z file \
                   -f dict.txt \
                   -t 5 \
                   --hc 404 \
                   -o html \
                   http://www.example.com/FUZZ \
                   2> `date '+%Y-%m-%d_%H%M'`-example.html

It’ll replace FUZZ with the words from your wordlist and give you a nice and clean output to stdout showing its findings. The saved HTML file even gives you clickable links to all the generated URLs. I tend to view it in elinks because it’s not very pretty, but it gets the job done.
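The core idea behind this is simple: take a URL template containing the keyword FUZZ, substitute each wordlist entry for it and request the resulting URL. Here’s a minimal Python sketch of that loop; build_urls and check_url are my own illustrative names, not wfuzz internals, and the target URL is just a placeholder:

```python
# Minimal sketch of wfuzz's core loop: substitute each wordlist
# entry for the FUZZ keyword, request the URL and report anything
# that doesn't come back as 404 (the equivalent of --hc 404).
import urllib.request
import urllib.error

def build_urls(template, words):
    """Yield (word, url) pairs with FUZZ replaced by each word."""
    for word in words:
        yield word, template.replace("FUZZ", word)

def check_url(url):
    """Return the HTTP status code for url, or None on a connection error."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
    except urllib.error.URLError:
        return None

if __name__ == "__main__":
    with open("dict.txt") as f:
        words = [line.strip() for line in f if line.strip()]
    for word, url in build_urls("http://www.example.com/FUZZ", words):
        status = check_url(url)
        if status is not None and status != 404:
            print(status, url)
```

Obviously wfuzz does a lot more (threading, filtering on size or regex, the HTML report), but this is the substitution at the heart of it.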

Also have a look at the wordlists already included; I find many of them very useful. Wfuzz has many other options, like fuzzing POST data, setting cookies, doing authentication etc. Just have a look at “./wfuzz.py -h”, it’s easy to understand.