
Fuzz testing is one of the most powerful tools in the bug hunter’s toolset. At a basic level, fuzzing is the art of repeatedly processing crafted test inputs while checking for ill-effects, such as memory corruptions or information disclosures.

One of the main advantages of fuzz testing is that it works 24×7 without a break and with no need for overtime pay.

In recent months, I have successfully employed fuzz testing to identify security bugs in four major web browsers, a popular server side scripting language, various graphics/media libraries, a cross-platform compression utility, and more.

In my upcoming BSides SF presentation, I will be discussing how I make use of American Fuzzy Lop (AFL) in my research.

American Fuzzy Lop, or AFL for short, is a framework for performing coverage-based fuzzing.

At a high level, this means that AFL refines the test cases it chooses based on feedback of which parts of a target program are exercised. In contrast, other popular fuzzing techniques commonly involve random file mutations or derive test cases from target-specific templates.

The concept of coverage-based fuzzing is not new and, in fact, inspiration for AFL came from Google’s Tavis Ormandy, whose research conducted over 10 years ago showed how to evaluate test cases based on GCov coverage reports.

In the case of AFL, feedback is provided through instrumentation, which records path transitions within a program's execution flow.

Ideally, the instrumentation trampolines are added to a program's assembly during the build process, but AFL also supports a much slower mode in which QEMU user emulation is used to trace execution. (A port of AFL also exists to enable runtime instrumentation through the Intel PIN tool, but this is again much slower than compiled instrumentation.)

While there are very few “knobs and dials” to tune for successful fuzzing with AFL, there are quite a few experimental features to take advantage of, as well as nuances that can drastically improve or handicap the process.

This will be the focus of my upcoming BSidesSF presentation “Fuzz Smarter Not Harder.”

Here are just a few of the topics for discussion:

  • Selecting a good fuzz target
  • Identifying ideal test cases
  • Using persistent mode to increase execution rate
  • Finding cross-platform bugs with AFL chaining
  • Dealing with checksums and other blockers
  • Crash triage with ASAN, GDB, and the Peruvian Were-Rabbit

My session is February 29, 2016, at 3:00 PM in the main room of the DNA Lounge. If you will be in San Francisco and do not already have tickets, I encourage you to head on over to the DNA Lounge site to purchase them before the event.

If you won’t be able to attend, you can also catch the presentation via live stream on the DNA Lounge website.

Title image courtesy of Shutterstock