UnTracer-AFL

An AFL implementation with UnTracer (our coverage-guided tracer)

Overview

This repository contains an implementation of our prototype coverage-guided tracing framework, UnTracer, in the popular coverage-guided fuzzer AFL. Coverage-guided tracing employs two versions of the target binary: (1) a forkserver-only oracle binary, modified with basic-block-level software interrupts on unseen basic blocks, for quickly identifying coverage-increasing testcases; and (2) a fully-instrumented tracer binary for tracing the coverage of all coverage-increasing testcases.
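To make the division of labor concrete, below is a minimal, hypothetical C sketch (not UnTracer's actual code) of the oracle-side check: the oracle plants a software interrupt at the start of every not-yet-covered basic block, so a testcase that reaches such a block terminates with SIGTRAP and is flagged as coverage-increasing. Here raise(SIGTRAP) simply stands in for executing a planted interrupt.

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Placeholder for the target: pretend that an input with "hits_new_block"
 * set reaches a basic block still carrying the oracle's planted interrupt. */
static void run_target(int hits_new_block) {
    if (hits_new_block)
        raise(SIGTRAP);   /* stands in for executing the planted int3 */
    _exit(0);             /* no new coverage: exit normally */
}

/* Fork a child, run the "oracle", and classify the testcase by how it died. */
static const char *classify(int hits_new_block) {
    pid_t pid = fork();
    if (pid == 0)
        run_target(hits_new_block);

    int status = 0;
    waitpid(pid, &status, 0);
    return (WIFSIGNALED(status) && WTERMSIG(status) == SIGTRAP)
               ? "coverage-increasing: trace it, then unpatch its new blocks"
               : "no new coverage: discard without tracing";
}

int main(void) {
    printf("testcase A -> %s\n", classify(1));
    printf("testcase B -> %s\n", classify(0));
    return 0;
}

Only testcases classified as coverage-increasing pay the cost of a full trace; everything else runs at near-native speed, and the oracle gets faster as already-seen blocks are unpatched.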

In UnTracer, both the oracle and tracer binaries use the AFL-inspired forkserver execution model. For oracle instrumentation, we require that all target binaries be compiled with untracer-cc -- our "forkserver-only" modification of AFL's assembly-time instrumenter, afl-cc. For tracer binary instrumentation, we utilize Dyninst, with much of our code based on AFL-Dyninst. We plan to incorporate a purely binary-only ("black-box") instrumentation approach in the near future. Our current implementation of UnTracer supports basic block coverage.
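For readers unfamiliar with AFL's forkserver model, the sketch below illustrates the handshake it is built on (a simplified, hypothetical example, not code from AFL or UnTracer-AFL): the fuzzer writes a one-byte request on a control pipe, the forkserver fork()s a fresh copy of the already-initialized target, and the child's exit status is reported back on a status pipe. AFL reserves fixed file descriptors for these pipes; here they are created explicitly so the demo is self-contained.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int ctl[2], st[2];        /* fuzzer -> forkserver, forkserver -> fuzzer */
    if (pipe(ctl) != 0 || pipe(st) != 0)
        return 1;

    if (fork() == 0) {
        /* --- forkserver side (in AFL this loop lives inside the target) --- */
        unsigned char req;
        close(ctl[1]);
        close(st[0]);
        while (read(ctl[0], &req, 1) == 1) {   /* wait for a "run one" request */
            pid_t child = fork();
            if (child == 0) {
                /* Child: the target's real work would execute here. */
                _exit(42);
            }
            int status = 0;
            waitpid(child, &status, 0);
            write(st[1], &status, sizeof status);   /* report the result */
        }
        _exit(0);             /* control pipe closed: the fuzzer is done */
    }

    /* --- fuzzer side: request three executions, read back each status --- */
    close(ctl[0]);
    close(st[1]);
    for (int i = 0; i < 3; i++) {
        unsigned char go = 0;
        int status = 0;
        write(ctl[1], &go, 1);
        read(st[0], &status, sizeof status);
        printf("run %d: target exited with code %d\n", i, WEXITSTATUS(status));
    }
    close(ctl[1]);            /* tells the forkserver to shut down */
    wait(NULL);
    return 0;
}

The point of this model is that each execution skips execve() and dynamic-loader startup, which is where most of the per-testcase cost would otherwise go.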

Presented in our paper Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing
(2019 IEEE Symposium on Security and Privacy).
Citing this repository:

@inproceedings{nagy:fullspeedfuzzing,
  title = {Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing},
  author = {Stefan Nagy and Matthew Hicks},
  booktitle = {{IEEE} Symposium on Security and Privacy (Oakland)},
  year = {2019},
}
Developers: Stefan Nagy ([email protected]) and Matthew Hicks ([email protected])
License: MIT License
Disclaimer: This software is strictly a research prototype.

INSTALLATION

1. Download and build Dyninst (we used v9.3.2)

sudo apt-get install cmake m4 zlib1g-dev libboost-all-dev libiberty-dev
wget https://github.com/dyninst/dyninst/archive/v9.3.2.tar.gz
tar -xf v9.3.2.tar.gz dyninst-9.3.2/
mkdir dynBuildDir
cd dynBuildDir
cmake ../dyninst-9.3.2/ -DCMAKE_INSTALL_PREFIX=`pwd`
make
make install

2. Download UnTracer-AFL (this repo)

git clone https://github.com/FoRTE-Research/UnTracer-AFL

3. Configure environment variables

export DYNINST_INSTALL=/path/to/dynBuildDir
export UNTRACER_AFL_PATH=/path/to/UnTracer-AFL

export DYNINSTAPI_RT_LIB=$DYNINST_INSTALL/lib/libdyninstAPI_RT.so
export LD_LIBRARY_PATH=$DYNINST_INSTALL/lib:$UNTRACER_AFL_PATH
export PATH=$PATH:$UNTRACER_AFL_PATH

4. Build UnTracer-AFL

Update DYN_ROOT in UnTracer-AFL/Makefile to your Dyninst install directory. Then, run the following commands:

make clean && make all

USAGE

First, compile all target binaries using "forkserver-only" instrumentation. As with AFL, you will need to manually set the C compiler (untracer-clang or untracer-gcc) and/or C++ compiler (untracer-clang++ or untracer-g++). Note that only non-position-independent target binaries are supported, so compile all target binaries with the -no-pie compiler flag (unnecessary for Clang).

NOTE: We provide a set of fuzzing-ready benchmarks here: https://github.com/FoRTE-Research/FoRTE-FuzzBench.

For example:

$ CC=/path/to/afl/untracer-clang CXX=/path/to/afl/untracer-clang++ ./configure --disable-shared
$ make clean all
Instrumenting in forkserver-only mode...

Then, run untracer-afl as follows:

untracer-afl -i [/path/to/seed/dir] -o [/path/to/out/dir] [optional_args] -- [/path/to/target] [target_args]

Status Screen

  • calib execs and trim execs - Number of testcase calibration and trimming executions, respectively. Tracing is done for both.
  • block coverage - Percentage of total blocks found (left) and the number of total blocks (right).
  • traced / queued - Ratio of traced versus queued testcases. This ratio should (ideally) be 1:1 but will increase as trace timeouts occur.
  • trace tmouts (discarded) - Number of testcases which timed out during tracing. Like AFL, we do not queue these.
  • no new bits (discarded) - Number of testcases which were marked coverage-increasing by the oracle but did not actually increase coverage. This should (ideally) be 0.
