UnTracer-AFL

An AFL implementation with UnTracer (our coverage-guided tracer)

Overview

This repository contains an implementation of our prototype coverage-guided tracing framework, UnTracer, in the popular coverage-guided fuzzer AFL. Coverage-guided tracing employs two versions of the target binary: (1) a forkserver-only oracle binary, modified with basic block-level software interrupts on unseen basic blocks, for quickly identifying coverage-increasing testcases; and (2) a fully-instrumented tracer binary for tracing the coverage of all coverage-increasing testcases.
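
To make the oracle idea concrete, the sketch below illustrates the core mechanism in C: the first byte of every unseen basic block is overwritten with a one-byte software interrupt, so any testcase that reaches one is killed by SIGTRAP and thereby flags itself as coverage-increasing. This is a conceptual sketch only; the structure and names (block_t, arm_unseen_blocks, etc.) are our illustration, not UnTracer's actual code, and it assumes the target's text pages have already been made writable (e.g., via mprotect).

/*
 * Conceptual sketch of interrupt-based coverage detection. Illustrative
 * only: names and layout are assumptions, not UnTracer's actual code.
 */
#include <signal.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/wait.h>

typedef struct {
    unsigned char *addr;  /* start of the basic block in the text segment */
    unsigned char orig;   /* original first byte, saved for restoration   */
    int seen;             /* already reached by a previous testcase?      */
} block_t;

/* Arm every unseen block with a one-byte interrupt (0xCC = x86 int3). */
static void arm_unseen_blocks(block_t *blocks, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (!blocks[i].seen) {
            blocks[i].orig = *blocks[i].addr;
            *blocks[i].addr = 0xCC;
        }
    }
}

/* A forked child that executes an armed block dies with SIGTRAP, so a
 * SIGTRAP termination means the testcase reached new coverage. */
static int is_coverage_increasing(pid_t child) {
    int status;
    waitpid(child, &status, 0);
    return WIFSIGNALED(status) && WTERMSIG(status) == SIGTRAP;
}

/* After a coverage-increasing testcase is traced, its newly covered
 * blocks are unarmed (original bytes restored) so future runs pass. */
static void unarm_block(block_t *b) {
    *b->addr = b->orig;
    b->seen = 1;
}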

In UnTracer, both the oracle and tracer binaries use the AFL-inspired forkserver execution model. For oracle instrumentation, we require that all target binaries be compiled with untracer-cc -- our "forkserver-only" modification of AFL's assembly-time instrumenter afl-cc. For tracer binary instrumentation, we utilize Dyninst, with much of our code based on AFL-Dyninst. We plan to incorporate a purely binary-only ("black-box") instrumentation approach in the near future. Our current implementation of UnTracer supports basic block coverage.
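
For reference, the forkserver model works roughly as sketched below: the instrumented binary stops at an injected entry point and repeatedly forks a fresh copy of itself on the fuzzer's request, so the cost of execve() and initialization is paid only once. This is a simplified sketch of AFL's well-known forkserver protocol (FORKSRV_FD = 198 is AFL's convention), not UnTracer-AFL's exact code.

/* Simplified sketch of the AFL-style forkserver loop that both the
 * oracle and tracer binaries use. Follows AFL's documented protocol;
 * not UnTracer-AFL's exact code. */
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define FORKSRV_FD 198  /* control pipe; FORKSRV_FD + 1 is the status pipe */

static void forkserver_loop(void) {
    int tmp = 0;

    /* Phone home: tell the fuzzer the forkserver is up. */
    if (write(FORKSRV_FD + 1, &tmp, 4) != 4) return; /* not under a fuzzer */

    while (1) {
        int status;
        pid_t child;

        /* Block until the fuzzer requests another execution. */
        if (read(FORKSRV_FD, &tmp, 4) != 4) exit(1);

        child = fork();
        if (child < 0) exit(1);

        if (!child) {
            /* Child: drop the server fds and fall through into main(). */
            close(FORKSRV_FD);
            close(FORKSRV_FD + 1);
            return;
        }

        /* Parent: report the child pid, wait, then report its status. */
        if (write(FORKSRV_FD + 1, &child, 4) != 4) exit(1);
        if (waitpid(child, &status, 0) < 0) exit(1);
        if (write(FORKSRV_FD + 1, &status, 4) != 4) exit(1);
    }
}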

Presented in our paper Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing (2019 IEEE Symposium on Security and Privacy).

Citing this repository:

@inproceedings{nagy:fullspeedfuzzing,
    title = {Full-speed Fuzzing: Reducing Fuzzing Overhead through Coverage-guided Tracing},
    author = {Stefan Nagy and Matthew Hicks},
    booktitle = {{IEEE} Symposium on Security and Privacy (Oakland)},
    year = {2019},
}

Developers: Stefan Nagy ([email protected]) and Matthew Hicks ([email protected])
License: MIT License
Disclaimer: This software is strictly a research prototype.

INSTALLATION

1. Download and build Dyninst (we used v9.3.2)

sudo apt-get install cmake m4 zlib1g-dev libboost-all-dev libiberty-dev
wget https://github.com/dyninst/dyninst/archive/v9.3.2.tar.gz
tar -xf v9.3.2.tar.gz dyninst-9.3.2/
mkdir dynBuildDir
cd dynBuildDir
cmake ../dyninst-9.3.2/ -DCMAKE_INSTALL_PREFIX=`pwd`
make
make install

2. Download UnTracer-AFL (this repo)

git clone https://github.com/FoRTE-Research/UnTracer-AFL

3. Configure environment variables

export DYNINST_INSTALL=/path/to/dynBuildDir
export UNTRACER_AFL_PATH=/path/to/UnTracer-AFL

export DYNINSTAPI_RT_LIB=$DYNINST_INSTALL/lib/libdyninstAPI_RT.so
export LD_LIBRARY_PATH=$DYNINST_INSTALL/lib:$UNTRACER_AFL_PATH
export PATH=$PATH:$UNTRACER_AFL_PATH

4. Build UnTracer-AFL

Update DYN_ROOT in UnTracer-AFL/Makefile to your Dyninst install directory. Then, run the following commands:

make clean && make all

USAGE

NOTE: We provide a set of fuzzing-ready benchmarks here: https://github.com/FoRTE-Research/FoRTE-FuzzBench.

First, compile all target binaries using "forkserver-only" instrumentation. As with AFL, you will need to manually set the C compiler (untracer-clang or untracer-gcc) and/or C++ compiler (untracer-clang++ or untracer-g++). Note that only non-position-independent target binaries are supported, so compile all target binaries with the compiler flag -no-pie (unnecessary for Clang). For example:

$ CC=/path/to/afl/untracer-clang CXX=/path/to/afl/untracer-clang++ ./configure --disable-shared
$ make clean all
Instrumenting in forkserver-only mode...

Then, run untracer-afl as follows:

untracer-afl -i [/path/to/seed/dir] -o [/path/to/out/dir] [optional_args] -- [/path/to/target] [target_args]
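
For instance, a hypothetical run against an instrumented target that reads its input from a file might look like the line below; seeds/, findings/, and ./target are illustrative names, and we assume AFL's @@ testcase-path placeholder convention carries over:

untracer-afl -i seeds/ -o findings/ -- ./target @@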

Status Screen

  • calib execs and trim execs - Number of testcase calibration and trimming executions, respectively. Tracing is done for both.
  • block coverage - Percentage of total blocks found (left) and the number of total blocks (right).
  • traced / queued - Ratio of traced versus queued testcases. This ratio should (ideally) be 1:1 but will increase as trace timeouts occur.
  • trace tmouts (discarded) - Number of testcases which timed out during tracing. Like AFL, we do not queue these.
  • no new bits (discarded) - Number of testcases which were marked coverage-increasing by the oracle but did not actually increase coverage. This should (ideally) be 0.
