Overview

Angora

Angora is a mutation-based, coverage-guided fuzzer. The main goal of Angora is to increase branch coverage by solving path constraints without symbolic execution.

Published Work

arXiv: Angora: Efficient Fuzzing by Principled Search, S&P 2018.

Building Angora

Build Requirements

  • Linux-amd64 (Tested on Ubuntu 16.04/18.04 and Debian Buster)
  • Rust stable (>= 1.31), which can be obtained using rustup
  • LLVM 4.0.0 - 7.1.0: run PREFIX=/path-to-install ./build/install_llvm.sh (see the sketch after this list)
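
For example, a minimal sketch of fetching LLVM with the bundled script (the install prefix below is illustrative, and the exact layout under it may differ):

PREFIX=$HOME/llvm-angora ./build/install_llvm.sh
# After appending the PATH and LD_LIBRARY_PATH entries (see Environment Variables below),
# check that the toolchain Angora will use is the one just installed:
clang --version        # should report an LLVM version between 4.0.0 and 7.1.0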

Environment Variables

Append the following entries to your shell configuration file (~/.bashrc, ~/.zshrc).

export PATH=/path-to-clang/bin:$PATH
export LD_LIBRARY_PATH=/path-to-clang/lib:$LD_LIBRARY_PATH

Fuzzer Compilation

The build script will resolve most dependencies and set up the runtime environment.

./build/build.sh

System Configuration

As with AFL, core dump notification to external utilities must be disabled:

echo core | sudo tee /proc/sys/kernel/core_pattern

Test

Check whether Angora was built successfully:

cd /path-to-angora/tests
./test.sh mini

Running Angora

Build Target Program

Angora compiles the target program into two separate binaries, each with its own instrumentation. Using an autoconf-based program as an example, the required steps are:

# Use the instrumenting compilers
CC=/path/to/angora/bin/angora-clang \
CXX=/path/to/angora/bin/angora-clang++ \
LD=/path/to/angora/bin/angora-clang \
PREFIX=/path/to/target/directory \
./configure --disable-shared

# Build with taint tracking support 
USE_TRACK=1 make -j
make install

# Save the compiled target binary into a new directory
# and rename it with .taint postfix, such as uniq.taint

# Build with light instrumentation support
make clean
USE_FAST=1 make -j
make install

# Save the compiled binary into the directory previously
# created and rename it with .fast postfix, such as uniq.fast

If you fail to build the target this way, try wllvm or gllvm, as described in Build a target program (a sketch follows).
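
As a rough sketch of that route (the gclang-based workflow and binary paths below are illustrative assumptions; see Build a target program for the authoritative steps), the target is first built with gllvm so that whole-program bitcode can be extracted, and the bitcode is then compiled twice with angora-clang:

# Build with gllvm so whole-program bitcode can be extracted (paths are illustrative)
CC=gclang CXX=gclang++ ./configure --disable-shared
make -j
get-bc src/uniq                 # extracts the whole-program bitcode, e.g. src/uniq.bc

# Compile the extracted bitcode twice with Angora's compiler wrapper
USE_FAST=1  /path/to/angora/bin/angora-clang src/uniq.bc -o uniq.fast
USE_TRACK=1 /path/to/angora/bin/angora-clang src/uniq.bc -o uniq.taint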

We have also implemented taint analysis with libdft64 instead of DFSan (see Use libdft64 for taint tracking).

Fuzzing

./angora_fuzzer -i input -o output -t path/to/taint/program -- path/to/fast/program [argv]
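
For example, a run against the uniq binaries built above might look like this (directory names are illustrative, and the AFL-style @@ placeholder is assumed to mark the input-file position):

./angora_fuzzer -i input -o output -t ./uniq.taint -- ./uniq.fast @@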

For more information, please refer to the documentation under the docs/ directory.

Comments
  • Unable to compile lavam programs correctly

    Hello Angora authors,

    I'm trying to reproduce the lavam evaluation within Magma's infrastructure. However, I think I have run into the following two issues. Could you help me check whether I'm doing anything wrong?

    Thank you in advance!

    The two issues are as follows:

    1. Angora cannot find any bugs, while AFLplusplus can easily discover some within a few minutes. From the log files I see that Angora reports: "Multiple inconsistent warnings. It caused by the fast and track programs has different behaviors. If most constraints are inconsistent, ensure they are compiled with the same environment. Otherwise, please report us."
    2. For who, AFLplusplus can only find fewer than 20 bugs after running for 5 hours; for the other targets it finds the numbers of bugs reported in your paper.

    You can find the scripts I use to compile and run the fuzzing campaigns here. Basically, the lavam programs are compiled with fuzzers/aflplusplus/instrument.sh and fuzzers/angora/instrument.sh, which set up some configuration and execute targets/lavam/build.sh.
    In targets/lavam/LAVAM you can find the source code patched following your instructions.

    To launch the fuzzing campaigns, cd into tools/captain and run ./run.sh run_lavamrc.
    run_lavamrc is the config file for the campaign. It creates a working directory in ~/lavam-results, builds Docker containers, and starts fuzzing with fuzzers/aflplusplus/run.sh and fuzzers/angora/run.sh. The fuzzing results are stored in ~/lavam-results/ar as tarballs.

    Please do let me know if you need any additional information.

    Spencer

    opened by spencerwuwu 1
  • Fix up compiler warnings

    • Correct signedness for c-strings in angora-clang
    • Const-correctness throughout
    • Move #[link] attribute to extern block

    Fixes all warnings emitted by clang version 14.

    opened by bossmc 0
  • Upgrade to GitHub-native Dependabot

    Dependabot Preview will be shut down on August 3rd, 2021. In order to keep getting Dependabot updates, please merge this PR and migrate to GitHub-native Dependabot before then.

    Dependabot has been fully integrated into GitHub, so you no longer have to install and manage a separate app. This pull request migrates your configuration from Dependabot.com to a config file, using the new syntax. When merged, we'll swap out dependabot-preview (me) for a new dependabot app, and you'll be all set!

    With this change, you'll now use the Dependabot page in GitHub, rather than the Dependabot dashboard, to monitor your version updates, and you'll configure Dependabot through the new config file rather than a UI.

    If you've got any questions or feedback for us, please let us know by creating an issue in the dependabot/dependabot-core repository.

    Learn more about migrating to GitHub-native Dependabot

    Please note that regular @dependabot commands do not work on this pull request.

    dependencies 
    opened by dependabot-preview[bot] 1
  • Angora compile IR

    Would Angora support compiling from an LLVM- or BAP-derived intermediate representation?

    I am trying to analyze a pre-compiled binary but couldn't figure out how:

     INFO  angora::fuzz_main > CommandOpt { mode: LLVM, id: 0, main: ("/input/azorult2", []), track: ("/input/azorult2", []), tmp_dir: "./output/bar/tmp", out_file: "./output/bar/tmp/cur_input", forksrv_socket_path: "./output/bar/tmp/forksrv_socket", track_path: "./output/bar/tmp/track", is_stdin: true, search_method: Gd, mem_limit: 200, time_limit: 1, is_raw: true, uses_asan: false, ld_library: "$LD_LIBRARY_PATH:/clang+llvm/lib", enable_afl: true, enable_exploitation: true }
    thread 'main' panicked at 'The program is not complied by Angora', fuzzer/src/check_dep.rs:55:9
    
    opened by aug2uag 1
  • Update rand requirement from 0.7 to 0.8

    Updates the requirements on rand to permit the latest version.

    Changelog

    Sourced from rand's changelog.

    [0.8.0] - 2020-12-18

    Platform support

    • The minimum supported Rust version is now 1.36 (#1011)
    • getrandom updated to v0.2 (#1041)
    • Remove wasm-bindgen and stdweb feature flags. For details of WASM support, see the getrandom documentation. (#948)
    • ReadRng::next_u32 and next_u64 now use little-Endian conversion instead of native-Endian, affecting results on Big-Endian platforms (#1061)
    • The nightly feature no longer implies the simd_support feature (#1048)
    • Fix simd_support feature to work on current nightlies (#1056)

    Rngs

    • ThreadRng is no longer Copy to enable safe usage within thread-local destructors (#1035)
    • gen_range(a, b) was replaced with gen_range(a..b). gen_range(a..=b) is also supported. Note that a and b can no longer be references or SIMD types. (#744, #1003)
    • Replace AsByteSliceMut with Fill and add support for [bool], [char], [f32], [f64] (#940)
    • Restrict rand::rngs::adapter to std (#1027; see also #928)
    • StdRng: add new std_rng feature flag (enabled by default, but might need to be used if disabling default crate features) (#948)
    • StdRng: Switch from ChaCha20 to ChaCha12 for better performance (#1028)
    • SmallRng: Replace PCG algorithm with xoshiro{128,256}++ (#1038)

    Sequences

    • Add IteratorRandom::choose_stable as an alternative to choose which does not depend on size hints (#1057)
    • Improve accuracy and performance of IteratorRandom::choose (#1059)
    • Implement IntoIterator for IndexVec, replacing the into_iter method (#1007)
    • Add value stability tests for seq module (#933)

    Misc

    • Support PartialEq and Eq for StdRng, SmallRng and StepRng (#979)
    • Added a serde1 feature and added Serialize/Deserialize to UniformInt and WeightedIndex (#974)
    • Drop some unsafe code (#962, #963, #1011)
    • Reduce packaged crate size (#983)
    • Migrate to GitHub Actions from Travis+AppVeyor (#1073)

    Distributions

    • Alphanumeric samples bytes instead of chars (#935)
    • Uniform now supports char, enabling rng.gen_range('A'..='Z') (#1068)
    • Add UniformSampler::sample_single_inclusive (#1003)

    Weighted sampling

    • Implement weighted sampling without replacement (#976, #1013)
    • rand::distributions::alias_method::WeightedIndex was moved to rand_distr::WeightedAliasIndex. The simpler alternative rand::distribution::WeightedIndex remains. (#945)
    • Improve treatment of rounding errors in WeightedIndex::update_weights (#956)
    • WeightedIndex: return error on NaN instead of panic (#1005)

    Documentation

    • Document types supported by random (#994)

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
    • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

    Additionally, you can set the following in your Dependabot dashboard:

    • Update frequency (including time of day and day of week)
    • Pull request limits (per update run and/or open at any time)
    • Automerge options (never/patch/minor, and dev/runtime dependencies)
    • Out-of-range updates (receive only lockfile updates, if desired)
    • Security updates (receive only security updates, if desired)
    dependencies 
    opened by dependabot-preview[bot] 0
  • showmap: added tool for displaying coverage data

    Analogous to afl-showmap. Logs code coverage information to a file (in the same format as afl-showmap).

    This is my first time writing Rust, so I hope that it's okay!

    opened by adrianherrera 0
Releases(1.3.0)
  • 1.3.0(Apr 13, 2022)

    • Support LLVM 11/12
    • Tested with Rust 1.6.* and Ubuntu 20.04
    • Fix issues
      • getc model
      • https://github.com/AngoraFuzzer/Angora/commit/b31af93bb7401a296af0ddaa7b80eaaed7f73415
      • https://github.com/AngoraFuzzer/Angora/issues/86
    • New PRs
  • 1.2.2(Jul 17, 2019)

    • Implementation of the never-zero counter: the idea is from Marc and Heiko in AFLplusplus. https://github.com/vanhauser-thc/AFLplusplus/blob/master/llvm_mode/README.neverzero

    • Add inst_ratio: issue #67

    • Fix ASan compatibility: do not instrument functions whose names start with "asan.module"

    Source code(tar.gz)
    Source code(zip)
  • 1.2.1(Jun 14, 2019)

  • 1.2.0(May 23, 2019)
