Fuzzing the Kernel Using UnicornAFL and AFL++

Overview

Unicorefuzz


Fuzzing the Kernel using UnicornAFL and AFL++. For details, skim through the WOOT paper or watch this talk at CCCamp19.

Is it any good?

yes.

AFL Screenshot

Unicorefuzz Setup

  • Install python2 & python3 (ucf itself uses python3; qemu/unicorn still needs python2 to build)
  • Run ./setup.sh, preferably inside a virtualenv (otherwise the python deps will be installed using --user). During install, afl++ and uDdbg, as well as the python deps, will be pulled and installed.
  • Enjoy ucf

Upgrading

When upgrading from an early version of ucf:

  • Unicorefuzz will notify you of config changes and new options automatically.
  • Alternatively, run ucf spec to print a commented spec of all config.py options.
  • probe_wrapper.py is now ucf attach.
  • harness.py is now named ucf emu.
  • The song remains the same.

Debug Kernel Setup (Skip this if you know how this works)

  • Create a qemu-img disk image and install your preferred OS on it through qemu
  • An easy way to get a working userspace up and running in QEMU is to follow the steps described by syzkaller, namely create-image.sh
  • For kernel customization you might want to clone your preferred kernel version and compile it on the host. This way you can also compile your own kernel modules (e.g. example_module).
  • To find the address of a loaded module in the guest OS, use cat /proc/modules to get the module's base address, then add the offset of the function you want to break on. If you specify MODULE and BREAK_OFFSET in config.py (see the sketch after this list), ucf uses ./get_mod_addr.sh to resolve the address automatically.
  • You can compile the kernel with debug info. Once the Linux kernel is built, start gdb from the kernel folder with gdb vmlinux. After other modules have been loaded, use the lx-symbols command in gdb to load their symbols (make sure the modules' .ko files are in your kernel folder). Then you can simply set breakpoints like break function_to_break.
  • To compile a custom kernel for Arch, download the current Arch kernel and set the .config to the Arch default. Then set DEBUG_KERNEL=y, DEBUG_INFO=y, GDB_SCRIPTS=y (for convenience), KASAN=y, KASAN_EXTRA=y. For convenience, we added a working example_config that can be placed in the linux dir.
  • To build only the kernel modules the guest actually needs, boot the guest, run lsmod > mylsmod, and copy mylsmod into the kernel source folder on the host. make LSMOD=mylsmod localmodconfig then selects only the modules the guest uses; build the kernel as usual with make. Next, mount the guest file system to /mnt and run make modules_install INSTALL_MOD_PATH=/mnt. Finally, create a new initramfs, which apparently has to be done on the guest system: use mkinitcpio -k <folder in /lib/modules/...> -g <where to put initramfs>, copy it back to the host, and tell qemu where your kernel and initramfs are located.
  • Setting breakpoints anywhere else is possible, too. For this, set BREAKADDR in config.py instead (see the sketch below).
  • For fancy debugging, ucf uses uDdbg.
  • Before fuzzing, run sudo ./setaflops.sh to prepare your system for fuzzing.
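
A minimal config.py sketch for the breakpoint settings mentioned above (values are placeholders; MODULE, BREAK_OFFSET and BREAKADDR are the names used in this README, and ucf spec prints the authoritative, commented list of options):

    # Breakpoint configuration in config.py -- placeholder values
    MODULE = "example_module"    # name of the loaded module containing the target function
    BREAK_OFFSET = 0x10          # offset of the target function inside that module

    # Alternatively, break at an absolute address instead:
    # BREAKADDR = 0xffffffffc0001000  # placeholder address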

Run

  • ensure a target gdbserver is reachable, for example via ./startvm.sh
  • adapt config.py (a sketch follows this list):
    • provide the target's gdbserver network address for the probe wrapper
    • provide the address of the target function to the probe wrapper and harness
    • make the harness put AFL's input at the desired memory location by adapting the place_input func in config.py
    • add all EXITs (addresses at which emulation should stop)
  • start ucf attach; it will (try to) connect to the target's gdbserver.
  • make the target execute the target function (e.g. by triggering it from inside the VM)
  • after the breakpoint has been hit, run ucf fuzz. Make sure afl++ is in the PATH. (Use ./resumeafl.sh to resume with the same input folder.)
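
The config.py for such a run could look roughly like the sketch below. GDB_HOST and GDB_PORT are assumed names for the gdbserver address and the place_input signature is inferred from the example further down; run ucf spec for the authoritative option list:

    # Run configuration sketch for config.py -- names partly assumed, values are placeholders
    GDB_HOST = "localhost"   # where the target's gdbserver listens (see ./startvm.sh)
    GDB_PORT = 1234

    # ...plus MODULE/BREAK_OFFSET or BREAKADDR as in the sketch above...

    # Addresses at which emulation should stop, e.g. returns of the target function
    EXITS = [0xffffffffc0001234]  # placeholder address

    def place_input(uc, input):
        # Write AFL's input wherever the target expects it;
        # see the sk_buff example below.
        pass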

Putting AFL's input at the correct location must be coded individually for most targets, but with modern binary analysis frameworks like IDA or Ghidra it's possible to find the desired location's address.

The following place_input method places the input at the data section of the sk_buff in key_extract:

    # needed imports in config.py:
    import struct
    from unicorn.x86_const import UC_X86_REG_RDX

    # inside place_input: read AFL's input into the sk_buff here
    rdx = uc.reg_read(UC_X86_REG_RDX)  # rdx holds a pointer to the sk_buff
    utils.map_page(uc, rdx)  # ensure the sk_buff struct is mapped
    bufferPtr = struct.unpack("<Q", uc.mem_read(rdx + 0xd8, 8))[0]
    utils.map_page(uc, bufferPtr)  # ensure the data buffer is mapped
    uc.mem_write(rdx, input)  # insert AFL's input
    uc.mem_write(rdx + 0xc4, b"\xdc\x05")  # fix the tail
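
Here, map_page ensures the page backing the given address is available in the emulator (pages get pulled in from the attached target on demand), so only memory that is actually touched needs to be transferred.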

QEMUing the Kernel

A few general pointers. When using ./startvm.sh, the VM can be debugged via gdb. Use:

    $ gdb
    (gdb) file ./linux/vmlinux
    (gdb) target remote :1234

This dynamic approach makes it rather easy to find breakpoint addresses, which can then be fed to config.py. In addition, startvm.sh forwards the guest's SSH port (22) to host port 8022, so you can ssh into the VM, which makes interacting with it much easier.

Debugging

You can step through the code, starting at the breakpoint, with any given input. The fancy debugging makes use of uDdbg: run ucf emu -d $inputfile. Flags to the harness (the thing wrapping afl-unicorn) that help debugging:

  • -d loads the target inside the unicorn debugger (uDdbg)
  • -t enables the afl-unicorn tracer, which prints every emulated instruction and displays memory accesses

Gotchas

A few things to consider.

FS_BASE and GS_BASE

For a long time, Unicorn did not offer a way to set model-specific registers (MSRs) such as FS_BASE and GS_BASE directly. The forked unicornafl version shipped with AFL++ finally supports it, so most of the ugly code from earlier versions could be scrapped.
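
As an illustration (a minimal sketch, not necessarily the exact code path Unicorefuzz takes internally), Unicorn builds with x86 MSR support let you write these registers through the UC_X86_REG_MSR pseudo-register:

    # Sketch: setting FS_BASE/GS_BASE as MSRs. Requires a unicorn/unicornafl build
    # with x86 MSR support; fs_base_value/gs_base_value are placeholders for
    # values read from the attached target.
    from unicorn.x86_const import UC_X86_REG_MSR

    FS_BASE_MSR = 0xC0000100
    GS_BASE_MSR = 0xC0000101

    uc.reg_write(UC_X86_REG_MSR, (FS_BASE_MSR, fs_base_value))
    uc.reg_write(UC_X86_REG_MSR, (GS_BASE_MSR, gs_base_value))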

Improve Fuzzing Speed

Right now, the Unicorefuzz ucf attach harness might need to be restarted manually after a certain number of pages has been allocated. Allocated pages should propagate back to the forkserver parent automatically, but might still get reloaded from disk for each iteration.

IO/Printthings

It's generally a good idea to nop out printk or other kernel printing functionality, if possible, once the target is loaded into the emulator.
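
One way to do this is a small patch applied before emulation starts, reusing the same helpers as the place_input example above. This is only a sketch: PRINTK_ADDR is a hypothetical placeholder, and the real address has to be looked up for your kernel build (e.g. via /proc/kallsyms):

    # Sketch: silence kernel printing by making printk return immediately.
    # PRINTK_ADDR is a placeholder, not a real address.
    PRINTK_ADDR = 0xffffffff81234560

    utils.map_page(uc, PRINTK_ADDR)     # make sure the page is mapped in the emulator
    uc.mem_write(PRINTK_ADDR, b"\xc3")  # overwrite the first byte with `ret`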

Troubleshooting

If you have trouble running Unicorefuzz, follow these rules; worst case, feel free to reach out to us, for example to @domenuk on Twitter. For some notes on debugging and developing ucf and afl-unicorn further, read DEVELOPMENT.md.

Just won't start

Run the harness without AFL (ucf emu -t ./sometestcase). Make sure you are either not inside a virtualenv or inside the correct one. If this works but it still crashes in AFL, set AFL_DEBUG_CHILD_OUTPUT=1 to see the harness output while fuzzing.

All testcases time out

Make sure ucf attach is running in the same folder and the breakpoint has been triggered.

Owner

Security in Telecommunications, the Computer Security Group at Berlin University of Technology