BMC

The code for the NSDI'21 paper "BMC: Accelerating Memcached using Safe In-kernel Caching and Pre-stack Processing".

BibTeX entry available here.

BMC (BPF Memory Cache) is an in-kernel cache for memcached. It enables runtime, crash-safe extension of the Linux kernel to process specific memcached requests before the execution of the standard network stack. BMC requires no modification of either the Linux kernel or the memcached application. Running memcached with BMC improves throughput by up to 18x compared to the vanilla memcached application.

Requirements

Linux kernel v5.3 or higher is required to run BMC.
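For reference, you can check which kernel version you are currently running with:

$ uname -r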

Other software dependencies are required to build BMC and Memcached-SR (see Building BMC and Building Memcached-SR).

Build instructions

Building BMC

BMC must be compiled against libbpf and other header files obtained from the kernel sources. The project does not include the kernel sources, but the kernel-src-download.sh and kernel-src-prepare.sh scripts automate downloading the kernel sources and preparing them for the compilation of BMC.

These scripts require the following software to be installed:

gpg curl tar xz make gcc flex bison libssl-dev libelf-dev

The project uses llvm and clang version 9 to build BMC, but more recent versions might work as well:

llvm-9 clang-9

Note that libelf-dev is also required to build libbpf and BMC.
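As an example, on a Debian-based distribution these dependencies can typically be installed with apt; the package names below are assumed for Debian/Ubuntu (where xz is packaged as xz-utils) and may differ on other distributions:

$ sudo apt install gpg curl tar xz-utils make gcc flex bison libssl-dev libelf-dev
$ sudo apt install llvm-9 clang-9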

With the previous software installed, BMC can be built with the following:

$ ./kernel-src-download.sh
$ ./kernel-src-prepare.sh
$ cd bmc && make

After BMC has been successfully built, kernel sources can be removed by running the kernel-src-remove.sh script from the project root.
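For example:

$ ./kernel-src-remove.sh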

Building Memcached-SR

Memcached-SR is based on memcached v1.5.19. Building it requires the following software:

clang-9 (or gcc-9) automake libevent-dev

Either clang-9 or gcc-9 is required in order to compile memcached without linking issues. Depending on your distribution, you might also need to use the -Wno-deprecated-declarations compilation flag.
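Again as an example on a Debian-based distribution (package names assumed for Debian/Ubuntu):

$ sudo apt install clang-9 automake libevent-dev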

Memcached-SR can be built with the following:

$ cd memcached-sr 
$ ./autogen.sh
$ CC=clang-9 CFLAGS='-DREUSEPORT_OPT=1 -Wno-deprecated-declarations' ./configure && make

The memcached binary will be located in the memcached-sr directory.
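As an illustration only (running Memcached-SR together with BMC is not covered in this section), the resulting binary can be started from the memcached-sr directory like a regular memcached instance; the options below are standard memcached flags and the values are placeholders:

$ ./memcached -U 11211 -t 8

Here -U enables UDP on the given port and -t sets the number of worker threads; adjust these to your setup.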

Further instructions

TC egress hook

BMC doesn't attach the tx_filter eBPF program to the TC egress hook itself; it needs to be attached manually.

To do so, you first need to make sure that the BPF filesystem is mounted; if it isn't, you can mount it with the following command:

# mount -t bpf none /sys/fs/bpf/
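To avoid mounting it twice, you can also make the mount conditional on whether /sys/fs/bpf is already a mount point (mountpoint is provided by util-linux):

# mountpoint -q /sys/fs/bpf || mount -t bpf none /sys/fs/bpf/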

Once BMC is running and the tx_filter program has been pinned to /sys/fs/bpf/bmc_tx_filter, you can attach it using the tc command line:

# tc qdisc add dev <interface> clsact
# tc filter add dev <interface> egress bpf object-pinned /sys/fs/bpf/bmc_tx_filter

After you are done using BMC, you can detach the program with these commands:

# tc filter del dev <interface> egress
# tc qdisc del dev <interface> clsact

And unpin the program with:

# rm /sys/fs/bpf/bmc_tx_filter

License

Files under the bmc directory are licensed under the GNU Lesser General Public License version 2.1.

Files under the memcached-sr directory are licensed under the BSD 3-Clause license.

Cite this work

BibTeX:

@inproceedings{265047,
	title        = {{BMC}: Accelerating Memcached using Safe In-kernel Caching and Pre-stack Processing},
	author       = {Yoann Ghigoff and Julien Sopena and Kahina Lazri and Antoine Blin and Gilles Muller},
	year         = 2021,
	month        = apr,
	booktitle    = {18th {USENIX} Symposium on Networked Systems Design and Implementation ({NSDI} 21)},
	publisher    = {{USENIX} Association},
	pages        = {487--501},
	isbn         = {978-1-939133-21-2},
	url          = {https://www.usenix.org/conference/nsdi21/presentation/ghigoff}
}