Python scripts for a generic performance testing infrastructure using Locust.


TODOs

  • Reference to published paper or online version of it
  • loadtest_plotter.py: Cleanup and reading data from files
  • ARS_simulation.py: Cleanup, documentation and control workloads and parameters of the simulation model through CLI
  • locust-parameter-variation.py: Cleanup and documentation
  • Move the files into subfolders (Executors, Load Testers, Evaluators, Systems under Test)

Locust Performance Testing Infrastructure

In [1] we introduced a generic performance testing infrastructure and used it in an industrial case study. Our idea is to have decoupled components, Python scripts in our case, that together allow us to:

  1. reproducibly execute a load testing tool with a set of parameters for a particular experiment, and
  2. evaluate the performance measurements, assisted by visualizations or automatic evaluators.

Generally, we have four types of components in our infrastructure:

  • Executors: execute a particular Load Tester, as long as the Load Tester provides a CLI or an API (see the sketch after this list);
  • Load Testers: execute the load test, parametrized with values given by an Executor; they have to output a logfile containing the response times;
  • Evaluators: postprocess the logfile and, for example, plot the response times;
  • Systems under Test (SUTs): target systems we want to test. Usually, the target systems will be external systems, e.g., web servers. In our case, we built software that simulates the behavior of a real system, in order to give others the means to roughly reproduce our experiments.
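
To illustrate the Executor/Load Tester contract, here is a minimal sketch of how an Executor can drive Locust through its CLI. It is not the actual executor.py; the flags shown follow current Locust versions (older releases used --no-web, -c, and --hatch-rate instead), and the concrete parameter values are assumptions:

    import subprocess

    # Minimal Executor sketch: run a Load Tester (here: Locust in headless
    # mode) with a fixed set of parameters, so that an experiment can be
    # repeated with exactly the same configuration.
    def run_load_test(script, host, users, spawn_rate, duration):
        cmd = [
            "locust",
            "-f", script,           # the Load Tester script to execute
            "--host", host,         # the System under Test
            "--headless",           # run without the web UI
            "-u", str(users),       # number of simulated clients
            "-r", str(spawn_rate),  # clients spawned per second
            "-t", duration,         # test duration, e.g. "10m"
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        run_load_test("locust_tester.py", "http://localhost:8080", 50, 5, "10m")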

More details about our generic performance testing infrastructure can be found in our paper [1].

This repository contains the aforementioned Python scripts:

  • Executors:
    • executor.py: executes Locust with a set of parameters;
    • locust-parameter-variation.py: executes Locust and keeps increasing the load. This is similar to Locust's Step Load Mode; however, our approach increases the number of clients for as long as the ARS complies with the real-time requirements, in order to find the saturation point of the ARS (see the sketch after this list).
  • Load Testers:
    • locust_tester.py: contains the Locust-specific code that performs the actual performance test. For demonstration purposes, this script tests ARS_simulation.py. Outputs a locust_log.log;
    • locust_multiple_requests: an enhanced version of locust_tester that sends additional requests to generate more load;
    • locust_teastore.py: performs load testing against TeaStore, or our simulated TeaStore.
  • Evaluators:
    • loadtest_plotter.py: reads the locust_log.log and plots the response times together with additional metrics, to better visualize whether the real-time requirements of EN 50136 are met.
  • SUTs
    • Alarm Receiving Software Simulation (ARS_simulation.py): simulates an industrial ARS based on data measured in the production environment of the GS company group.
    • TeaStore (teastore_simulation.py): simulates TeaStore based on a predictive model generated in a lab environment.
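
The parameter-variation idea behind locust-parameter-variation.py, sketched below. This is not the actual script; the flags, step size, file names, and the latency limit are assumptions for illustration:

    import subprocess

    # Sketch of the idea behind locust-parameter-variation.py: keep
    # increasing the number of simulated clients until the SUT no longer
    # meets the real-time requirement, i.e., until the saturation point
    # is found.
    LATENCY_LIMIT_MS = 10_000  # placeholder, not the actual EN 50136 figure

    def max_response_time(logfile="locust_log.log"):
        # Each relevant log line is assumed to end with
        # "Response time <value> ms".
        with open(logfile) as log:
            times = [float(line.split()[-2])
                     for line in log if "Response time" in line]
        return max(times) if times else 0.0

    clients = 10
    while True:
        subprocess.run(["locust", "-f", "locust_tester.py", "--headless",
                        "-u", str(clients), "-r", "10", "-t", "5m"],
                       check=True)
        if max_response_time() > LATENCY_LIMIT_MS:
            print(f"Saturation point reached at roughly {clients} clients")
            break
        clients += 10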

Instructions to reproduce results in our paper

Quick start

  • Clone the repository;
  • run pip3 install -r requirements.txt;
  • In the file ARS_simulation.py, make sure that the constant MASCOTS2020 is set to True;
  • open two terminal shells:
    1. run python3 ARS_simulation.py in one of them;
    2. run python3 executor.py in the other;
  • to stop the test, terminate the executor.py script;
  • run python3 loadtest_plotter.py, pass it the locust_log.log, and see the results. :)

Details

Using the performance testing infrastructure available in this repository, we conducted performance tests on a real-world alarm system provided by the GS company. To provide a way to reproduce our results without access to the particular alarm system, we built software that simulates the Alarm Receiving Software. The simulation model uses variables we identified as relevant; we also performed measurements in the production environment to initialize these variables correctly.

To reproduce our results, follow the steps in the section "Quick start". The scripts are already preconfigured to simulate a realistic workload, inject faults, and automatically recover from them. The recovery is performed after the time the real fault-management mechanism requires.
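
As a rough illustration of the simulation idea, consider the highly simplified sketch below. ARS_simulation.py is more elaborate, and all numbers here are placeholders rather than the measured values:

    import random
    import time

    # Simplified sketch: serve requests with a base service time,
    # occasionally inject a fault, and recover after the time the real
    # fault-management mechanism needs.
    BASE_SERVICE_TIME = 0.05   # seconds, placeholder
    FAULT_PROBABILITY = 0.001  # per request, placeholder
    RECOVERY_TIME = 30         # seconds, placeholder

    def handle_request():
        if random.random() < FAULT_PROBABILITY:
            time.sleep(RECOVERY_TIME)   # fault: stall until recovery completes
        time.sleep(BASE_SERVICE_TIME)   # normal service time
        return "OK"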

If you follow the steps and, for example, let the test run for about an hour, you will get results similar to the ones you can find in the folder "Tests under Fault".

Results after running our scripts for about an hour:

[Figure: Results]


Keep in mind that we use a simulated ARS here; in our paper we present measurements performed with a real system, so the results reproduced with the code here are slightly different.

Nonetheless, the overall observations we made in our paper are in fact reproducible.


Instructions on how to adapt our performance testing infrastructure to other uses

After cloning the repository, take a look at locust_tester.py. This is, basically, an ordinary Locust script that sends requests to the target system and measures the response time when the response arrives. Our locust_tester.py is special because:

  • we implemented a custom client instead of using the default;
  • we additionally log the response times to a logfile instead of using the .csv files Locust provides.

So, write a performance test using Locust, following the Locust developers' instructions on how to write a Locust script. The only thing to keep in mind is that your Locust script has to output the measured response times to a logfile in the same way our script does. Use logger.info("Response time %s ms", total_time) to log the response times.
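
A minimal sketch of such a script is shown below. For brevity it uses Locust's default HTTP client (our locust_tester.py uses a custom client instead), assumes Locust 1.x or newer, and the endpoint is a placeholder:

    import logging
    import time

    from locust import HttpUser, between, task

    # Log to the file the Evaluators expect; the message format must match
    # "Response time <value> ms" so that the logfile can be parsed later.
    logging.basicConfig(filename="locust_log.log", level=logging.INFO)
    logger = logging.getLogger(__name__)

    class DemoUser(HttpUser):
        wait_time = between(1, 2)

        @task
        def request_and_log(self):
            start = time.time()
            self.client.get("/")  # placeholder endpoint on the SUT
            total_time = (time.time() - start) * 1000
            logger.info("Response time %s ms", total_time)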

When you have your Locust script ready, execute it with python3 executor.py, passing the path to your script as an argument; when you want to finish the load test, terminate it with Ctrl + C.

Use python3 executor.py --help to get additional information.

Example call:

% python3 executor.py locust_scripts/locust_tester.py

After that, plot your results:

% python3 loadtest_plotter.py
Path to the logfile: locust_log.log
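
If you also want to adapt the evaluation step, a minimal Evaluator sketch could look like the following. It is not the actual loadtest_plotter.py, which additionally visualizes the EN 50136 metrics:

    import matplotlib.pyplot as plt

    # Minimal Evaluator sketch: read the response times back from the
    # logfile and plot them over the course of the test run. Each relevant
    # log line is assumed to end with "Response time <value> ms".
    times = []
    with open("locust_log.log") as log:
        for line in log:
            if "Response time" in line:
                times.append(float(line.rstrip().split()[-2]))

    plt.plot(times)
    plt.xlabel("Request number")
    plt.ylabel("Response time [ms]")
    plt.show()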