pytest plugin for a better developer experience when working with the PyTorch test suite

Overview

pytest-pytorch


What is it?

pytest-pytorch is a lightweight pytest plugin that enhances the developer experience when working with the PyTorch test suite if you come from a pytest background.

Why do I need it?

Some test cases in the PyTorch test suite are automatically generated when a module is loaded, in order to parametrize them. Trying to collect them with their names as written, e.g. pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar, is unfortunately not possible. If you are used to this syntax or your IDE relies on it (PyCharm, VSCode), you can install pytest-pytorch to make it work.

How do I install it?

You can install pytest-pytorch with pip:

$ pip install pytest-pytorch

or with conda:

$ conda install -c conda-forge pytest-pytorch

How do I use it?

With pytest-pytorch installed you can select test cases and tests as if the instantiation for different devices were performed by @pytest.mark.parametrize:

Use case                              Command
Run a test case against all devices   pytest test_foo.py::TestBar
Run a test case against one device    pytest test_foo.py::TestBar -k "$DEVICE"
Run a test against all devices        pytest test_foo.py::TestBar::test_baz
Run a test against one device         pytest test_foo.py::TestBar::test_baz -k "$DEVICE"
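
For example, to run only the CPU variant of a single test (substituting your own module, test case, and test names):

$ pytest test_foo.py::TestBar::test_baz -k "cpu"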

Can I have a little more background?

PyTorch uses its own method for generating tests that is for the most part compatible with unittest and pytest. Its custom test generation allows test templates to be written and instantiated for different device types, data types, and operators. Consider the following module test_foo.py:

from torch.testing._internal.common_utils import TestCase
from torch.testing._internal.common_device_type import instantiate_device_type_tests

class TestFoo(TestCase):
    def test_bar(self, device):
        pass
    
    def test_baz(self, device):
        pass

instantiate_device_type_tests(TestFoo, globals())

Assuming we "cpu" and "cuda" are available as devices, we can collect four tests:

  1. test_foo.py::TestFooCPU::test_bar_cpu,
  2. test_foo.py::TestFooCPU::test_baz_cpu,
  3. test_foo.py::TestFooCUDA::test_bar_cuda, and
  4. test_foo.py::TestFooCUDA::test_baz_cuda.

From a pytest perspective this is similar to decorating TestFoo with @pytest.mark.parametrize("device", ("cpu", "cuda")), which would result in

  1. test_foo.py::TestFoo::test_bar[cpu],
  2. test_foo.py::TestFoo::test_bar[cuda],
  3. test_foo.py::TestFoo::test_baz[cpu], and
  4. test_foo.py::TestFoo::test_baz[cuda].
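
For illustration, a plain-pytest sketch that would produce exactly these names (note that this TestFoo is an ordinary pytest class rather than a PyTorch TestCase):

import pytest

# Class-level parametrize applies the "device" parameter to every test method.
@pytest.mark.parametrize("device", ("cpu", "cuda"))
class TestFoo:
    def test_bar(self, device):
        pass

    def test_baz(self, device):
        pass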

Since the PyTorch test framework renames test cases and tests, naively running pytest test_foo.py::TestFoo or pytest test_foo.py::TestFoo::test_bar fails, because pytest can't find anything matching these names. Of course you can get around this with the name-based matching that pytest offers through the -k command line flag.
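
For example, both instantiated variants of test_bar can be selected with a keyword expression:

$ pytest test_foo.py -k "test_bar"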

pytest-pytorch performs this matching for you, so you can keep your familiar workflow and your IDE works out of the box.

How do I contribute?

First and foremost: thank you for your interest in the development of pytest-pytorch! We appreciate all contributions, be it code or something else. Check out our contribution guidelines for details.

Comments
  • Fix broken link in readme

    The blog link provided in the readme is broken. Please fix it.

    https://deploy-preview-211--quansight-labs.netlify.app/blog/2021/06/pytest-pytorch/

    I would suggest to replace it with:

    https://labs.quansight.org/blog/2021/06/pytest-pytorch/

    :bulb: Note: I am pushing a PR to fix this.

    opened by sugatoray 1
  • Change test workflows from PyTorch nightly to stable

    Previously, we used PyTorch nightly to have a second device available in CI. As of torch==1.9, the "meta" device is included in the stable binaries, so there is no need for the less safe nightly testing anymore.

    opened by pmeier 0
  • remove duplicate tests

    In some cases new_cmds == legacy_cmds. This made it more verbose to write and additionally resulted in duplicate tests.

    After this PR, if no legacy_cmds is passed to Config, the value of new_cmds is used. Plus, duplicate configs are filtered out.

    opened by pmeier 0
  • trim the test matrix

    We don't need to test this for every OS / Python combination. It should be sufficient to test every OS with the minimum Python requirement as well as one OS (Linux) with every supported Python version.

    This should speed up the CI runs and save some resources.

    opened by pmeier 0
  • refactor test suite to test the actual collection

    Before this PR, we tested the collection by giving each test a different outcome based on the respective parameter and afterwards checking the pytest result against that. This has two downsides:

    1. It takes more mental effort to parse not only which tests will run, but also what the outcome of all run tests should be.
    2. Since we could only test against the aggregated result for multiple tests, we can't be sure the test is actually right.

    With this PR we are actually testing the collection by parsing the pytest output. Additionally, you can now add this code block

    # ======================================================================================
    # This block is necessary to autogenerate the parametrization for
    # tests/test_plugin.py::test_standard_collection.
    # It needs to be placed **after** the import of 'instantiate_device_type_tests' and
    # **before** its first usage.
    # ======================================================================================
    try:
        from _spy import Spy
    
        __spy__ = Spy()
        del Spy
        instantiate_device_type_tests = __spy__(instantiate_device_type_tests)
    except ModuleNotFoundError:
        pass
    # ======================================================================================
    

    to a test file to automatically test the selection of

    • everything in the file,
    • every test case, and
    • every test case function.

    Anything beyond that still needs to be configured manually.
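
    As an aside, a collection test of this kind could also be written with pytest's built-in pytester fixture. A minimal sketch, assuming plain @pytest.mark.parametrize instead of this repo's actual setup:

    # Requires pytest_plugins = ["pytester"] in a conftest.py (pytest >= 6.2).
    def test_collects_both_devices(pytester):
        pytester.makepyfile(
            test_foo="""
            import pytest

            @pytest.mark.parametrize("device", ("cpu", "meta"))
            class TestFoo:
                def test_bar(self, device):
                    pass
            """
        )
        result = pytester.runpytest("--collect-only", "-q")
        # fnmatch escaping: "[[]" matches a literal "[" and "[]]" a literal "]".
        result.stdout.fnmatch_lines(
            [
                "test_foo.py::TestFoo::test_bar[[]cpu[]]",
                "test_foo.py::TestFoo::test_bar[[]meta[]]",
            ]
        )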

    opened by pmeier 0
  • support selection of tests that use op infos

    Fixes #16.

    Currently we rely on the device identifier coming directly after the test case function name. That is no longer true when using OpInfos: the name of the instantiated test follows the scheme (template_name)_(op_name)_(device)_(dtype).

    This is a complete rewrite of the internal matching logic:

    • test case: Test cases are only parametrized by the device. Since every TestCase has a device_type attribute, we can simply strip the device identifier from the instantiated name (see the sketch below).
    • test case function: Since both template_name and op_name in the pattern above might contain underscores and are themselves separated by a single underscore, it is impossible to split the name into its two parts without further knowledge. To overcome this, we can inspect the source of the function and extract the template_name (which is the function name) directly.
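
    A minimal sketch of the first strategy (strip_device is a hypothetical helper, not the plugin's actual code):

    def strip_device(instantiated_name, device_type):
        # e.g. "TestFooCPU" with device_type "cpu" -> "TestFoo"
        suffix = device_type.upper()
        if instantiated_name.endswith(suffix):
            return instantiated_name[: -len(suffix)]
        return instantiated_name

    assert strip_device("TestFooCPU", "cpu") == "TestFoo"
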
    opened by pmeier 0
  • Re-add support for OpInfo's

    Consider the following test setup:

    from torch.testing._internal.common_device_type import (
        instantiate_device_type_tests,
        ops,
    )
    from torch.testing._internal.common_utils import TestCase
    from torch.testing._internal.common_methods_invocations import OpInfo
    
    BazOpInfo = OpInfo("add")
    
    
    class TestFoo(TestCase):
        @ops([BazOpInfo])
        def test_bar(self, device, dtype, op):
            pass
    
    
    instantiate_device_type_tests(TestFoo, globals(), only_for="cpu")
    

    Running pytest test_foo.py --collect-only on that results in:

    <PyTorchTestCase TestFooCPU>
      <PyTorchTestCaseFunction test_bar_add_cpu_float32>
      <PyTorchTestCaseFunction test_bar_add_cpu_float64>
    <PyTorchTestCase TestFooMETA>
      <PyTorchTestCaseFunction test_bar_add_meta_float32>
      <PyTorchTestCaseFunction test_bar_add_meta_float64>
    

    Naming schemes:

    • test cases: (template_name)(device)
    • test case functions: (template_name)_(op_name)_(device)_(dtype)

    After #12, test case functions require the device identifier to follow right after the template name. Thus, it is no longer possible to select an individual test by name, e.g. pytest test_foo.py::TestFoo::test_bar.

    opened by pmeier 0
  • make dtype testing more concise

    Instead of instantiating the tests for all devices and using the @onlyCPU decorator everywhere, we now use the only_for keyword when instantiating. With that, the meta tests are not generated in the first place and do not need to be considered for the number of skipped tests.

    opened by pmeier 0
  • Do not require torch at installation

    Right now we have torch as a dependency:

    https://github.com/Quansight/pytest-pytorch/blob/bd98f6b23214460605e4b8c0ee2bd4956e846291/setup.cfg#L32-L34

    If you set up a development environment to work on PyTorch, you would probably not have torch installed already. Thus, installing pytest-pytorch would also install torch, which is usually not desired.

    opened by pmeier 0
  • Support for nested test case names

    Currently we match the test case (function) names based on this:

    https://github.com/Quansight/pytest-pytorch/blob/ff8f2d86906486a2d437b2617ef9973394f5e216/pytest_pytorch/plugin.py#L9-L10

    This works well until you have a setup similar to this:

    class TestFoo(TestCase):
        pass
    
    class TestFooBar(TestCase):
        pass
    

    If you run pytest test_foo.py::TestFoo on this, both test cases are collected instead of just TestFoo.
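
    An illustrative sketch of the fix, assuming a fixed set of device identifiers (hypothetical values, not the plugin's actual pattern):

    import re

    DEVICE_IDENTIFIERS = ("CPU", "CUDA", "META")

    def matches_template(instantiated_name, template_name):
        # Anchor the pattern so only a device identifier may follow the
        # template name; a bare prefix match would also hit TestFooBarCPU.
        pattern = rf"^{re.escape(template_name)}({'|'.join(DEVICE_IDENTIFIERS)})$"
        return re.match(pattern, instantiated_name) is not None

    assert matches_template("TestFooCPU", "TestFoo")
    assert not matches_template("TestFooBarCPU", "TestFoo")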

    opened by pmeier 0
  • CI tests are failing

    The CI tests are failing because we need torch>=1.9, which is only available through the nightlies. Unfortunately, tox-ltt is not able to handle the nightly channel yet.

    opened by pmeier 0
Releases(v0.2.1)
  • v0.2.1(May 25, 2021)

    This adds the --disable-pytest-pytorch command line option (#25), which makes it easier to debug incompatibilities with the vanilla pytest collection.
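
    For example:

    $ pytest test_foo.py --disable-pytest-pytorch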

  • v0.2.0(Apr 21, 2021)

    This minor release adds support for OpInfos, which are used more and more throughout the PyTorch test suite (#17).

    Furthermore, @xmnlab helped us get pytest-pytorch into conda-forge. Installation instructions can be found in the README (#18).

  • v0.1.1(Apr 20, 2021)

    This release includes two minor improvements:

    1. Support for selecting individual test cases if their names are nested, i.e. TestFoo and TestFooBar (#12)
    2. Removal of PyTorch as an installation requirement (#14)
  • v0.1.0(Apr 14, 2021)

Owner
Quansight
We grow talent, build technology, and discover products by helping companies grow OSS communities to organize and analyze their data.