Selects tests affected by changed files. Works as a continuous test runner when combined with pytest-watch.

Overview

This is a pytest plugin that automatically selects and re-executes only the tests affected by recent changes. How is this possible in a dynamic language like Python, and how reliable is it? Read here: Determining affected tests
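As a rough illustration of the core idea (a simplified sketch with made-up names, not testmon's actual implementation), this style of selection boils down to fingerprinting the code blocks each test executed and re-running a test only when one of its fingerprints changes:

```python
# Simplified sketch of change-driven test selection (illustrative only).
import hashlib

def fingerprint(block_source: str) -> str:
    """Checksum of one code block (e.g. a function body)."""
    return hashlib.sha1(block_source.encode()).hexdigest()

# Dependency database built during a recorded run:
# test name -> {code block name: fingerprint at record time}
deps = {
    "test_foo": {"mymodule.foo": fingerprint("def foo():\n    return 1\n")},
}

def affected_tests(deps, current_blocks):
    """Select tests whose recorded fingerprints no longer match the sources."""
    return [
        test
        for test, blocks in deps.items()
        if any(
            fingerprint(current_blocks.get(name, "")) != fp
            for name, fp in blocks.items()
        )
    ]

# Unchanged source: nothing to re-run.
assert affected_tests(deps, {"mymodule.foo": "def foo():\n    return 1\n"}) == []
# Edited function body: test_foo is selected.
assert affected_tests(deps, {"mymodule.foo": "def foo():\n    return 2\n"}) == ["test_foo"]
```

The real plugin additionally has to cope with fixtures, imports, and external state, which is what the linked article discusses.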

Quickstart

pip install pytest-testmon

# build the dependency database and save it to .testmondata
pytest --testmon

# change some of your code (with test coverage)

# only run tests affected by recent changes
pytest --testmon

To learn more about specifying multiple project directories and troubleshooting, please head to testmon.org

Comments
  • pytest-xdist support

    When combining testmon and xdist, more tests are rerun than necessary. I suspect the xdist runners don't update .testmondata.

    Is testmon compatible with xdist or is this due to misconfiguration?

    enhancement 
    opened by timdiels 22
  • Rerun tests should not fail.

There is a test (A) that fails when running pytest --testmon. After fixing test A, running pytest --testmon still shows the results from when test A was failing. If we change some code that affects test A with a NOP and run pytest --testmon, test A passes once, but running pytest --testmon again immediately afterwards brings back what I believe is a cached result: the failure.

I believe the cached result in this case should be updated so it doesn't fail.

    opened by henningphan 17
  • Refactor

This addresses issues #53, #52, #51, #50, #32, and partially #42 (a poor man's solution).

With the limited test cases it worked and the performance was OK. If @blueyed and @boxed tell me it works for them, I'll merge and release.

    opened by tarpas 14
  • performance with large source base

There is a lot of repetition in the JSON arrays in the .testmondata SQLite database; let's make it more efficient. @boxed, you mentioned you have a 20 MB per-commit DB, so this might interest you.

    opened by tarpas 10
  • Override pytest's exit code 5 for "no tests were run"

    py.test will exit with code 5 in case no tests were run (https://github.com/pytest-dev/pytest/issues/812, https://github.com/pytest-dev/pytest/issues/500#issuecomment-112204804).

    I can see that this is useful in general, but with pytest-testmon this should not be an error.

    Would it be possible and is it sensible to override pytest's exit code to be 0 in that case then?

    My use case is using pytest-watch and its --onfail feature, which should not be triggered when testmon makes py.test skip all tests.

    opened by blueyed 10
  • Using the debugger corrupts the testmon database

    From @max-imlian (https://github.com/tarpas/pytest-testmon/issues/90#issuecomment-408984214): FYI, I'm still having real issues with testmon, where it doesn't run tests despite both changes in the code and existing failures, even when I pass --tlf.

    I love the goals of testmon, and it performs so well in 90% of cases that it's become an essential part of my workflow. As a result, it's frustrating when it ignores tests that have changed. I've found that --tlf often doesn't work, which is a shame, as it was often a 'last resort' when testmon made a mistake by ignoring too many tests.

    Is there any info I can supply that would help debug this? I'm happy to post anything.

    Would there be any use for a 'conservative' mode, where testmon would lean towards testing too much? A Type I error is far less costly than a Type II.

    opened by tarpas 9
  • testmon_data.fail_reports might contain both failed and skipped

    testmon_data.fail_reports might contain both failed and skipped:

    (Pdb++) nodeid
    'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output'
    (Pdb++) pp self.testmon_data.fail_reports[nodeid]
    [{'duration': 0.0020258426666259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.003198385238647461,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': 'tests/test_renderers.py:753: in test_schemajs_output\n'
                  '    output = renderer.render(\'data\', renderer_context={"request": request})\n'
                  'rest_framework/renderers.py:862: in render\n'
                  '    codec = coreapi.codecs.CoreJSONCodec()\n'
                  "E   AttributeError: 'NoneType' object has no attribute 'codecs'",
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'failed',
      'sections': [],
      'user_properties': [],
      'when': 'call'},
     {'duration': 0.008923768997192383,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 742, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'},
     {'duration': 0.0012934207916259766,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': ['tests/test_renderers.py', 743, 'Skipped: coreapi is not installed'],
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'skipped',
      'sections': [],
      'user_properties': [],
      'when': 'setup'},
     {'duration': 0.026836156845092773,
      'keywords': {'TestSchemaJSRenderer': 1, 'django-rest-framework': 1, 'skipif': 1, 'test_schemajs_output': 1, 'tests/test_renderers.py': 1},
      'location': ['tests/test_renderers.py', 743, 'TestSchemaJSRenderer.test_schemajs_output'],
      'longrepr': None,
      'nodeid': 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output',
      'outcome': 'passed',
      'sections': [],
      'user_properties': [],
      'when': 'teardown'}]
    (Pdb++) pp [unserialize_report('testreport', report) for report in self.testmon_data.fail_reports[nodeid]]
    [<TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='call' outcome='failed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='setup' outcome='skipped'>,
     <TestReport 'tests/test_renderers.py::TestSchemaJSRenderer::test_schemajs_output' when='teardown' outcome='passed'>]
    

    I might have messed up some internals while debugging #101 / #102, but I think it should be ensured that this can never happen, e.g. at the DB level.

    opened by blueyed 9
  • Combination of --testmon and --tlf executes failing tests multiple times in some circumstances

    I have a situation where executing pytest --testmon --tlf module1/module2 after deleting .testmondata will yield 3 failures.

    The next run of pytest --testmon --tlf module1/module2 yields 6 failures (3 duplicates of the first 3) and each subsequent run yields 3 more duplicates.

    If I run pytest --testmon --tlf module1/module2/tests I get back to 3 tests, the duplication starts as soon as I remove the final tests in the path (or execute with the root path of the repository).

    I've tried to find a simple way to reproduce this but failed so far. I've also tried looking at what changes in .testmondata but didn't spot anything.

    I'd be grateful if you could give me a hint as to how I might reproduce or analyze this, as I unfortunately can't share the code at the moment.

    opened by TauPan 9
  • deleting code causes an internal exception

    I see this with pytest-testmon > 0.8.3 (downgrading to this version fixes it for me).

    How to reproduce:

    • delete .testmondata
    • run py.test --testmon once
    • Now delete some code (in my case, it was decorated with # pragma: no cover, not sure if that's relevant)
    • Call py.test --testmon again and look at a backtrace like the following:
    ===================================================================================== test session starts =====================================================================================
    platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
    Django settings: sis.settings_devel (from ini file)
    testmon=True, changed files: sis/lib/python/sis/rest.py, skipping collection of 529 items, run variant: default
    rootdir: /home/delgado/nobackup/git/sis/software, inifile: pytest.ini
    plugins: testmon-0.9.4, repeat-0.4.1, env-0.6.0, django-3.1.2, cov-2.4.0
    collected 122 items 
    INTERNALERROR> Traceback (most recent call last):
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 98, in wrap_session
    INTERNALERROR>     session.exitstatus = doit(config, session) or 0
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 132, in _main
    INTERNALERROR>     config.hook.pytest_collection(session=session)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 141, in pytest_collection
    INTERNALERROR>     return session.perform_collect()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 602, in perform_collect
    INTERNALERROR>     config=self.config, items=items)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    INTERNALERROR>     res = hook_impl.function(*args)
    INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/testmon/pytest_testmon.py", line 165, in pytest_collection_modifyitems
    INTERNALERROR>     assert item.nodeid not in self.collection_ignored
    INTERNALERROR> AssertionError: assert 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' not in set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...])
    INTERNALERROR>  +  where 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' = <TestCaseFunction 'test_should_fail_for_expired_events'>.nodeid
    INTERNALERROR>  +  and   set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...]) = <testmon.pytest_testmon.TestmonDeselect object at 0x7f85679b82d0>.collection_ignored
    
    ==================================================================================== 185 tests deselected =====================================================================================
    =============================================================================== 185 deselected in 0.34 seconds ================================================================================
    
    opened by TauPan 9
  • Performance issue on big projects

    We have a big project with a big test suite. When starting pytest with testmon enabled it takes something like 8 minutes just to start when running (almost) no tests. A profile dump reveals this:

    Wed Dec  7 14:37:13 2016    testmon-startup-profile
    
             353228817 function calls (349177685 primitive calls) in 648.684 seconds
    
       Ordered by: cumulative time
       List reduced from 15183 to 100 due to restriction <100>
    
       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.001    0.001  648.707  648.707 env/bin/py.test:3(<module>)
     10796/51    0.006    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:335(_hookexec)
     10796/51    0.017    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:332(<lambda>)
     11637/51    0.063    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:586(execute)
            1    0.000    0.000  648.612  648.612 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/config.py:29(main)
      10596/2    0.016    0.000  648.612  324.306 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:722(__call__)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:80(pytest_cmdline_main)
            1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:70(init_testmon_data)
            1    0.004    0.004  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:258(read_fs)
         4310    1.385    0.000  545.292    0.127 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:224(test_should_run)
         4310    3.995    0.001  542.647    0.126 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:229(<dictcomp>)
      4331550   54.292    0.000  538.652    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:104(checksums)
            1    0.039    0.039  537.138  537.138 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:273(compute_unaffected)
     73396811   67.475    0.000  484.571    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:14(checksum)
     73396871  360.852    0.000  360.852    0.000 {method 'encode' of 'str' objects}
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
            1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
    
    

    As you can see, the last line accounts for 83 seconds cumulative, while the checksum and str.encode lines above account for 484 and 360 seconds respectively.

    This hurts our use case a LOT, and since we use a reference .testmondata file that has been produced by a CI job, it seems excessive (and useless) to recalculate this on each machine when it could be calculated once up front.

    So, what do you guys think about caching this data in .testmondata?

    opened by boxed 9
  • Failure still reported when whole module gets re-run

    I have just seen this, after marking TestSchemaJSRenderer (which only contains test_schemajs_output) with @pytest.mark.skipif(not coreapi, reason='coreapi is not installed'):

    % .tox/venvs/py36-django20/bin/pytest --testmon
    ==================================================================================== test session starts =====================================================================================
    platform linux -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
    testmon=True, changed files: tests/test_renderers.py, skipping collection of 118 files, run variant: default
    rootdir: /home/daniel/Vcs/django-rest-framework, inifile: setup.cfg
    plugins: testmon-0.9.11, django-3.2.1, cov-2.5.1
    collected 75 items / 1219 deselected                                                                                                                                                         
    
    tests/test_renderers.py F                                                                                                                                                              [  0%]
    tests/test_filters.py F                                                                                                                                                                [  0%]
    tests/test_renderers.py ..............................................Fs                                                                                                               [100%]
    
    ========================================================================================== FAILURES ==========================================================================================
    _________________________________________________________________________ TestSchemaJSRenderer.test_schemajs_output __________________________________________________________________________
    tests/test_renderers.py:753: in test_schemajs_output
        output = renderer.render('data', renderer_context={"request": request})
    rest_framework/renderers.py:862: in render
        codec = coreapi.codecs.CoreJSONCodec()
    E   AttributeError: 'NoneType' object has no attribute 'codecs'
    _________________________________________________________________ BaseFilterTests.test_get_schema_fields_checks_for_coreapi __________________________________________________________________
    tests/test_filters.py:36: in test_get_schema_fields_checks_for_coreapi
        assert self.filter_backend.get_schema_fields({}) == []
    rest_framework/filters.py:36: in get_schema_fields
        assert coreschema is not None, 'coreschema must be installed to use `get_schema_fields()`'
    E   AssertionError: coreschema must be installed to use `get_schema_fields()`
    ________________________________________________________________ TestDocumentationRenderer.test_document_with_link_named_data ________________________________________________________________
    tests/test_renderers.py:719: in test_document_with_link_named_data
        document = coreapi.Document(
    E   AttributeError: 'NoneType' object has no attribute 'Document'
    ====================================================================================== warnings summary ======================================================================================
    None
      [pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.
    
    -- Docs: http://doc.pytest.org/en/latest/warnings.html
    ======================================================== 3 failed, 46 passed, 1 skipped, 1219 deselected, 1 warnings in 3.15 seconds =========================================================
    

    TestSchemaJSRenderer.test_schemajs_output should not show up in FAILURES.

    Note also that tests/test_renderers is listed twice, where the failure appears to come from the first entry.

    (running without --testmon shows the same number of tests (tests/test_renderers.py ..............................................Fs)).

    opened by blueyed 8
  • feat: merge DB

    This is a very rough draft PR to address this need.

    I had a hard time understanding how testmon works internally, and also how to test this.

    I'll happily get any feedback, so we can agree and merge this in testmon.

    It would be very helpful for us, as we have multiple workers in our CI that run tests, and we need to merge the testmon results so they can be reused later.

    opened by ElPicador 0
  • multiprocessing does not seem to be supported

    Suppose I have the following code and tests

    # mymodule.py
    class MyClass:
        def foo(self):
            return "bar"
    
    # test_mymodule.py
    import multiprocessing
    
    def __run_foo():
        from mymodule import MyClass
        c = MyClass()
        assert c.foo() == "bar"
    
    def test_foo():
        process = multiprocessing.Process(target=__run_foo)
        process.start()
        process.join()
        assert process.exitcode == 0
    

    After a first successful run of pytest --testmon test_mymodule.py, one can change the implementation of foo() without testmon noticing, and no new runs are triggered.

    When running pytest --cov manually, we do get a trace (screenshot attached).
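One possible workaround, sketched under the assumption that testmon only records code executed in the main process (MyClass is inlined here to keep the example self-contained; in the report it lives in mymodule.py), is to exercise the same code path in-process alongside the subprocess check:

```python
# Hedged workaround sketch, not an official testmon recommendation:
# call the target in the main process too, so testmon records a code
# dependency on it, while keeping the subprocess-based check.
import multiprocessing

class MyClass:  # stand-in for mymodule.MyClass, inlined to stay runnable
    def foo(self):
        return "bar"

def _run_foo():
    assert MyClass().foo() == "bar"

def test_foo_inprocess():
    # executed under coverage in the main process: testmon sees the dependency
    _run_foo()

def test_foo_subprocess():
    # the original check; the child process is invisible to testmon's tracing
    p = multiprocessing.Process(target=_run_foo)
    p.start()
    p.join()
    assert p.exitcode == 0
```

Changing foo() now affects test_foo_inprocess, which is enough to get the module re-run; the subprocess variant still verifies the multiprocessing behavior itself.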

    opened by janbernloehr 1
  • Changes to global / class variables are ignored (if no method of their module is executed)

    Suppose you have the following files

    # mymodule.py
    foo_bar = "value"
    
    class MyClass:
        FOO = "bar"
    
    # test_mymodule.py
    from mymodule import MyClass, foo_bar
    
    def test_module():
        assert foo_bar == "value"
        assert MyClass.FOO == "bar"
    

    Now running pytest --testmon test_mymodule.py does not rerun test_module() when the value of MyClass.FOO or foo_bar is changed. Even worse, FOO can be completely removed, e.g.

    class MyClass:
        pass
    

    without triggering a re-run.

    When running pytest --cov manually, there does seem to be a trace (screenshot attached).

    opened by janbernloehr 2
  • Fix/improve linting of the code when using pytest-pylint

    This change also includes changed source code files when pytest-pylint is enabled. By default, source code files are ignored, so pylint is not able to process them.

    opened by msemanicky 1
  • Testmon very sensitive to library changes

    Background

    I have found pytest-testmon to be very sensitive to even the slightest changes to the packages in the environment it is installed in. This makes it very difficult to use pytest-testmon in practice when sharing a .testmondata file between our developers.

    Use Case:

    A developer is trying out a new library foo and has written a wrapper foo_bar.py for it. The developer only wants to run test_foo_bar.py for foo_bar.py; however, the entire test suite is re-run because foo was installed.

    Example Solution

    Flag for disregarding library changes

    • I am happy to contribute a solution, if one is considered possible.
    • If the library needs funding for a solution, that is an option as well.
    opened by dgot 1
  • How does testmon handle data files?

    If I have an open('path/to/file.json').read() call in my code, would testmon be able to flag changes to that file?

    I saw there is something called file tracers in coverage, but I'm not sure whether it applies to this use case.

    opened by uriva 1
Releases: v1.4.3b1