Declarative HTTP Testing for Python and anything else

Gabbi

Gabbi is a tool for running HTTP tests where requests and responses are represented in a declarative YAML-based form. The simplest test looks like this:

tests:
- name: A test
  GET: /api/resources/id

See the docs for more details on the many features and formats for setting request headers and bodies and evaluating responses.
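
A slightly fuller test, sketching some of those features, might look like the following (the endpoint, payload, and JSON fields here are illustrative assumptions, not anything gabbi itself requires):

tests:
- name: create a resource with expectations
  POST: /api/resources
  request_headers:
    content-type: application/json
  data:
    # With a JSON content-type, this mapping is sent as the JSON request body.
    name: a-resource
  status: 201
  response_headers:
    # A value wrapped in / is treated as a regular expression.
    content-type: /application/json/
  response_json_paths:
    # JSONPath expression evaluated against the response body.
    $.name: a-resource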

Gabbi is tested with Python 3.6, 3.7, 3.8, 3.9 and pypy3.

Tests can be run using unittest-style test runners, pytest, or from the command line with the gabbi-run script.

There is a gabbi-demo repository which provides a tutorial via its commit history. The demo builds a simple API using gabbi to facilitate test-driven development.

Purpose

Gabbi works to bridge the gap between human-readable YAML files that represent HTTP requests and expected responses and the obscured realm of Python-based, object-oriented unit tests in the style of the unittest module and its derivatives.

Each YAML file represents an ordered list of HTTP requests along with the expected responses. This allows a single file to represent a process in the API being tested. For example:

  • Create a resource.
  • Retrieve a resource.
  • Delete a resource.
  • Retrieve a resource again to confirm it is gone.
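
That sequence might be sketched in gabbi's YAML roughly as follows, assuming a hypothetical /api/resources endpoint that returns a location header and a JSON body when a resource is created:

tests:
- name: create a resource
  POST: /api/resources            # hypothetical endpoint
  request_headers:
    content-type: application/json
  data:
    name: example
  status: 201

- name: retrieve the resource
  # $LOCATION is the location header of the previous response.
  GET: $LOCATION
  status: 200
  response_json_paths:
    $.name: example

- name: delete the resource
  # $HISTORY targets a named earlier test rather than just the previous one.
  DELETE: $HISTORY['create a resource'].$LOCATION
  status: 204

- name: confirm it is gone
  GET: $HISTORY['create a resource'].$LOCATION
  status: 404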

At the same time it is still possible to ask gabbi to run just one request. If it is in a sequence of tests, those tests prior to it in the YAML file will be run (in order). In any single process any test will only be run once. Concurrency is handled such that one file runs in one process.

These features mean that it is possible to create tests that are useful for both humans (as tools for improving and developing APIs) and automated CI systems.

Testing and Developing Gabbi

To get started, after cloning the repository, you should install the development dependencies:

$ pip install -r requirements-dev.txt

If you prefer to keep things isolated you can create a virtual environment:

$ virtualenv gabbi-venv
$ . gabbi-venv/bin/activate
$ pip install -r requirements-dev.txt

Gabbi is set up to be developed and tested using tox (installed via requirements-dev.txt). To run the built-in tests (the YAML files are in the directories gabbi/tests/gabbits_* and are loaded by the gabbi/test_*.py files), you call tox:

tox -epep8,py37

If you have the dependencies installed (or a warmed up virtualenv) you can run the tests by hand and exit on the first failure:

python -m subunit.run discover -f gabbi | subunit2pyunit

Testing can be limited to individual modules by specifying them after the tox invocation:

tox -epep8,py37 -- test_driver test_handlers

If you wish to avoid running tests that connect to internet hosts, set GABBI_SKIP_NETWORK to True.

Comments
  • Coerce JSON types into correct values for later $RESPONSE replacements

    Resolves #147: $RESPONSE replacements that contained integer or decimal values would wind up as quoted strings after substitution, e.g. {"id": 825} would later become {"id": "825"}.

    This passes tox -epep8, but with tox -epy27 the first few tests seem to fail, unfortunately. The patch works for my use case, however: I could not POST data to particular endpoints of an API because the number values were wrapped in quotes (and the API was doing type checks on the values).

    This may not be an ideal (or elegant) solution, but I've also tried to keep performance in mind given large JSON response bodies: I expect the exception cases to be more common than exceptional, so I've added some additional checking to see whether it's really worth trying to parse particular string values (line no. 359) and whether a value even looks like a number in the first place (line no. 376). That said, if this performance hit is not an issue, it certainly is a lot more readable without the checks.

    I also chose to do two try/excepts instead of simply using float: we first try parsing as int and then as float, as I prefer the resulting JSON to be correct, i.e. I would rather an id field that was initially an int not be cast into a float. For example, consider a response of {"id": 825} and a single try/except that used float. The value would parse, but the resulting JSON (from json.dumps) would be {"id": 825.0}. Pragmatically this doesn't matter much, since most endpoints will accept a decimal value with an appended .0 as a valid integer, but the semantics would be a surprise to other users of the lib, and certain APIs might still have an issue with it.

    And thanks for all the effort you've put into the lib!

    opened by justanotherdot 20
  • Unable to make a relative Content Handler import from the command-line

    On the command line, importing a custom Response Handler using a relative path requires manipulation of the PYTHONPATH environment variable to add . to the list of paths.

    Should Gabbi allow relative imports to work out-of-the-box?

    e.g.

    gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... fails with, ModuleNotFoundError: No module named 'foo'.

    Updating PYTHONPATH...

    PYTHONPATH=${PYTHONPATH}:. gabbi-run -r foo.bar:ExampleHandler < example.yaml
    

    ... works.

    opened by scottwallacesh 17
  • Allow to load python object from yaml

    It can be useful to write a custom object to compare values.

    For example, I need to ensure an output is equal to .NAN.

    Because .NAN == .NAN always returns false, we currently can't compare it with assert_equals().

    With the unsafe YAML loader we can register a custom constructor to check for NaN, for example:

    import numpy
    import yaml

    class IsNAN(object):
        @classmethod
        def constructor(cls, loader, node):
            # Called by the YAML loader whenever the !ISNAN tag is seen.
            return cls()

        def __eq__(self, other):
            # Equality means "the compared value is NaN".
            return numpy.isnan(other)

    yaml.add_constructor(u'!ISNAN', IsNAN.constructor)
    
    opened by sileht 17
  • extra verbosity to include request/response bodies

    Currently it can be somewhat tricky to debug unexpected outcomes, as verbose: true only prints headers.

    In my case, I wanted to verify that a CSRF token was included in a form submission. The simplest way to check the request body was to start netcat and change my test's URL to http://localhost:9999.

    It would be useful if gabbi provided a way to inspect the entire data being sent over the wire.

    opened by FND 15
  • Ability to run gabbi test cases individually

    The ability to run an individual gabbi test without any of the tests preceding it in the yaml file could be useful. I created a project where I drive gabbi test cases using Robot Framework (https://github.com/dkt26111/robotframework-gabbilibrary). In order for that to work I explicitly set the prior field of the gabbi test case being run to None.

    opened by dkt26111 14
  • Verbose misses response body

    I have got the following test spec:

    tests:
    -   name: auth
        verbose: all
        url: /api/sessions
        method: POST
        data: "asdsad"
        status: 200
    

    in which data is deliberately not proper JSON. The response results in a 400 instead of the expected 200. Verbose is set to all, but it still does not print the response body, although it detects non-empty content:

    ... #### auth ####
    > POST http://localhost:7000/api/sessions
    > user-agent: gabbi/1.40.0 (Python urllib3)
    
    < 400 Bad Request
    < content-length: 48
    < date: Wed, 19 Aug 2020 06:11:24 GMT
    
    ✗ gabbi-runner.input_auth
    
    FAIL: gabbi-runner.input_auth
            Traceback (most recent call last):
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 94, in wrapper
                func(self)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 143, in test_request
                self._run_test()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 550, in _run_test
                self._assert_response()
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 188, in _assert_response
                self._test_status(self.test_data['status'], self.response['status'])
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 591, in _test_status
                self.assert_in_or_print_output(observed_status, statii)
              File "/usr/lib/python3/dist-packages/gabbi/case.py", line 654, in assert_in_or_print_output
                self.assertIn(expected, iterable)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 417, in assertIn
                self.assertThat(haystack, Contains(needle), message)
              File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 498, in assertThat
                raise mismatch_error
            testtools.matchers._impl.MismatchError: '400' not in ['200']
    ----------------------------------------------------------------------
    

    The expected behavior:

    • the response body is printed to stdout alongside the response headers.
    opened by avkonst 11
  • Q: Persist (across > 1 tests) value to variable based on JSON response?

    Example algorithm to be expressed in YAML:

    • Create an object given an object name.
    • Store in "$FOO" (or similar) the UUID given to object per JSON response.
    • Do unrelated tests.
    • Perform a GET using "$FOO" (the UUID)
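
    Gabbi's $HISTORY and $RESPONSE substitutions (they also appear in a later comment in this list) can express roughly this; a sketch, assuming the create call returns the UUID in a uuid field of its JSON body:

    tests:
    - name: create the object
      POST: /api/objects            # hypothetical endpoint
      request_headers:
        content-type: application/json
      data:
        name: example
      status: 201

    # ... unrelated tests can run in between ...

    - name: fetch the object by its uuid
      GET: /api/objects/$HISTORY['create the object'].$RESPONSE['$.uuid']
      status: 200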

    Thanks as always!

    enhancement 
    opened by josdotso 11
  • Variable is not replaced with the previous result in a request body

    Seen from https://gabbi.readthedocs.io/en/latest/format.html#any-previous-test

    There are 2 requests:

    1. POST a task, which will return a taskId
    2. query the task with the taskId

    • the previous test returns: {"dataSet": {"header": {"serverIp": "xxx.xxx.xxx.xxx", "version": "1.0", "errorKeys": [{"error_key" : "2-0-0"}], "errorInfo": "", "returnCode": 0}, "data": {"taskId": "3008929"}}}

    • the YAML defines: data: taskId: $HISTORY['start live migrate a vm'].$RESPONSE['$.dataSet.data.taskId']

    • the actual result is: "data": { "taskId": "$.dataSet.data.taskId" }

    opened by taget 9
  • jsonhandler: allow reading yaml data from disk

    This commit changes the jsonhandler so that it can read data from disk when the file is a YAML file.
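
    For context, gabbi already supports loading a request body from disk with the <@ prefix; a sketch of the sort of test this change targets (the endpoint and filename are illustrative):

    tests:
    - name: create from data stored on disk
      POST: /api/resources
      request_headers:
        content-type: application/json
      data: <@resource-fixture.yaml
      status: 201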

    Note:

    • Simply replacing the loads call with yaml.safe_load is not enough due to the nature of the NaN checker requiring an unsafe load[1].

    closes #253

    [1] https://github.com/cdent/gabbi/commit/98adca65e05b7de4f1ab2bf90ab0521af5030f35

    opened by trevormccasland 9
  • pytest not working correctly!

    Hi, I have been trying gabbi to write some simple tests and had success when using gabbi-run, but I need a Jenkins report so I tried the py.test version, with the loader code looking like this:

    import os
    
    from gabbi import driver
    
    # By convention the YAML files are put in a directory named
    # "gabbits" that is in the same directory as the Python test file. 
    TESTS_DIR = 'gabbits'
    
    def test_gabbits():
        test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
        test_generator = driver.py_test_generator(
            test_dir, host="http://www.nfdsfdsfdsf.se", port=80)
    
        for test in test_generator:
            yield test
    

    The yaml-file looks very simple:

    tests:
      - name: Do get to a faulty site
        url: /sdsdsad
        method: GET
        status: 200
    

    The problem is that the test passes even though the URL does not exist, so it should fail with a connection refused. I have also tried with a site returning 404, but the test still passes. Am I doing something wrong here?

    opened by keyhan 9
  • Add yaml-based tests for host header and sni checking

    The addition of server_hostname to the httpclient PoolManager, without sufficient testing, has revealed some issues:

    • The minimum urllib3 required is too low. server_hostname was introduced in 1.24.x
    • However, there is a bug [1] in PoolManager when mixing schemes in the same pool manager. This is being fixed so the minimum urllib3 will need to be higher still.

    Tests are added here, and the minimum value for urllib3 will be set when a release is made.

    Some of the tests are "live" meaning they require network, and can be skipped via the live test fixture if the GABBI_SKIP_NETWORK env variable is set to "true".

    [1] https://github.com/urllib3/urllib3/issues/2534

    Fixes #307 Fixes #309

    opened by cdent 8
  • gabbi doesn't support client cert yet

    Gabbi doesn't support client cert yet

    It would help if gabbi could support: gabbi-run ... --cacert /etc/kubernetes/pki/ca.crt --cert /etc/kubernetes/pki/client.crt --key /etc/kubernetes/pki/client.key ...

    opened by wu-wenxiang 4
  • Socket leak with large YAML test files

    I have a YAML file with nearly 2000 tests in it. When invoked from the command line, I run out of open file handles due to the large number of sockets left open:

    ERROR: gabbi-runner.input_/foo/bar/__test_l__
    	[Errno 24] Too many open files
    

    By default a Linux user has 1024 file handles:

    $ ulimit -n
    1024
    

    Inspecting the open file handles:

    $ ls -l /proc/$(pgrep gabbi-run)/fd | awk '{print $NF}' | cut -f1 -d: | sort | uniq -c
          1 0
          2 /path/to/a/file.txt
          1 /path/to/another/file.yaml
       1021 socket
    
    opened by scottwallacesh 3
  • Consider per-suite pre & post executables

    Like fixtures, but a call to an external executable, for when gabbi-run is being used.

    This could be explicit, by putting something in the yaml file, or implicit off the name of the yaml file. That is:

    • if gabbit is foo.yaml
    • if foo-start and foo-end exist in the same dir and are executable

    Either way, when the start executable is called, gabbi should save, as a list, the line-separated stdout (if any) that it produced,

    and provide that as args (or stdin?) to foo-end.

    This would allow passing things like the pids of started processes.

    /cc @FND for sanity check

    enhancement 
    opened by cdent 7
  • some fixtures that "capture" no longer work with the removal of testtools

    In https://github.com/cdent/gabbi/pull/279 testtools was removed.

    Fixtures in the openstack community that do output capturing rely on some "end of test" handling in testtools to dump the accumulated data. You can see this by trying a LOG.critical("hi") somewhere in the (e.g.) placement code and causing a test to fail. Dropping to gabbi <2 makes it work again.

    We're definitely not going to add testtools back in, but the test case subclass in gabbi itself may be able to handle the data gathering that's required. Some investigation required.

    /cc @lbragstad for awareness

    opened by cdent 0
  • Faster tests development with gold files

    There is a cool method to speed up development of tests. It would be great if gabbi supported it too.

    Here is the idea:

    1. a test defines that a response should be compared with a gold file (the reference to the gold file can be configured per test)

    2. gabbi runs tests with a new flag 'generate-gold-files', which forces gabbi to capture response bodies and headers and (re-)write gold files containing the captured response data

    3. developer reviews the gold files (usually happens one by one as tests are added one by one during development)

    4. gabbi runs tests as usual

      a) if a test has a reference to a gold file, gabbi captures the actual response output and compares it with the gold file
      b) if the content of the actual output matches the gold file content, verification is considered passed
      c) otherwise the test fails

    This would allow me to reduce the size of my test files by at least half.

    opened by avkonst 3
  • test files with - in the name can lead to failing tests when looking for content-type

    Bear with me, this is hard to explain

    Python v 3.6.9

    gabbi: 1.49.0

    A test file named device-types.yaml with a test of:

    tests:                                                                          
    - name: get only 405                                                            
      POST: /device-types                                                           
      status: 405    
    

    errors with the following when run in a unittest-style harness:

        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 68, in action'
        b'    response_value = str(response[header])'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/urllib3/_collections.py", line 156, in __getitem__'
        b'    val = self._container[key.lower()]'
        b"KeyError: 'content-type'"
        b''
        b'During handling of the above exception, another exception occurred:'
        b''
        b'Traceback (most recent call last):'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/suitemaker.py", line 96, in do_test'
        b'    return test_method(*args, **kwargs)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 95, in wrapper'
        b'    func(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 149, in test_request'
        b'    self._run_test()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 556, in _run_test'
        b'    self._assert_response()'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/case.py", line 196, in _assert_response'
        b'    handler(self)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/base.py", line 54, in __call__'
        b'    self.action(test, item, value=value)'
        b'  File "/home/cdent/.uhana/lib/python3.6/site-packages/gabbi/handlers/core.py", line 72, in action'
        b'    header, response.keys()))'
        b"AssertionError: 'content-type' header not present in response: KeysView(HTTPHeaderDict({'Vary': 'Origin', 'Date': 'Tue, 24 Mar 2020 14:17:33 GMT', 'Content-Length': '0', 'status': '405', 'reason': 'Method Not Allowed'}))"
        b''
    

    However, rename the file to foo.yaml and the test works, or run the device-types.yaml file with gabbi-run and the tests work. Presumably something about test naming.

    So the short-term workaround is to rename the file, but this needs to be fixed because using - in filenames is idiomatic for gabbi.

    opened by cdent 1
Releases
  • 2.3.0 (Sep 3, 2021)

    • For the $ENVIRON and $RESPONSE substitutions it is now possible to cast the value to a type of int, float, str, or bool.
    • The JSONHandler is now more strict about how it detects that a body content is JSON, avoiding some errors where the content-type header suggests JSON but the content cannot be decoded as such.
    • Better error message when content cannot be decoded.
    • Addition of the disable_response_handler test setting for those cases when the test author has no control over the content-type header and it is wrong.