
drf-spectacular


Sane and flexible OpenAPI 3.0 schema generation for Django REST framework.

This project has 3 goals:
  1. Extract as much schema information from DRF as possible.
  2. Provide flexibility to make the schema usable in the real world (not only toy examples).
  3. Generate a schema that works well with the most popular client generators.

The code is a heavily modified fork of the DRF OpenAPI generator, which lacked all of the features listed below.

Features
  • Serializers modelled as components (arbitrary nesting and recursion supported)
  • @extend_schema decorator for customization of APIView, Viewsets, function-based views, and @action
    • additional parameters
    • request/response serializer override (with status codes)
    • polymorphic responses, either manually with the PolymorphicProxySerializer helper or via rest_polymorphic's PolymorphicSerializer
    • ... and more customization options
  • Authentication support (DRF natives included, easily extendable)
  • Custom serializer class support (easily extendable)
  • SerializerMethodField() type via type hinting or @extend_schema_field (see the example below this list)
  • i18n support
  • Tags extraction
  • Request/response/parameter examples
  • Description extraction from docstrings
  • Sane fallbacks
  • Sane operation_id naming (based on path)
  • Schema serving with SpectacularAPIView (Redoc and Swagger-UI views are also available)
  • Optional input/output serializer component split
  • Included support for a number of popular third-party packages (see the documentation for the full list)
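
For instance, the SerializerMethodField() typing mentioned above can be declared either with a plain return type hint or with the @extend_schema_field decorator. A minimal sketch, with hypothetical album fields:

from rest_framework import serializers
from drf_spectacular.utils import extend_schema_field
from drf_spectacular.types import OpenApiTypes

class AlbumSerializer(serializers.Serializer):
    duration = serializers.SerializerMethodField()
    released = serializers.SerializerMethodField()

    def get_duration(self, obj) -> int:
        # the return type hint is introspected and mapped to an integer field
        return obj.duration_in_seconds

    @extend_schema_field(OpenApiTypes.DATE)
    def get_released(self, obj):
        # the decorator declares the schema type explicitly
        return obj.release_date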

For more information visit the documentation.

License

Provided by T. Franzel, Cashlink Technologies GmbH. Licensed under 3-Clause BSD.

Requirements

  • Python >= 3.6
  • Django (2.2, 3.1, 3.2)
  • Django REST Framework (3.10, 3.11, 3.12)

Installation

Install using pip...

$ pip install drf-spectacular

then add drf-spectacular to installed apps in settings.py

INSTALLED_APPS = [
    # ALL YOUR APPS
    'drf_spectacular',
]

and finally register our spectacular AutoSchema with DRF.

REST_FRAMEWORK = {
    # YOUR SETTINGS
    'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema',
}
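
If you would rather not switch the project-wide default, DRF also allows setting the schema class on a single view; a minimal sketch:

from drf_spectacular.openapi import AutoSchema
from rest_framework.views import APIView

class CustomView(APIView):
    # opt only this view into drf-spectacular's introspection
    schema = AutoSchema()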

drf-spectacular ships with sane default settings that should work reasonably well out of the box. It is not necessary to specify any settings, but we recommend specifying at least some metadata.

SPECTACULAR_SETTINGS = {
    'TITLE': 'Your Project API',
    'DESCRIPTION': 'Your project description',
    'VERSION': '1.0.0',
    # OTHER SETTINGS
}

Release management

drf-spectacular deliberately stays below version 1.x.x to signal that every new version may potentially break you. For production we strongly recommend pinning the version and inspecting a schema diff on update.

With that said, we aim to be extremely defensive w.r.t. breaking API changes. However, we also acknowledge the fact that even slight schema changes may break your toolchain, as any existing bug may somehow also be used as a feature.

We define version increments with the following semantics. y-stream increments may contain potentially breaking changes to both API and schema. z-stream increments will never break the API and may only contain schema changes that should have a low chance of breaking you.

Take it for a spin

Generate your schema with the CLI:

$ ./manage.py spectacular --file schema.yml
$ docker run -p 80:8080 -e SWAGGER_JSON=/schema.yml -v ${PWD}/schema.yml:/schema.yml swaggerapi/swagger-ui
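
If you also want to validate your schema while it is generated, add the --validate flag:

$ ./manage.py spectacular --file schema.yml --validate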

Alternatively, serve your schema directly from your API. We also provide convenience wrappers for Swagger UI and Redoc.

from django.urls import path
from drf_spectacular.views import SpectacularAPIView, SpectacularRedocView, SpectacularSwaggerView

urlpatterns = [
    # YOUR PATTERNS
    path('api/schema/', SpectacularAPIView.as_view(), name='schema'),
    # Optional UI:
    path('api/schema/swagger-ui/', SpectacularSwaggerView.as_view(url_name='schema'), name='swagger-ui'),
    path('api/schema/redoc/', SpectacularRedocView.as_view(url_name='schema'), name='redoc'),
]

Usage

drf-spectacular works pretty well out of the box. You might also want to set some metadata for your API. Just create a SPECTACULAR_SETTINGS dictionary in your settings.py and override the defaults. Have a look at the available settings.
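
For example, a sketch that overrides two of the defaults (both settings exist at the time of writing, but consult the settings documentation for the authoritative list):

SPECTACULAR_SETTINGS = {
    # split request and response bodies into separate components (see feature list)
    'COMPONENT_SPLIT_REQUEST': True,
    # regex matching the common path prefix; used for tag extraction
    'SCHEMA_PATH_PREFIX': r'/api/v[0-9]',
}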

Do the toy examples not cover your use case? No problem, you can heavily customize how your schema is rendered.

Customization by using @extend_schema

Most customization cases should be covered by the extend_schema decorator. We usually get pretty far with specifying OpenApiParameter and splitting request/response serializers, but the sky is the limit.

from drf_spectacular.utils import extend_schema, OpenApiParameter, OpenApiExample
from drf_spectacular.types import OpenApiTypes
from rest_framework import viewsets
from rest_framework.decorators import action

class AlbumViewset(viewsets.ModelViewSet):
    serializer_class = AlbumSerializer

    @extend_schema(
        request=AlbumCreationSerializer,
        responses={201: AlbumSerializer},
    )
    def create(self, request):
        # your non-standard behaviour
        return super().create(request)

    @extend_schema(
        # extra parameters added to the schema
        parameters=[
            OpenApiParameter(name='artist', description='Filter by artist', required=False, type=str),
            OpenApiParameter(
                name='release',
                type=OpenApiTypes.DATE,
                location=OpenApiParameter.QUERY,
                description='Filter by release date',
                examples=[
                    OpenApiExample(
                        'Example 1',
                        summary='short optional summary',
                        description='longer description',
                        value='1993-08-23'
                    ),
                    ...
                ],
            ),
        ],
        # override default docstring extraction
        description='More descriptive text',
        # provide Authentication class that deviates from the views default
        auth=None,
        # change the auto-generated operation name
        operation_id=None,
        # or even completely override what AutoSchema would generate. Provide raw Open API spec as Dict.
        operation=None,
        # attach request/response examples to the operation.
        examples=[
            OpenApiExample(
                'Example 1',
                description='longer description',
                value=...
            ),
            ...
        ],
    )
    def list(self, request):
        # your non-standard behaviour
        return super().list(request)

    @extend_schema(
        request=AlbumLikeSerializer,
        responses={204: None},
        methods=["POST"]
    )
    @extend_schema(description='Override a specific method', methods=["GET"])
    @action(detail=True, methods=['post', 'get'])
    def set_password(self, request, pk=None):
        # your action behaviour
        ...

More customization

Still not satisfied? You want more! We've still got you covered. Visit customization for more information.
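
As a taste of what is possible, the polymorphic responses from the feature list can be declared manually with PolymorphicProxySerializer. A minimal sketch, where the two person serializers are hypothetical:

from drf_spectacular.utils import extend_schema, PolymorphicProxySerializer
from rest_framework import viewsets

class PersonViewSet(viewsets.GenericViewSet):
    @extend_schema(
        responses=PolymorphicProxySerializer(
            component_name='Person',
            # hypothetical serializers; the field named below discriminates between them
            serializers=[LegalPersonSerializer, NaturalPersonSerializer],
            resource_type_field_name='type',
        )
    )
    def retrieve(self, request, pk=None):
        ...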

Testing

Install testing requirements.

$ pip install -r requirements.txt

Run with runtests.

$ ./runtests.py

You can also use the excellent tox testing tool to run the tests against all supported versions of Python and Django. Install tox globally, and then simply run:

$ tox