Collections of pydantic models

Overview

pydantic-collections


The pydantic-collections package provides the BaseCollectionModel class, which lets you manipulate collections of pydantic models (and any other types supported by pydantic).

Requirements

  • Python >= 3.7
  • pydantic >= 1.8.2

Installation

pip install pydantic-collections

Usage

Basic usage

from datetime import datetime

from pydantic import BaseModel
from pydantic_collections import BaseCollectionModel


class User(BaseModel):
    id: int
    name: str
    birth_date: datetime


class UserCollection(BaseCollectionModel[User]):
    pass


user_data = [
    {'id': 1, 'name': 'Bender', 'birth_date': '2010-04-01T12:59:59'},
    {'id': 2, 'name': 'Balaganov', 'birth_date': '2020-04-01T12:59:59'},
]

users = UserCollection(user_data)
print(users)
#> UserCollection([User(id=1, name='Bender', birth_date=datetime.datetime(2010, 4, 1, 12, 59, 59)), User(id=2, name='Balaganov', birth_date=datetime.datetime(2020, 4, 1, 12, 59, 59))])
print(users.dict())
#> [{'id': 1, 'name': 'Bender', 'birth_date': datetime.datetime(2010, 4, 1, 12, 59, 59)}, {'id': 2, 'name': 'Balaganov', 'birth_date': datetime.datetime(2020, 4, 1, 12, 59, 59)}]
print(users.json())
#> [{"id": 1, "name": "Bender", "birth_date": "2010-04-01T12:59:59"}, {"id": 2, "name": "Balaganov", "birth_date": "2020-04-01T12:59:59"}]

Strict assignment validation

By default, BaseCollectionModel has a strict assignment check:

...
users = UserCollection()
users.append(User(id=1, name='Bender', birth_date=datetime.utcnow()))  # OK
users.append({'id': 1, 'name': 'Bender', 'birth_date': '2010-04-01T12:59:59'})
#> pydantic.error_wrappers.ValidationError: 1 validation error for UserCollection
#> __root__ -> 2
#>  instance of User expected (type=type_error.arbitrary_type; expected_arbitrary_type=User)

This behavior can be changed via the model Config:

...
class UserCollection(BaseCollectionModel[User]):
    class Config:
        validate_assignment_strict = False
        
users = UserCollection()
users.append({'id': 1, 'name': 'Bender', 'birth_date': '2010-04-01T12:59:59'})  # OK
assert users[0].__class__ is User
assert users[0].id == 1

Using as a model field

BaseCollectionModel is a subclass of BaseModel, so you can use it as a model field:

...
class UserContainer(BaseModel):
    users: UserCollection = []
        
data = {
    'users': [
        {'id': 1, 'name': 'Bender', 'birth_date': '2010-04-01T12:59:59'},
        {'id': 2, 'name': 'Balaganov', 'birth_date': '2020-04-01T12:59:59'},
    ]
}

container = UserContainer(**data)
container.users.append(User(...))
...
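
Because UserContainer is an ordinary pydantic model, serializing it also serializes the nested collection. A brief sketch, assuming only the standard pydantic v1 dict()/json() methods shown earlier:

# Serializing the container serializes the nested UserCollection as well.
print(container.dict())   # nested users rendered as a list of dicts
print(container.json())   # JSON string including the nested users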

Comments
  • Bug: dict() method ignores or raises an exception when using dict() arguments (e.g. include, exclude)

    Hi there, I tried to use the dict() method but I got an error: KeyError('__root__'). Here is an example:

    1. Model structure:
    
    from datetime import datetime, time
    from typing import Optional, Union
    from pydantic import Field, validator, BaseModel
    from pydantic_collections import BaseCollectionModel
    
    class OpeningTime(BaseModel):
        weekday: int = Field(..., alias="weekday")
        day: Optional[str] = Field(alias="day")  # NB: keep it after the weekday attribute
        from_time: Optional[time] = Field(alias="fromTime")
        to_time: Optional[time] = Field(alias="toTime")
    
        @validator("day", pre=True)
        def generate_weekday(cls, weekday: str, values) -> str:
            if weekday is None or len(weekday) == 0:
                return WEEKDAYS[str(values["weekday"])]  # WEEKDAYS mapping is defined elsewhere in the reporter's code
            return weekday
    
    
    
    class OpeningTimes(BaseCollectionModel[OpeningTime]):
        pass
    
    
    class PaymentMethod(BaseModel):
        type: str = Field(..., alias="type")
        card_type: str = Field(..., alias="cardType")
    
    
    class PaymentMethods(BaseCollectionModel[PaymentMethod]):
        pass
    
    
    class FuelType(BaseModel):
        type: str = Field(..., alias="Fuel")
    
    
    class FuelTypes(BaseCollectionModel[FuelType]):
        pass
    
    
    class AdditionalInfoStation(BaseModel):
        opening_times: Optional[OpeningTimes] = Field(alias="openingTimes")
        car_wash_opening_times: Optional[OpeningTimes] = Field(alias="openingTimesCarWash")
        payment_methods: PaymentMethods = Field(..., alias="paymentMethods")
        fuel_types: FuelTypes = Field(..., alias="fuelTypes")
    
    
    class Example(BaseModel):
        hash_key: int = Field(..., alias="hashKey")
        range_key: str = Field(..., alias="rangeKey")
        location_id: str = Field(..., alias="locationId")
        name: str = Field(..., alias="name")
        street: str = Field(..., alias="street")
        address_number: str = Field(..., alias="addressNumber")
        zip_code: int = Field(..., alias="zipCode")
        city: str = Field(..., alias="city")
        region: str = Field(..., alias="region")
        country: str = Field(..., alias="country")
        additional_info: Union[AdditionalInfoStation] = Field(..., alias="additionalInfo")
    
    
    class ExampleList(BaseCollectionModel[Example]):
        pass
    
    2. Imagine there is a populated ExampleList object and you need to filter fields when calling the dict() method:
    example_list: ExampleList = ExampleList.parse_obj([{......}])

    # This instruction raises an exception
    example_list.dict(by_alias=True, include={"hash_key", "range_key"})
    
    3. The last instruction raises an error: KeyError('__root__')

    My env is:

    • pydantic==1.9.1
    • pydantic-collections==0.2.0
    • python version 3.9.7

    If you need more info please contact me.

    opened by aferrari94
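
A possible workaround, offered here as an assumption rather than an official fix, is to call dict() on each element of the collection instead of on the collection itself, since include/exclude are defined per model:

# Hypothetical workaround: serialize each element individually so that
# include/exclude apply to each model's fields rather than to the collection's __root__.
filtered = [
    item.dict(by_alias=True, include={'hash_key', 'range_key'})
    for item in example_list
]
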
Releases: v0.4.0
Owner: Roman Snegirev