Poetry PEP 517 Build Backend & Core Utilities

Overview

Poetry Core


A PEP 517 build backend implementation developed for Poetry. This project is intended to be a lightweight, fully compliant, self-contained package that allows PEP 517 compatible build frontends to build Poetry-managed projects.

Usage

In most cases, the usage of this package is transparent to the end user, as it is used either by Poetry itself or by a PEP 517 frontend (e.g. pip).

In order to enable the use of poetry-core as your build backend, the following snippet must be present in your project's pyproject.toml file:

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

Once this is present, a PEP 517 frontend like pip can build and install your project from source without needing Poetry or any of its dependencies.

# install to current environment
pip install /path/to/poetry/managed/project

# build a wheel package
pip wheel /path/to/poetry/managed/project

Why is this required?

Prior to the release of version 1.1.0, Poetry was built as a project management tool that included a PEP 517 build backend. This was inefficient and time-consuming in the majority of cases where a PEP 517 build was required. For example, both pip and tox (with isolated builds) would install Poetry and all the dependencies it required. Most of these dependencies are not needed when the objective is simply to build either a source or binary distribution of your project.

In order to improve this situation, poetry-core was created. Shared functionality pertaining to PEP 517 build backends, including reading lock files and pyproject.toml and building wheels/sdists, was implemented in this package. This makes PEP 517 builds extremely fast for Poetry-managed packages.

Comments
  • Introduce support for relative include of non-package modules ("workspaces")

    Introduce support for relative include of non-package modules ("workspaces")

    The purpose of the changes here is to enable Workspace support. A workspace is a place for code and projects. Within the workspace, code can be shared. A workspace is usually at the root of your repository.

    To identify a workspace in a Python repo, an empty workspace.toml file is put at the top of the workspace. Future plugins that extend workspaces could use that file to store configuration and settings.

    The feature in this pull request will make this plugin redundant 😄 (I am the author of that plugin)

    Why workspaces? A workspace can contain more than one project. Different projects will likely use the same code. A very simplistic example would be a logger. To avoid code duplication, code could be moved out from the project into packages, and each project can reference the package from the project specific pyproject.toml file.

    This requires that Poetry allows package includes (note the difference from dependencies) that are "outside" of the project path, but within a workspace. That's what this pull request will do.

    An example & simplified tree workspace structure (note the namespacing for shared package includes):

    projects/
      my_app/
        pyproject.toml (including a shared package)
    
      my_service/
        pyproject.toml (including other shared packages)
    
    shared/
      my_namespace/
        my_package/
          __init__.py
          code.py
    
        my_other_package/
          __init__.py
          code.py
    
    workspace.toml (a file that tells the plugin where to find the workspace root)
    

    I think this feature resolves the issues raised in: https://github.com/python-poetry/poetry/issues/936 and probably also https://github.com/python-poetry/poetry/issues/2270

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.
    opened by DavidVujic 36
  • Added script file feature

    Added script file feature

    Resolves: python-poetry#241

    • [x] Added tests for changed code.
    • [x] Updated documentation for changed code.

    Feature: ability to add script files to distribution packages

    This one is a long overdue, see: https://github.com/sdispater/poetry/issues/241 (June 2018)

    The implementation is based on the official specification by @abn https://github.com/python-poetry/poetry/issues/2310

    This is a very basic feature of setup.py (see the setuptools docs).

    Note: this is not the same as entry points! This setuptools feature basically just copies the given files over to the virtualenv's bin folder so that they'll be available on the $PATH and you can simply invoke them.
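    A minimal sketch of what that copy step could look like at install time (paths and names here are illustrative, not poetry-core's actual implementation):

```python
import shutil
import stat
import tempfile
from pathlib import Path

# Sketch of the "file" script type: copy the file into the environment's
# bin/ directory and mark it executable.
with tempfile.TemporaryDirectory() as env:
    bin_dir = Path(env, "bin")
    bin_dir.mkdir()

    src = Path(env, "run_db_migrations.sh")
    src.write_text("#!/bin/sh\necho migrating\n")

    dest = bin_dir / src.name
    shutil.copy(src, dest)
    dest.chmod(dest.stat().st_mode | stat.S_IEXEC)  # owner-executable

    installed = sorted(p.name for p in bin_dir.iterdir())
    print(installed)
```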

    Example

    [tool.poetry]
    name = "my_package"

    [tool.poetry.scripts]
    migration = { source = "bin/run_db_migrations.sh", type = "file" }
    setup_script = { source = "bin/run_setup_script.sh" } # type can be omitted!
    

    Then:

    poetry install
    poetry run sh run_db_migrations.sh
    poetry run sh run_setup_script.sh
    

    Testing

    • I added a couple of automated tests
    • I tested this manually with one of our repositories and it worked
    Feature 
    opened by peterdeme 36
  • Fix bug/crash when source dir is outside project dir

    Fix bug/crash when source dir is outside project dir

    Hello poetry team,

    here is a draft fix for issue python-poetry#6521. It does not currently include a test, as I am not sure how to write a test for it. I tried perusing the unit tests in tests/masonry/builders/test_builder.py and got a bit lost :sweat_smile: Any help or guidance on unit tests would be much appreciated

    Documentation... I'm not sure if that's necessary? The purpose of this change is to make poetry do the right thing, instead of failing unexpectedly... I will defer to your judgement on that.

    replace usage of pathlib.Path.relative_to with os.path.relpath, as per https://github.com/python-poetry/poetry/issues/5621

    Resolves: python-poetry#6521

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.
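    The difference between the two path APIs can be shown in a few lines (posixpath.relpath is used here for a platform-independent demo of what os.path.relpath does on POSIX; the paths are illustrative):

```python
import posixpath
from pathlib import PurePosixPath

project_dir = PurePosixPath("/repo/projects/my_app")
source_dir = PurePosixPath("/repo/shared/my_package")  # outside the project dir

# pathlib's relative_to() cannot walk upwards and raises ValueError:
try:
    source_dir.relative_to(project_dir)
    raised = False
except ValueError:
    raised = True

# os.path.relpath (posixpath.relpath here) produces a "../" path instead:
rel = posixpath.relpath(str(source_dir), str(project_dir))
print(raised, rel)  # True ../../shared/my_package
```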
    opened by alextremblay 16
  • Fixes unstable next breaking version when major is 0

    Fixes unstable next breaking version when major is 0

    Resolves: https://github.com/python-poetry/poetry/issues/6519

    • [x] Added tests for changed code.
    • [ ] Updated documentation for changed code.

    What this PR contains:

    • moves Version.stable to PEP440Version.stable, as Version.stable was breaking the precision half of the time.
    • adds a bunch of tests
    • fixes Version.next_breaking to allow for versions with major == 0

    Extra info

    It would be great to have a poetry build step that checks the requires_dist output against poetry to make sure the package is actually usable.
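    A sketch of caret-style "next breaking" semantics (illustrative only, not poetry-core's actual implementation): below 1.0.0, the minor release is the breaking boundary, and below 0.1.0 even the patch release is.

```python
def next_breaking(major: int, minor: int, patch: int) -> tuple[int, int, int]:
    """Smallest version considered a breaking change under caret semantics."""
    if major == 0:
        if minor == 0:
            return (0, 0, patch + 1)  # ^0.0.3 allows >=0.0.3 <0.0.4
        return (0, minor + 1, 0)      # ^0.1.3 allows >=0.1.3 <0.2.0
    return (major + 1, 0, 0)          # ^1.2.3 allows >=1.2.3 <2.0.0

print(next_breaking(0, 1, 3))  # (0, 2, 0)
print(next_breaking(1, 2, 3))  # (2, 0, 0)
```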

    opened by mazinesy 14
  • Allow appending git username / password to dependency

    Allow appending git username / password to dependency

    Resolves: https://github.com/python-poetry/poetry/issues/2062, https://github.com/python-poetry/poetry/issues/2348. Builds on top of #96.

    Changes in #96 (copied from here):

    This PR allows appending deployment keys to usernames as used by gitlab.

    Furthermore, a new property is_unsafe is introduced for ParsedUrl, which can be used by CLI commands like poetry add to easily report whether the git dependency contains a password.

    According to GitLab's docs, a + is allowed in usernames. This is fixed as well.
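    A minimal sketch of parsing the userinfo part of such a URL (the regex and the is_unsafe check are assumptions for this sketch, not ParsedUrl's actual code; note the "+" allowed in the username, as GitLab deploy tokens require):

```python
import re

# Userinfo before "@": username, optionally followed by ":password".
USERINFO = re.compile(r"^(?P<user>[A-Za-z0-9._+-]+)(?::(?P<password>[^@]+))?@")

url = "gitlab+deploy-token-123:s3cret@gitlab.example.com/group/repo.git"
m = USERINFO.match(url)

user = m.group("user")
is_unsafe = m.group("password") is not None  # plain-text password present
print(user, is_unsafe)
```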

    Fixes: python-poetry/poetry#2062

    (This was initially submitted to the poetry repository: python-poetry/poetry#2169)

    Changes I made on top of @finswimmer's work in #96 (rebased to abe9fe5):

    • Add tests for when x-oauth-basic GitHub URL is used in package link (https://github.com/python-poetry/poetry/issues/2348).

    • Add warning and log for when a dependency stores a password in plain text by using the is_unsafe property (based on discussion in https://github.com/python-poetry/poetry/pull/2169#issuecomment-705587436).

    • [x] Added tests for changed code.

    • [x] ~Updated documentation for changed code.~ NA

    Hoping this is at a good enough point to review and consider merging into main, to be included for a soon-ish release.

    opened by setu4993 14
  • Feature: Declare ext_modules and libraries in pyproject.toml

    Feature: Declare ext_modules and libraries in pyproject.toml

    PR summary

    A declarative approach to extensions using pyproject.toml. Eliminates the need for build.py in many cases.

    How to use

    Extension modules can now be declared in pyproject.toml:

    [tool.poetry.ext_modules.mymodule]
    sources = ["mymodule/mymodule.c"]

    The new logic will validate this config and use it to construct a distutils.extension.Extension object to be passed to distutils.setup().

    Config properties

    The new logic adds support for all options normally passed to distutils.extension.Extension(), and also for the libraries argument to distutils.setup(). The full list of supported config properties is:

    Extension modules [tool.poetry.ext_modules]

    • sources (required) - A list of source filenames or globs
    • include_dirs - A list of directories to search for C/C++ header files
    • define_macros - A list of macros to define; each macro is defined using a 2-tuple (name, value)
    • undef_macros - A list of macros to undefine explicitly
    • library_dirs - A list of directories to search for C/C++ libraries at link time
    • libraries - A list of library names (not filenames or paths) to link against
    • runtime_library_dirs - A list of directories to search for C/C++ libraries at run time
    • extra_objects - A list of paths or globs of extra files to link with
    • extra_compile_args - Any extra platform- and compiler-specific information to use when compiling the source files
    • extra_link_args - Any extra platform- and compiler-specific information to use when linking object files together
    • export_symbols - A list of symbols to be exported from a shared extension
    • depends - A list of paths or globs of files that the extension depends on
    • language - The extension language (i.e. 'c', 'c++', 'objc')
    • optional - Boolean, specifies that a build failure in the extension should not abort the build process

    C libraries [tool.poetry.libraries]

    • sources (required) - A list of source filenames or globs
    • include_dirs - A list of directories to search for C/C++ header files
    • macros - A list of macros to define; each macro is defined using a 2-tuple (name, value)

    Why?

    I wanted a tool that limits everything to a single configuration file for dependency management, packaging and publishing.

    By eliminating the need for build.py in many cases, this PR better accomplishes the stated intent of Poetry (reducing project complexity).

    What if I still want to use build.py?

    This doesn't stop you, it just gives you another option. If your build logic is too complex (e.g. moving files around), the declarative approach may not be sufficient.

    To promote clarity and to reduce the complexity of build logic, this PR doesn't allow you to mix both approaches.

    opened by bostonrwalker 12
  • Type checking

    Type checking

    Resolves: python-poetry#

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.

    More typechecking.

    Things that I am not entirely comfortable with:

    • returning empty strings from VCSDependency.reference() and VCSDependency.pretty_constraint()
    • https://github.com/python-poetry/poetry-core/blob/26e978d3b72ff58039ac921524b5ed47cb8b7353/src/poetry/core/packages/constraints/any_constraint.py#L25 looks like nonsense, what we surely want to return here is some sort of negation of other?
    • I've thrown in a few exceptions for places in constraints where cases simply weren't handled. The code would have failed anyway, I've just made it explicit: but still WIBNI these cases were handled
    opened by dimbleby 12
  • Added trusted repository option

    Added trusted repository option

    opened by maayanbar13 12
  • Explicitly print gitignored

    Explicitly print gitignored

    Resolves: python-poetry/poetry#6443

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.

    Didn't add tests as it's just a matter of calling the logger. I also fixed a typo, changing explicitely_<excluded | included> to explicitly_<excluded | included>.

    PR for https://github.com/python-poetry/poetry/issues/6443

    opened by evanrittenhouse 11
  • set up mypy hook for incremental adoption

    set up mypy hook for incremental adoption

    as per the comment here - https://github.com/python-poetry/poetry/pull/4510

    For some reason I had to use a different config to successfully exclude the _vendor directory.

    opened by danieleades 11
  • Allow non-existent path dependencies

    Allow non-existent path dependencies

    Currently poetry lock --no-update fails when you remove any path dependency, because it first loads all dependencies from the lock file (which still has a reference to the dependency you just deleted) and fails at that point: when DirectoryDependency is instantiated with a non-existent path, it immediately throws an exception.

    This change allows the process to continue, which is the same behavior VCS dependencies have.

    If the pyproject.toml has a path dependency that doesn't exist we will want to error for poetry install and poetry lock (poetry build with path dependencies is already kinda a no-go). poetry install already fails gracefully (if you lock the project then delete the path dependency and try to install; i.e. when no locking happens before we start installing), I opened https://github.com/python-poetry/poetry/pull/6844 to make poetry lock fail gracefully.

    opened by adriangb 10
  • performance fix for simplified marker simplifications

    performance fix for simplified marker simplifications

    While working on full support for overlapping markers, I encountered a combinatorial explosion when building a union of markers (see test case).

    The issue is that we introduced cnf without introducing a counterpart of union_simplify, which leads to unnecessarily big markers. Therefore, I added intersect_simplify, analogous to union_simplify.

    Further, the simplifications in #530 went a bit too far in removing the common_markers/unique_markers simplification in union_simplify. I reverted this removal and added analogous functionality to intersect_simplify.

    Comparing the number of items in itertools.product in dnf of union (after cnf) shows the benefit of the simplifications:

    | union (see test case) | number and length of markers in conjunction after cnf | number of items in itertools.product in dnf |
    |---|---|---|
    | without PR | 31 (1-4) | > 2**44 |
    | with simple intersect_simplify | 9 (1-2) | 256 |
    | with revival of common_markers/unique_markers simplification | 3 (1-2) | 4 |
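    Why the cnf-to-dnf conversion explodes without simplification can be sketched in a few lines (a toy example, not poetry-core's marker code): each clause of the conjunctive normal form contributes a factor to the cartesian product, so the number of dnf terms grows exponentially with the number of clauses.

```python
from itertools import product

# Each inner list is one clause of a CNF; the DNF has one term per
# element of the cartesian product of the clauses.
cnf = [["a", "b"], ["c", "d"], ["e", "f"]]
dnf_terms = list(product(*cnf))
print(len(dnf_terms))  # 2 * 2 * 2 = 8; with 31 clauses of up to 4
                       # literals the product exceeds 2**44 terms
```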

    opened by radoering 1
  • markers: fix `get_python_constraint_by_marker()` for multi markers and marker unions without `python_version`

    markers: fix `get_python_constraint_by_marker()` for multi markers and marker unions without `python_version`

    Actually the same as #307, which only covered single markers.

    The actual fix is in only() of MultiMarker and MarkerUnion. The test shows that get_python_constraint_by_marker() is fixed.

    There were some test cases testing the faulty behaviour of only(). I not only changed the expectation (which would have been "" in most cases) but also changed the input in some cases, to cover some more interesting cases.

    • [x] Added tests for changed code.
    • [ ] Updated documentation for changed code.
    opened by radoering 1
  • builders/wheel: Ensure dist-info is written deterministically

    builders/wheel: Ensure dist-info is written deterministically

    glob() returns values in "on disk" order. To make the RECORD file deterministic and consistent between builds, we need to sort the data before adding it to the records list.
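    The fix boils down to sorting the glob results before they are recorded; a small demonstration (filenames here are illustrative):

```python
import tempfile
from pathlib import Path

# glob() order depends on the filesystem; sorting the results makes
# the RECORD entries deterministic across builds and machines.
with tempfile.TemporaryDirectory() as d:
    for name in ("b.py", "a.py", "c.py"):
        Path(d, name).touch()

    record_entries = sorted(p.name for p in Path(d).glob("*.py"))
    print(record_entries)  # ['a.py', 'b.py', 'c.py']
```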

    Signed-off-by: Richard Purdie [email protected]

    Resolves: python-poetry#

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.
    opened by rpurdie 3
  • PEP 440 compliance: do not implicitly allow pre-releases

    PEP 440 compliance: do not implicitly allow pre-releases

    Extracted from #402. Besides being a precondition of #402 it's a fix on its own. And it requires some coordination with downstream.

    Requires: python-poetry/poetry#7225 and python-poetry/poetry#7236

    PEP 440 says:

    Pre-releases of any kind, including developmental releases, are implicitly excluded from all version specifiers, unless they are already present on the system, explicitly requested by the user, or if the only available version that satisfies the version specifier is a pre-release.

    Nothing special about "> pre-release" or ">= pre-release".
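    The quoted rule can be sketched as a tiny predicate (an illustration of the PEP 440 behaviour, not the real resolver logic): a pre-release only satisfies a specifier when pre-releases are explicitly allowed, regardless of whether the version falls inside the range.

```python
def satisfies(in_range: bool, is_prerelease: bool,
              allow_prereleases: bool = False) -> bool:
    """PEP 440: pre-releases are implicitly excluded from specifiers."""
    if is_prerelease and not allow_prereleases:
        return False
    return in_range

# 1.1rc1 is inside ">1.0" but is still rejected by default:
print(satisfies(in_range=True, is_prerelease=True))                          # False
print(satisfies(in_range=True, is_prerelease=True, allow_prereleases=True))  # True
```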

    opened by radoering 1
  • validate extras against dependencies and in schema

    validate extras against dependencies and in schema

    Resolves: python-poetry/poetry#7226

    • [x] Added tests for changed code.
    • [ ] Updated documentation for changed code.

    With this change, extras are validated to contain only valid characters, and extras that reference dependencies that can't be found in the main dependency group raise a warning in poetry check (not in poetry lock/poetry install, though).
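    A sketch of the two checks described above (the name pattern and helper are assumptions for illustration, not poetry-core's actual code):

```python
import re

# Assumed extras-name pattern: alphanumeric, with ._- allowed inside.
EXTRA_NAME = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$")

def check_extras(extras: dict[str, list[str]], main_deps: set[str]) -> list[str]:
    """Return warnings for invalid extra names and unknown dependencies."""
    warnings = []
    for extra, deps in extras.items():
        if not EXTRA_NAME.match(extra):
            warnings.append(f"invalid extra name: {extra!r}")
        warnings += [
            f"extra {extra!r} references unknown dependency {d!r}"
            for d in deps if d not in main_deps
        ]
    return warnings

print(check_extras({"cli": ["click"], "bad name!": []}, main_deps={"requests"}))
```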

    opened by Panaetius 1