Deep learning operations reinvented (for pytorch, tensorflow, jax and others)

Overview

einops


Flexible and powerful tensor operations for readable and reliable code. Supports numpy, pytorch, tensorflow, and others.

Tweets

In case you need convincing arguments for setting aside time to learn about einsum and einops... Tim Rocktäschel, FAIR

Writing better code with PyTorch and einops 👌 Andrej Karpathy, AI at Tesla

Slowly but surely, einops is seeping in to every nook and cranny of my code. If you find yourself shuffling around bazillion dimensional tensors, this might change your life Nasim Rahaman, MILA (Montreal)


Tutorials

Tutorials are the most convenient way to see einops in action (and they currently serve as the documentation).

Installation

Plain and simple:

pip install einops

API

einops has a minimalistic yet powerful API.

Three operations are provided (the einops tutorial shows how these cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view and numerous reductions).

from einops import rearrange, reduce, repeat
# rearrange elements according to the pattern
output_tensor = rearrange(input_tensor, 't b c -> b c t')
# combine rearrangement and reduction
output_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)
# copy along a new axis 
output_tensor = repeat(input_tensor, 'h w -> h w c', c=3)

And two corresponding layers (einops keeps a separate version for each framework) with the same API.

from einops.layers.chainer import Rearrange, Reduce
from einops.layers.gluon import Rearrange, Reduce
from einops.layers.keras import Rearrange, Reduce
from einops.layers.torch import Rearrange, Reduce
from einops.layers.tensorflow import Rearrange, Reduce

Layers behave similarly to operations and have the same parameters (with the exception of the first argument, which is passed during call)

layer = Rearrange(pattern, **axes_lengths)
layer = Reduce(pattern, reduction, **axes_lengths)

# apply created layer to a tensor / variable
x = layer(x)

Example of using layers within a model:

# example given for pytorch, but code in other frameworks is almost identical  
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Rearrange

model = Sequential(
    Conv2d(3, 6, kernel_size=5),
    MaxPool2d(kernel_size=2),
    Conv2d(6, 16, kernel_size=5),
    MaxPool2d(kernel_size=2),
    # flattening
    Rearrange('b c h w -> b (c h w)'),  
    Linear(16*5*5, 120), 
    ReLU(),
    Linear(120, 10), 
)

Naming

einops stands for Einstein-Inspired Notation for operations (though "Einstein operations" is more attractive and easier to remember).

The notation was loosely inspired by Einstein summation (in particular by the numpy.einsum operation).

Why use einops notation?!

Semantic information (being verbose in expectations)

y = x.view(x.shape[0], -1)
y = rearrange(x, 'b c h w -> b (c h w)')

While these two lines are doing the same job in some context, the second one provides information about the input and output. In other words, einops focuses on interface: what is the input and output, not how the output is computed.

The next operation looks similar:

y = rearrange(x, 'time c h w -> time (c h w)')

but it gives the reader a hint: this is not an independent batch of images we are processing, but rather a sequence (video).

Semantic information makes the code easier to read and maintain.

More checks

Reconsider the same example:

y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')

The second line checks that the input has four dimensions, but you can also specify particular dimensions. That's opposed to just writing comments about shapes: comments aren't checked and don't prevent mistakes, as we all know.

y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)

Result is strictly determined

Below we have at least two ways to define the depth-to-space operation

# depth-to-space
rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)

There are at least four more ways to do it. Which one is used by the framework?

Such details are usually ignored, since most of the time they make no difference; but they can matter a lot (e.g. if you use grouped convolutions in the next stage), and you'd like this choice to be explicit in your code.
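
A quick numpy check (a toy tensor, shapes chosen purely for illustration) makes the difference concrete: the two patterns above produce the same shape but a different channel ordering.

import numpy as np
from einops import rearrange

x = np.arange(2 * 4 * 4).reshape(1, 2, 4, 4)  # b=1, c=2, h=4, w=4
a = rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
b = rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
assert a.shape == b.shape == (1, 8, 2, 2)
assert not np.array_equal(a, b)  # same shape, different channel order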

Uniformity

reduce(x, 'b c (x dx) -> b c x', 'max', dx=2)
reduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)
reduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)

These examples show that we don't need separate operations for 1d/2d/3d pooling: they are all defined in a uniform way.

Space-to-depth and depth-to-space are defined in many frameworks, but how about width-to-height?

rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)

Framework independent behavior

Even simple functions are defined differently by different frameworks

y = x.flatten() # or flatten(x)

Suppose x's shape was (3, 4, 5), then y has shape ...

  • numpy, cupy, chainer, pytorch: (60,)
  • keras, tensorflow.layers, mxnet and gluon: (3, 20)
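
With einops the intended result is explicit and identical across backends; a small sketch, assuming x is the (3, 4, 5) tensor above:

from einops import rearrange
y = rearrange(x, 'a b c -> (a b c)')  # full flatten: shape (60,)
y = rearrange(x, 'a b c -> a (b c)')  # keep the first axis: shape (3, 20)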

Independence of framework terminology

Example: tile vs repeat causes lots of confusion. To copy an image along its width:

np.tile(image, (1, 2))    # in numpy
image.repeat(1, 2)        # pytorch's repeat ~ numpy's tile

With einops you don't need to decipher which axis was repeated:

repeat(image, 'h w -> h (tile w)', tile=2)  # in numpy
repeat(image, 'h w -> h (tile w)', tile=2)  # in pytorch
repeat(image, 'h w -> h (tile w)', tile=2)  # in tf
repeat(image, 'h w -> h (tile w)', tile=2)  # in jax
repeat(image, 'h w -> h (tile w)', tile=2)  # in mxnet
... (etc.)

Supported frameworks

Einops works with ...

Contributing

Best ways to contribute are

  • spread the word about einops
  • if you like explaining things, alternative tutorials are very helpful
  • translating examples into languages other than English is also a good idea
  • use einops notation in your papers to strictly define used operations!

Supported python versions

einops works with python 3.6 or later.

Comments
  • einops.einsum

    Hey! Loving einops, so much that now I feel a bit sad about standard einsum not being able to use descriptive names for dimensions. It would be amazing if einops implemented einsum with the same conveniences.

    feature suggestion 
    opened by cgarciae 26
  • [Feature suggestion] Identifiers not on both sides of the expression

    This seems like it should be (intuitively) plausible:

    rearrange(x, 'b -> a b c', a=1, c=1)

    to essentially push a vector to be compatible with some other tensors (for broadcasting operations). Currently this throws an error.

    One (sort of ugly) workaround is:

    rearrange(x, '(a b c) -> a b c', a=1, c=1)

    However, it seems like this is a bit redundant and it obfuscates the intent a bit. Thoughts?

    feature suggestion 
    opened by jotaf98 16
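
    For the unit-length axes in this example, anonymous axes of length 1 (mentioned in the v0.3 release notes below) already give a related workaround; a minimal sketch:

    from einops import rearrange
    # new unit axes added on the right-hand side; lengths other than 1 remain the open request above
    y = rearrange(x, 'b -> 1 b 1')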
  • [Feature Request] functions on elements of 1 dimension: reorder (concatenate), and chunk

    Thank you for making our life easier when working with tensors. I have the following suggestions based on #50 and #20.

    A. Reorder and concatenation of items of different shapes

    A.1 Reorder elements of 1 dimension

    As suggested in #50, it is indeed useful to have an operation for reordering the elements of the channel dimension, especially for those working on images across different libraries (OpenCV, PIL). It is much better than doing it with bare indices.

    I totally agree with @remisphere that we can use reorder without misleading users.

    # instead of doing this
    out = imgs[:, [2, 0, 1, 3], :, : ]
    # we can use the below
    einops.reorder(imgs, 'batch [rg b a -> b rg a] h w', rg=2, b=1, a=1)
    

    A.2 Concatenation of items of different sizes on 1 dimension

    Since we only perform operations on a single dimension, we can concatenate multiple items with different sizes on that dimension. This easily handles the case mentioned in #20 and is extremely useful for those who use concatenate in their code. I use this kind of function many times to concatenate tensors of different shapes. For example:

    # three below tensors have different size on the 2nd dim
    print(x.shape) # [b, 10]
    print(y.shape) # [b, 15]
    print(z.shape) # [b, 20]
    
    # we can concatenate them as
    inputs = [x, y, z]
    out = einops.reorder(inputs, 'batch [x y z -> x y z]', x=10, y=15, z=20)
    

    The above call is consistent with einops.rearrange to concatenate inputs including items of the same shape.

    It is possible to split out back into its components x, y, z in three lines using the chunk function below:

    x = einops.chunk(out, 'batch [x yz -> x]', x=10)
    y = einops.chunk(out, 'batch [x y z -> y]', x=10, y=15)
    z = einops.chunk(out, 'batch [xy z -> z]', z=20)
    

    B. Chunking along 1 dimension

    In contrast with #50, I don't think it is a good idea to merge chunking into reorder. We can separate these functionalities into the above reorder and chunk. Chunking is used frequently when we want to sample parts of datasets and features.

    Example in #50:

    # remove the alpha channel and the bottom half of 256*256 images:
    einops.chunk(imgs, 'batch [rg b a -> b rg] [top bottom -> top] w', rg=2, b=1, top=128, batch=10)
    

    Split dataset into train and val

    train_len = int(len(dataset) * 0.8)
    train_split = einops.chunk(dataset, '[train val -> train] c h w', train=train_len)
    val_split = einops.chunk(dataset, '[train val -> val] c h w', train=train_len)
    

    And we can get the full dataset given train_split and val_split:

    dataset = einops.reorder([train_split, val_split], '[train val -> train val] c h w', train=len(train_split), val=len(val_split))
    
    feature suggestion 
    opened by davidnvq 10
  • Create einsum operation

    This creates the functional einsum function as requested on #73. CC @arogozhnikov @cgarciae

    The current implementation simply parses the string and converts it to einsum notation by mapping axis names to single characters (I use string.ascii_letters, starting from a, b, c etc).

    Currently, it has the following features:

    • Supports the backends: tensorflow, numpy, jax, pytorch, chainer, oneflow, keras, cupy.
    • Allows for an arbitrary number of tensors passed.
    • Allows ellipsis specification, including for multiple tensors, so long as it is provided on both the left and the right of the ->.

    It does not currently support

    • Reshape operations, such as "(batch channel) feature, feature -> batch channel".
    • Custom reduction operations.

    These could be added later if desired. Some backends do not support custom reductions in their einsum implementations so it will be a bit more work.

    I also added a docstring and some unittests (in tests/test_einsum.py).

    Here are some examples of use, with the numpy backend:

    # Filter a set of images:
    >>> batched_images = np.random.randn(128, 16, 16)
    >>> filters = np.random.randn(16, 16, 30)
    >>> result = einsum(batched_images, filters,
    ...                 "batch h w, h w channel -> batch channel") 
    
    >>> result.shape
    (128, 30)
    
    # Matrix multiplication, with an unknown input shape:
    >>> batch_shape = (50, 30)
    >>> data = np.random.randn(*batch_shape, 20)
    >>> weights = np.random.randn(10, 20)
    >>> result = einsum(weights, data, 
    ...                 "out_dim in_dim, ... in_dim -> ... out_dim")
    >>> result.shape
    (50, 30, 10)
    

    Note that the number of spaces next to the comma above is arbitrary: you could write either "in_dim, ..." or "in_dim , ..." - both will work.

    Eager to hear feedback on this!

    Cheers, Miles


    Edit 1: Got it working for repeated indices on one side (as used in, e.g., trace).
    Edit 2: Added support for chainer, oneflow, cupy, tensorflow.keras.
    Edit 3: Added many more tests, some mirroring those used in the np.einsum tests.
    Edit 4: More and more unit tests.
    Edit 5: Tweaked the syntax to have tensors first, pattern second. Adapted tests, and added new validation for the order of arguments.

    opened by MilesCranmer 8
  • [Feature] Jax/Flax Layers, especially Einmix?

    EinMix layer looks great but cannot be used with Jax since it's not a function. Would be great to have that EinMix layer in a Jax-based framework like Flax.

    feature suggestion 
    opened by lkhphuc 7
  • tensorflow layers

    I love your project, it has really changed the way I write my code. I wanted to be able to use it in the TensorFlow version of Keras as well. I only had to change one thing since the dimensions are represented differently.

    opened by adam-r-kowalski 7
  • The library is not typed

    Describe the bug

    MR #211 added py.typed to enable type-checking support. However, the library is not fully typed (at least not the public interface; see for instance https://github.com/arogozhnikov/einops/blob/master/einops/layers/torch.py).

    Now mypy is complaining on each einops.layers.torch class call (Reduce, Rearrange, etc.).

    -> error: Call to untyped function "Rearrange" in typed context [no-untyped-call]

    The library should not have a py.typed file until its public interface is fully typed.

    Reproduction steps: run mypy on any source file containing, for instance, from einops.layers.torch import Rearrange

    Expected behavior: mypy should not raise [no-untyped-call] errors

    Your platform: Ubuntu 22.04

    enhancement 
    opened by RomainBrault 6
  • [BUG] Updated: einops does not support torch.jit.script

    Updated: The original issue was closed, despite the fact that the provided snippet still uses a module (as opposed to a function) and was tested with newer and older PyTorch versions. Opening the issue here in hope of restarting the conversation and finding possible solutions.

    Describe the bug: The einops package does not support torch.jit.script. The following snippet tries a simple Rearrange operation, using a module to test this; the operation turns out not to be supported.

    Reproduction steps: The following snippet illustrates a concise way of reproducing this issue:

    import torch
    import torch.nn as nn
    from einops.layers.torch import Rearrange
    
    class SimpleRearrange(nn.Module):
        def __init__(self):
            super().__init__()
            self.layer = nn.Sequential(Rearrange('b c h w -> b h w c'))
        
        def forward(self, x):
            result = self.layer(x)
            return result
        
    net = SimpleRearrange()
    net.eval()
    with torch.no_grad():
        torch.jit.script(net)
    

    Expected behavior: This is the expected output (screenshot attachment omitted).

    Your platform: einops version 0.3.2, Python 3.8.12, PyTorch 1.10.0, CUDA 10.2

    bug 
    opened by ahatamiz 6
  • feat: Add Flax Layers

    Adds Flax Modules for einops operations. More details can be found in #153

    einops supports JAX arrays, so I'm unsure whether I should add tests or not?

    CC: @arogozhnikov

    opened by SauravMaheshkar 5
  • [Feature suggestion] splatting list of anonymous dimensions of any length

    Hi Alex again! :wave: Thank you again for this wonderful work :pray: I still can't get over how beautiful the abstraction is, and how much of a multiplier it is in my work.

    I was wondering what your thoughts are about this potential extension to einops. A common pattern I run into is the need to flatten dimensions (for some operation) and then reconstitute them later. To do this, I usually need to save the dimensions to a list or tuple, and then later run a reshape on the output. Doing this with einops rearrange is currently difficult, as it is unable to take in a list of dimensions (it needs each dimension to be explicitly named), and there is no way to name the ellipses.

    I hacked up a prototype here https://github.com/lucidrains/memorizing-transformers-pytorch/commit/b190c5ec6144d6dafb11eb356fe51dd10bccc052 to give you an idea of the use-case. Would be curious to hear your thoughts, whether you can think of some even more general way to handle this with the internal ellipsis code you have already, or whether you think this doesn't belong in einops proper

    Thank you!

    feature suggestion 
    opened by lucidrains 5
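
    The pack and unpack operations listed in the v0.6.0 release notes below cover this flatten-and-restore pattern; a minimal sketch, with some_op standing in for a hypothetical user operation on the flattened tensor:

    from einops import pack, unpack
    # flatten any number of leading axes into one, operate, then restore them
    flat, ps = pack([x], '* d')             # ps remembers the shapes of the packed axes
    flat = some_op(flat)                    # hypothetical op that keeps the 2d shape
    [x_restored] = unpack(flat, ps, '* d')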
  • einops does not support torch.jit.script ?

    Describe the bug: Thanks for this great work. In recent updates, it is mentioned that einops supports torch.jit.script for PyTorch layers. @yiheng-wang-nv and I have been looking into this to support TorchScript for a number of models. However, we are not able to use this functionality for simple operations such as Rearrange.

    Reproduction steps: The following snippet illustrates a concise way of reproducing this issue:

    import torch
    import torch.nn as nn
    from einops.layers.torch import Rearrange
    
    class SimpleRearrange(nn.Module):
        def __init__(self):
            super().__init__()
            self.layer = nn.Sequential(Rearrange('b c h w -> b h w c'))
        
        def forward(self, x):
            result = self.layer(x)
            return result
        
    net = SimpleRearrange()
    net.eval()
    with torch.no_grad():
        torch.jit.script(net)
    

    Expected behavior This is the expected output:

    check einops

    Your platform: einops version 0.3.2, Python 3.8.12, PyTorch 1.6.0, CUDA 10.2

    Based on this, I believe einops does not support torch.jit.script, unless we are missing something. Appreciate your inputs here.

    bug 
    opened by ahatamiz 5
  • instructions for contributing?

    Are there instructions for how to set up the project for contributing? The link on the README is broken. I tried building the project locally, but I had trouble with dependencies; the pyproject.toml file doesn't list any dependencies, but the tests rely on e.g. numpy. Sorry if this is a stupid question; I'm pretty new to this!

    opened by elenishor 1
  • einops tutorial on youtube

    Hi,

    The API design, documentation, and examples in this repository are so good that they do not need any other tutorial.

    But to make others aware of the existence of this fantastic work I have created a tutorial on it. https://www.youtube.com/watch?v=xGy75Pjsqzo

    I have only talked about the rearrange API and may later cover other goodies from this package.

    Many thanks for this great work

    Regards Kapil

    opened by ksachdeva 1
  • [Feature suggestion] UnPack proportionally

    1. Try to collect use-cases: If you don't have access to the input API to use pack but still want to unpack with einops, you would need to know the size of the output. This is not ideal if you only know the proportions that you want to unpack into.
    x = MyBigPretrainedLibraryImportModel(size='base')  # base, large, huge 
    
    # Instead of
    b, s, c = x.shape  # c is different depending on model size
    c_param = c // 2
    x_mean, x_logvar = unpack(x, [[c_param], [c_param]], 'b s *')
    
    # Accept this
    x_mean, x_logvar = unpack(x, [[0.5], [0.5]], 'b s *')
    
    2. Integrity - does it interplay well with existing operations and notation in einops? pack and unpack currently only accept int, so accepting float shouldn't break anything. In the case of float it will be a bit more work, like ensuring all proportions sum to 1, that each proportion scales up to a proper integer, etc.
    assert c == 512
    x_1, x_2, x_3 = unpack(x, [[1/3], [1/3], [1/4]], 'b s *')  # Raises an error as 1/3 + 1/3 + 1/4 != 1
    x_1, x_2, x_3 = unpack(x, [[1/3], [1/3], [1/3]], 'b s *')  # Raises an error as 512 is not divisible by 3
    
    3. Readability: The complexity of checking the validity of proportions might not be worth adding to the library, and could instead be handled by a few lines of user code.
    feature suggestion 
    opened by lkhphuc 0
  • [Feature suggestion] Easy inverse for "rearrange" (with code suggestion)

    Francois Fleuret suggested that it'd be nice if there were a function whereby einops.rearrange could be easily 'inverted', i.e. undone or "transformed back".

    I replied that a "wrapper" function or class shouldn't be too hard, and wrote one at the following link, which includes a few examples: https://gist.github.com/drscotthawley/81865a5c5e729b769486efb9c3f2249d

    Whether such a functionality just remains as an "external wrapper" that users can add-on, or somehow gets added to the einops codebase (maybe not as this class, but something similar) is up to you, but wanted to share it here to add to the conversation!

    One way to include it in the existing codebase could be to rename einops.rearrange to einops._rearrange and then have the new einops.rearrange = RearrangeWrapper(), where RearrangeWrapper's sub-methods call _rearrange (as shown in my gist example). If that sounds interesting then I could submit a PR.

    feature suggestion 
    opened by drscotthawley 3
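
    The core of such a wrapper is simply swapping the two sides of the pattern; a minimal user-side sketch (not part of einops; names and axis lengths are illustrative):

    from einops import rearrange

    def inverse_rearrange(tensor, pattern, **axes_lengths):
        # undo a previous rearrange by swapping the left and right sides of its pattern
        left, right = pattern.split('->')
        return rearrange(tensor, f'{right.strip()} -> {left.strip()}', **axes_lengths)

    y = rearrange(x, 'b c h w -> b (c h w)')                                # x: (b, 3, 32, 32)
    x_back = inverse_rearrange(y, 'b c h w -> b (c h w)', c=3, h=32, w=32)  # decomposed axes need lengths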
  • [Feature suggestion] Naming einops keras layers

    Hello, it would be really nice if one could name the einops layers just like any other keras layer. Right now, the following code triggers an error.

    from einops.layers.tensorflow import Rearrange
    
    tf.keras.Sequential([
        tf.keras.Input((224, 224, 3), name="inputs"),
        Rearrange("b h w c -> b c h w", name="rearrange_layer_1"),
    ])
    

    The error goes away if we do not name the einops layer.

    from einops.layers.tensorflow import Rearrange
    
    tf.keras.Sequential([
        tf.keras.Input((224, 224, 3), name="inputs"),
        Rearrange("b h w c -> b c h w"),
    ])
    

    Naming layers is very useful in keras, especially when using Functional models, to extract intermediate representations or to add new nodes to the graph. This process of extracting nodes is done by accessing the model's layer with model.get_layer(name).

    feature suggestion 
    opened by Sangohe 0
Releases (v0.6.0)
  • v0.6.0(Nov 9, 2022)

    What's Changed

    • Introduce einops.pack and einops.unpack by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/222
    • Update example to match description by @EPronovost in https://github.com/arogozhnikov/einops/pull/217
    • Improve type hinting by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/221
    • Cosmetics for pack/unpack: documentation and comments by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/223
    • Preparations for 0.6.0 release by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/224

    New Contributors

    • @EPronovost made their first contribution in https://github.com/arogozhnikov/einops/pull/217

    Announcement

    Sunsetting experimental mxnet support: no demand and package is outdated, with numerous deprecations and poor support of corner cases. 0.6.0 will be the last release with mxnet backend.

    Full Changelog: https://github.com/arogozhnikov/einops/compare/v0.5.0...v0.6.0

  • v0.5.0(Oct 3, 2022)

    What's Changed

    • Create einsum operation by @MilesCranmer in https://github.com/arogozhnikov/einops/pull/197
    • Add flax layers by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/214
    • Add oneflow backend by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/181
    • Add oneflow backend by @rentainhe in https://github.com/arogozhnikov/einops/pull/180
    • Fix wrong error message by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/196
    • Clarify documentation re. default bias on EinMix by @maxeonyx in https://github.com/arogozhnikov/einops/pull/201
    • corrected spelling mistake: einsops -> einops by @cs-mshah in https://github.com/arogozhnikov/einops/pull/205
    • add mean-reduction for bfloat16, fix #206 by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/209
    • add py.typed (adopt PEP 561) by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/211
    • Delete tensorflow-specific readme file by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/212
    • Adopt pypa/hatch by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/213

    New Contributors

    • @rentainhe made their first contribution in https://github.com/arogozhnikov/einops/pull/180
    • @MilesCranmer made their first contribution in https://github.com/arogozhnikov/einops/pull/197
    • @maxeonyx made their first contribution in https://github.com/arogozhnikov/einops/pull/201
    • @cs-mshah made their first contribution in https://github.com/arogozhnikov/einops/pull/205

    Full Changelog: https://github.com/arogozhnikov/einops/compare/v0.4.1...v0.5.0

  • v0.4.1(Mar 4, 2022)

    What's Changed

    • fix numpy dependency problem by @lucidrains in https://github.com/arogozhnikov/einops/pull/176

    New Contributors

    • @lucidrains made their first contribution in https://github.com/arogozhnikov/einops/pull/176

    Full Changelog: https://github.com/arogozhnikov/einops/compare/v0.4.0...v0.4.1

  • v0.4.0(Jan 18, 2022)

    Main Changes

    • torch.jit.script is supported (in addition to previous torch.jit.trace)
    • EinMix (swiss-knife for next-gen MLPs) is added. A much-improved einsum/linear layer is now available.
    • einops.repeat in torch avoids creating a copy when possible
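
    A minimal sketch of an EinMix layer (axis names and sizes here are illustrative, not taken from the release):

    from einops.layers.torch import EinMix
    # a learned linear projection over the channel axis, keeping batch and time intact
    mix = EinMix('b t c -> b t c_out', weight_shape='c c_out', bias_shape='c_out', c=256, c_out=512)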

    Detailed PRs

    • Update documentation by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/137
    • Multiple updates in docs, add Rearrange layer to torch test by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/138
    • Add support for torch scripting of einops layers by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/139
    • Introduce EinMix - swiss-knife for next-gen MLPs by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/142
    • Docs improvements: wording, visual style, EinMix by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/143
    • Move docs to a separate folder by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/144
    • Type hinting + add testing for EinMix composition/decomposition by @arogozhnikov in https://github.com/arogozhnikov/einops/pull/154
    • Reject repeated axes in parse_shape by @dmitriy-serdyuk in https://github.com/arogozhnikov/einops/pull/159
    • Enable ellipsis in patterns for parse_shape. by @dmitriy-serdyuk in https://github.com/arogozhnikov/einops/pull/162

    New Contributors

    • @dmitriy-serdyuk made their first contribution in https://github.com/arogozhnikov/einops/pull/159

    Full Changelog: https://github.com/arogozhnikov/einops/compare/v0.3.2...v0.4.0

  • v0.3.2(Aug 31, 2021)

    • documentation and domain (#75, #76, #77, #79, #81), thanks to @cgarciae
    • typos and spellcheck (thanks to @ollema and @GarrettMooney)
    • moved away from keras to tf.keras
    • adjustments to tutorials and testing
    • other minor improvements
  • v0.3(Sep 8, 2020)

    • new operation: repeat (includes repeat/tiling logic, copying along a new dimension)
    • anonymous axes (specified by their length, not name) are allowed:
    grayscale = reduce(image, 'h w 3 -> h w', 'mean')
    image_with_identical_channels = repeat(grayscale, 'h w -> h w 3')
    
    • 1 can be used to refer to all dimensions of length 1
    • reduced restrictions on axis names: almost any python identifier can now be an axis name
    • reduction can be provided as a callable, not just a string
    • tutorials were slightly updated to include these changes
    • core code underwent refactoring and is now better documented
    • support: keras layers are deprecated in favor of tf.keras layers
    • experimental layer introduced: WeightedEinsum (RFC: #71)
  • v0.2(Feb 15, 2020)

    • experimental support for Jax framework was added
    • testing code was rewritten and updated to work
    • tf2 has always worked with einops, but the tests had to be updated; tests are now updated for tf2
    • tf readme, minor additions, comments, etc.

    Thanks to contributors

  • v0.1(Nov 1, 2018)

    This release introduces einops, as well as its notation.

    Initial release API: Operations (ops)

    • einops.rearrange and einops.reduce

    Auxiliary

    • einops.asnumpy and einops.parse_shape

    Layers (for chainer, gluon, keras and torch)

    • Rearrange and Reduce

    Supported frameworks:

    • numpy
    • pytorch
    • tensorflow eager
    • cupy
    • chainer
    • gluon
    • tensorflow
    • mxnet (experimental)
    • and keras (experimental)
Owner

Alex Rogozhnikov (ML + Science at scale)