A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch.

Overview

WebDataset

WebDataset is a PyTorch Dataset (IterableDataset) implementation that provides efficient access to datasets stored in POSIX tar archives, using only sequential/streaming data access. This brings substantial performance advantages in many compute environments, and it is essential for very large-scale training.

While WebDataset scales to very large problems, it also works well with smaller datasets and simplifies creation, management, and distribution of training data for deep learning.

WebDataset implements the standard PyTorch IterableDataset interface and works with the PyTorch DataLoader. Access to a dataset is as simple as:

import torch
import webdataset as wds

dataset = wds.WebDataset(url).shuffle(1000).decode("torchrgb").to_tuple("jpg;png", "json")
dataloader = torch.utils.data.DataLoader(dataset, num_workers=4, batch_size=16)

for inputs, outputs in dataloader:
    ...

In that code snippet, url can refer to a local file, an HTTP server, a cloud storage object or object store, or even the output of arbitrary command pipelines.
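
For illustration, here are a few possible forms of url; the server, bucket, and file names below are hypothetical, and the pipe: form simply runs the given shell command and streams its standard output (the same convention used with gsutil in the comments further down this page):

import braceexpand
import webdataset as wds

# A single local shard (hypothetical path).
dataset = wds.WebDataset("/data/shards/train-000123.tar")

# Many shards on an HTTP server, expanded with braceexpand (hypothetical server).
urls = list(braceexpand.braceexpand("http://storage.example.com/shards/train-{000000..000099}.tar"))
dataset = wds.WebDataset(urls)

# The output of an arbitrary command pipeline (hypothetical GCS bucket).
dataset = wds.WebDataset("pipe:gsutil cat gs://my-bucket/shards/train-000123.tar")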

WebDataset fulfills a similar function to TensorFlow's TFRecord/tf.Example classes, but it is much easier to adopt because it does not require any kind of data conversion: data is stored in exactly the same format inside the tar files as it is on disk, and all preprocessing and data augmentation code remains unchanged.

Documentation

Installation

$ pip install webdataset

For the GitHub version:

$ pip install git+https://github.com/tmbdev/webdataset.git

Related Libraries and Software

The AIStore server provides an efficient backend for WebDataset; it functions like a combination of web server, content distribution network, P2P network, and distributed file system. Together, AIStore and WebDataset can serve input data from rotational drives distributed across many servers at the speed of local SSDs to many GPUs, at a fraction of the cost. We can easily achieve hundreds of MBytes/s of I/O per GPU even in large, distributed training jobs.

The tarproc utilities provide command line manipulation and processing of webdatasets and other tar files, including splitting, concatenation, and xargs-like functionality.

The tensorcom library provides fast three-tiered I/O; it can be inserted between AIStore and WebDataset to permit distributed data augmentation and I/O. It is particularly useful when data augmentation requires more CPU than the GPU server has available.

You can find the full PyTorch ImageNet sample code converted to WebDataset at tmbdev/pytorch-imagenet-wds.

Comments
  • Unable to reach maximum download speed

    I have an image dataset of about 250 1GiB tars on GCS, which I'm loading by using urls = f'pipe:gsutil cat {urls}'.

    I tested the maximum download speed with gsutil -m cp ... and got about 750MiB/s with 96 processes (on a 96 vCPU GCP VM), which would amount to about 15000 images/s. I'm testing the data I/O only, no accelerators.

    However, using wds.MultiDataset (96 workers) with basic decode, to_tuple etc steps (and minimal cropping with albumentations) and batching to 256 results in much slower download & processing, about 2500 images/s (this is actually already really fast but I'd like to try to get it even faster :D ).

    So I'm not bottlenecked by download speed, and CPU utilization for the 96 processes shows <30%, so I'm not bottlenecked by the CPU either...

    I profiled the code with pyinstrument and the bottleneck seems to be somewhere in the batching (see the attached pyinstrument screenshot).

    I'm working on non-public data, but basically same thing happens with the docs openimages example.

    So given that gsutil -m results in fast download speeds but webdataset doesn't, and both use Python (?) multiprocessing, maybe gsutil is doing something more efficiently here? I don't really know the multiprocessing etc. libraries well at all, so I'm at a bit of a loss here...
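
    Not a diagnosis of this particular report, but one thing worth trying in setups like this (a sketch, with a hypothetical bucket): do the batching inside the dataset pipeline with .batched(...) and turn off the DataLoader's own collation with batch_size=None, so that collation happens in the worker processes rather than in the main process:

    import braceexpand
    import torch
    import webdataset as wds

    shard_spec = "gs://my-bucket/shards/train-{000000..000249}.tar"  # hypothetical bucket
    urls = [f"pipe:gsutil cat {u}" for u in braceexpand.braceexpand(shard_spec)]

    dataset = (
        wds.WebDataset(urls)
        .decode("rgb")                 # decode images to RGB arrays
        .to_tuple("jpg;png", "json")
        .batched(256)                  # build batches inside each worker
    )

    # batch_size=None disables the DataLoader's automatic batching; each item
    # a worker yields is already a batch of 256 samples.
    loader = torch.utils.data.DataLoader(dataset, num_workers=8, batch_size=None)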

    opened by harpone 11
  • syntax error in 0.2.4 release

    Getting a syntax error on .../webdataset/cache.py, line 43: while data := stream.read(chunk_size):. It looks like := is only valid syntax from Python 3.8 onward. Is webdataset limited to Python >= 3.8?
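
    For context, the := (walrus) operator only parses on Python 3.8 and later; on 3.6/3.7 the same loop needs an explicit assignment before the loop and at the end of each iteration. A small standalone illustration (the stream and chunk size here are made up, not the library's actual code):

    import io

    stream = io.BytesIO(b"0123456789" * 100)  # stand-in for a real file or network stream
    chunk_size = 64

    # Python 3.8+ form, as in webdataset/cache.py:
    #     while data := stream.read(chunk_size):
    #         handle(data)

    # Equivalent form that also runs on Python 3.6/3.7:
    data = stream.read(chunk_size)
    while data:
        print(len(data))  # placeholder for the real per-chunk work
        data = stream.read(chunk_size)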

    enhancement 
    opened by czmrand 9
  • npy support

    In many cases, reading tensor data using numpy.load is many times faster than loading a pickle file. Would it be possible to add .npy support to the basic_handlers?
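
    Until .npy is handled by the built-in decoders, one workaround is to decode it in a .map(...) step; the sketch below assumes shards with hypothetical input.npy / target.npy fields and simply runs numpy.load on the raw bytes:

    import io

    import numpy as np
    import webdataset as wds

    def decode_npy_fields(sample):
        # Replace raw .npy payloads in the sample dict with numpy arrays.
        out = dict(sample)
        for key, value in sample.items():
            if key.endswith("npy") and isinstance(value, bytes):
                out[key] = np.load(io.BytesIO(value), allow_pickle=False)
        return out

    url = "train-000000.tar"  # hypothetical shard containing input.npy / target.npy entries
    dataset = wds.WebDataset(url).map(decode_npy_fields).to_tuple("input.npy", "target.npy")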

    enhancement 
    opened by shayanfazeli 7
  • DDP training problem

    Hi, how can I make sure that the batches generated by WebLoader contain different data on each node when using multi-node training? Is there any up-to-date example code using webdataset for DDP training?
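
    Not an official answer, but one simple pattern (a sketch, with hypothetical shard names) is to give each rank its own disjoint slice of the shard list before constructing the dataset, so that different nodes never read the same shards:

    import os

    import braceexpand
    import webdataset as wds

    all_urls = list(braceexpand.braceexpand("train-{000000..000999}.tar"))  # hypothetical shards

    # RANK and WORLD_SIZE are set by torchrun / torch.distributed launchers.
    rank = int(os.environ.get("RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))

    urls = all_urls[rank::world_size]  # each rank sees a disjoint subset of shards
    dataset = wds.WebDataset(urls).shuffle(1000).decode("torchrgb").to_tuple("jpg;png", "json")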

    opened by marson666 6
  • Adding option to shuffle shards before splitting to workers, based on current epoch

    What

    This PR adds an option to shuffle the shards before splitting them between nodes and workers. We coordinate this shuffling via a shared random seed, ensuring shards are properly distributed between workers. With epoch-based training loops, the idea is to set this random seed to a different number every epoch, so that shards are distributed differently at each iteration. This is completely optional and should not affect existing workflows in any way. If users choose to use it, the only change they need to make is to call dataloader.set_epoch(epoch) at each epoch before iterating.

    Why

    Without this, each worker always sees the same subset of shards at every epoch when using the standard nodesplitter and split_by_worker. We found this to have a drastic negative impact on performance when training contrastive models, since it limits the possibilities of which datapoints are being contrasted. With this fix, the issue was substantially mitigated.

    How

    The fix is two small modifications. One is to introduce a set_epoch method, which informs the dataloader (and everything that composes it up to the ShardList) of the current epoch. The second one is to use that as a random seed for shuffling shards before they are being split between workers and nodes.
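
    In spirit, the change amounts to something like the following sketch (a simplified illustration of the idea, not the PR's actual diff): derive a deterministic permutation of the shard list from the epoch number before the list is split across nodes and workers, so that every process agrees on the ordering.

    import random

    def shuffled_shards(urls, epoch, seed=0):
        """Shuffle the shard list deterministically for a given epoch.

        Every node and worker calls this with the same epoch and seed, so they
        all agree on the permutation and the subsequent splitting stays consistent.
        """
        urls = list(urls)
        random.Random(seed + epoch).shuffle(urls)
        return urls

    # Hypothetical epoch loop:
    # for epoch in range(num_epochs):
    #     urls = shuffled_shards(all_urls, epoch)   # or: dataloader.set_epoch(epoch), as proposed here
    #     ...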

    We have found this to significantly improve performance in our contrastive learning experiments, and would greatly appreciate if you could take a look at this!

    cc @mitchnw

    documentation 
    opened by gabrielilharco 6
  • aws storage

    Google Cloud Storage buckets work well (for the most part) with this framework. Is there any suggested/preferred methodology for using Amazon S3 storage with it (putting tar files in an S3 bucket and using the URL to load them on a local machine)?
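
    One approach that mirrors the pipe:gsutil pattern used for GCS elsewhere on this page is to stream objects with the AWS CLI; the bucket and prefix below are hypothetical, and this assumes the aws CLI is installed and credentials are configured:

    import braceexpand
    import webdataset as wds

    # `aws s3 cp <object> -` writes the object to stdout, which WebDataset then reads.
    urls = [
        f"pipe:aws s3 cp s3://my-bucket/shards/{name} -"
        for name in braceexpand.braceexpand("train-{000000..000099}.tar")
    ]
    dataset = wds.WebDataset(urls).decode("torchrgb").to_tuple("jpg;png", "json")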

    opened by shayanfazeli 6
  • Add ability to re-create bit-exact Webdatasets

    Currently it is impossible to re-create bit-exact Webdatasets, as each file in the Tar archive has a different mtime. This has slightly annoying implications for file caching and versioning, as you have to deal with changed-but-not-really-changed files all the time.

    My first question is: Is this on purpose? Does mtime encode any valuable information for the user?

    If no, why not set it to a fixed value, e.g. 1970-01-01?

    If yes, why not make it overridable by the user, e.g. by a mtime kwarg?

    Currently, this PR does the latter: It introduces a parameter that allows me to override this value.

    To reproduce

    import webdataset as wds
    
    for f in ["test1.tar", "test2.tar"]:
        with wds.TarWriter(f) as sink:
            sink.write(
                {
                    "__key__": "sample0000",
                    "input.txt": "1",
                    "output.txt": "2",
                },
            )
    
    $ md5sum *.tar
    0f90a5b6ca6f0e423685e23cad872d36  test1.tar                                                                                                                                                                                                                                     
    49e91eaa9c54125897b2286af2757c0e  test2.tar
    $ tar tvf test1.tar 
    -r--r--r-- bigdata/bigdata   1 2021-11-28 15:16 sample0000.input.txt
    -r--r--r-- bigdata/bigdata   1 2021-11-28 15:16 sample0000.output.txt
    $ tar tvf test2.tar 
    -r--r--r-- bigdata/bigdata   1 2021-11-28 15:16 sample0000.input.txt     # the dates are off by microseconds
    -r--r--r-- bigdata/bigdata   1 2021-11-28 15:16 sample0000.output.txt    # the dates are off by microseconds
    

    Behaviour after this PR

    With this PR it is possible to override the mtime parameter of TarWriter.write(...), and to set it to arbitrary fixed values:

    import webdataset as wds
    
    for f in ["test1.tar", "test2.tar"]:
        with wds.TarWriter(f) as sink:
            sink.write(
                {
                    "__key__": "sample0000",
                    "input.txt": "1",
                    "output.txt": "2",
                },
                mtime=0.0
            )
    

    which creates bit-exact copies of the output file:

    $ md5sum *.tar
    b1de8150b28126256f05943741b2b5ab  test1.tar
    b1de8150b28126256f05943741b2b5ab  test2.tar
    $ tar tvf test1.tar 
    -r--r--r-- bigdata/bigdata   1 1970-01-01 01:00 sample0000.input.txt
    -r--r--r-- bigdata/bigdata   1 1970-01-01 01:00 sample0000.output.txt
    $ tar tvf test2.tar 
    -r--r--r-- bigdata/bigdata   1 1970-01-01 01:00 sample0000.input.txt
    -r--r--r-- bigdata/bigdata   1 1970-01-01 01:00 sample0000.output.txt
    
    opened by nils-werner 5
  • Remove torch from dependencies

    WebDataset 0.1.76 which I was using before doesn't have torch as a dependency. However, newer versions do.

    I don't think it's a good idea since TarWriter (and maybe other webdataset parts) can be used for creating the dataset in a solely data processing environment. Bringing in a huge (torch 1.10.0 is 1.8 GB!) dependency is a very big overhead.

    What do you guys this?

    opened by danielgafni 5
  • Large size of tar/shard writer output

    I have a text dataset with 20 GB of data and I tried to use webdataset ShardWriter/TarWriter to convert it. Unfortunately, the initial 20 GB becomes astonishingly large after the conversion; here are the methods I tried and the resulting output sizes on disk.

    | Method | Output Size (GB) | Data stored |
    |-------------------|------------------|------------------------------------|
    | ShardWriter pth | 800 | int32 tensors of variable size |
    | ShardWriter pdy | 300 | Raw text data |
    | ShardWriter ten | 260 | int32 numpy array of variable size |
    | ShardWriter npy | 300 | int32 numpy array of variable size |
    | ShardWriter json | 300 | Raw text data |
    | tarp cli | 210 | Raw text data |

    Is this the expected behavior? Why is raw data stored in tar files so much larger than the original data?
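
    One factor worth keeping in mind (not necessarily the whole explanation here) is plain tar overhead: every member gets a 512-byte header and its data is padded to a 512-byte boundary, so millions of very small text samples can inflate dramatically. A standalone demonstration, independent of WebDataset:

    import io
    import tarfile

    buf = io.BytesIO()
    payload = b"42"  # a 2-byte "sample"

    with tarfile.open(fileobj=buf, mode="w") as tar:
        for i in range(1000):
            info = tarfile.TarInfo(name=f"sample{i:06d}.txt")
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

    # ~1 MB of tar for ~2 KB of payload: 512-byte header + 512-byte padded data per member.
    print(len(buf.getvalue()))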

    opened by huberemanuel 5
  • True random access (i.e. Dataset vs. IterableDataset)

    Hi, this is a really neat library! However it would be nice to have a simpler interface to TAR files that allows random access, instead of enforcing sequential access. Let me explain why.

    The most common use-case for academic labs such as mine are not terabyte-scale remote datasets where sharding is common, but several gigabytes datasets with a few million small images.

    So, a dataset fits in a single machine; but due to automatic scheduling in a cluster we cannot have all datasets in all machines. In this context, the easiest solution is to copy the dataset to the local machine at the start of training. However, millions of small files really strain the filesystem (both for copying and accessing during training). So copying and reading from a single TAR file would be ideal -- and WebDataset seems (on the surface) to do this well.

    But the constraints of the IterableDataset are maybe a step too far. We now have to make decisions about how many shards to use, sizes of rolling buffers for shuffling, etc. This adds a ton of friction, and the uneasy feeling that now the samples are not sufficiently IID for training if you make a wrong decision. Compare this to the ease of training with tons of small images in a filesystem.

    I was trying to use WebDataset and get colleagues to adopt it, but this is a big wall for adoption. Uncompressed TAR files allow random access. Could this not be a simpler entry point for WebDataset? A user who wants to scale things up would find it easy to adapt to the IterableDataset later, but I think that many users would be perfectly happy with the random access version, which is much less restrictive.
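
    For what it's worth, an uncompressed tar can already be read in a random-access fashion with the standard library; the sketch below (a map-style torch.utils.data.Dataset, not part of WebDataset, with a hypothetical <key>.jpg layout) builds an index once and then seeks per item:

    import tarfile

    from torch.utils.data import Dataset

    class RandomAccessTar(Dataset):
        """Map-style dataset over an uncompressed tar of <key>.jpg members (hypothetical layout)."""

        def __init__(self, path):
            self.tar = tarfile.open(path, "r:")  # uncompressed tar only
            self.members = [m for m in self.tar.getmembers() if m.name.endswith(".jpg")]

        def __len__(self):
            return len(self.members)

        def __getitem__(self, index):
            member = self.members[index]
            data = self.tar.extractfile(member).read()  # seeks directly to the member
            return member.name, data                    # decode/augment downstream as usual

    # ds = RandomAccessTar("dataset.tar")
    # Note: keep num_workers=0, or open the tar lazily per worker, since an open
    # tarfile handle should not be shared across DataLoader worker processes.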

    enhancement 
    opened by jotaf98 5
  • Serious drop in data loading speed observed

    Has anyone else noticed download issues with webdataset, about a 10x drop in data loading speed? I'm observing slowdowns with everything (on GCS at least), for example also with @tmbdev's openimages dataset...

    Please see the gist here. Should be ready to run.

    In my earlier benchmarks I was able to get about 2000 img/s with 8 processes with the above script (50 minibatches of size 256 in about 6.5 s), but now I'm getting about 400 img/s tops.

    I'm on a GCP VM. Tried downgrading various libraries, spun up a VM from an image from last July etc... gsutil multiprocessing downloads from GCS are still very fast (~570MiB/s => 11400 openimages img/s)

    I'm totally confused what's going on!

    opened by harpone 5
  • Periods in base filename interpreted as extensions

    It appears that periods in the base part of the filename cause issues. For example the filename ./235342 Track 2.0 (Clean Version).mp3 leads to an unexpected key of 0 (clean version).mp3. This caused sporadic downstream issues like:

    ValueError: didn't find ['mp3'] in ['__key__', '__url__', '0 (clean version).json', '0 (clean version).mp3']

    Looking into the code, it appears that this is by design in order to support multiple extensions like ".seg.jpg", but it would be nice to mention this in the documentation to avoid surprises.
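
    As a rough illustration of that convention (this mirrors the behaviour observed above, not webdataset's exact code): the basename is split at its first period, everything before it becomes the sample key, and everything after it becomes the field name:

    import os

    def naive_key_split(path):
        """Split a member path into (sample_key, field) at the first '.' of the basename."""
        dirname, basename = os.path.split(path)
        key, _, field = basename.partition(".")
        return os.path.join(dirname, key), field

    print(naive_key_split("images/0001.seg.jpg"))
    # ('images/0001', 'seg.jpg')  -- multi-extensions like .seg.jpg stay intact

    print(naive_key_split("./235342 Track 2.0 (Clean Version).mp3"))
    # ('./235342 Track 2', '0 (Clean Version).mp3')  -- the period in the title acts as a separator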

    opened by iceboundflame 0
  • gopen_curl fails on windows

    Loading a tarfile with a url as a webdataset fails because of a wrongly formatted curl command for Windows.

    The command is constructed here: https://github.com/webdataset/webdataset/blob/main/webdataset/gopen.py#L195. Single quotes around the URL lead to improper interpretation of URLs that contain the & symbol on a Windows machine.

    enhancement 
    opened by pratikgujjar 1
  • Checkpoint support

    Hello,

    I am wondering if there is any support for checkpointing in WebDataset. Our use case is as follows: we have a series of samples [s_1, s_2, s_3, ..., s_n]. If training crashes, can WebDataset support the following?

    • When training is checkpointed, the index from which WebDataset is loading samples is also checkpointed.
    • When resuming from a checkpoint, WebDataset starts loading samples from the checkpointed index (one possible workaround is sketched below).
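
    There is no built-in answer quoted in this thread, but a crude workaround (a sketch that assumes the shard order is deterministic between runs, e.g. no shuffling) is to record how many samples have been consumed and skip that many on resume; note the skipped samples are still read and decoded, so this trades startup time for simplicity:

    import itertools

    import braceexpand
    import webdataset as wds

    urls = list(braceexpand.braceexpand("train-{000000..000099}.tar"))  # hypothetical shards
    dataset = wds.WebDataset(urls)  # no shuffling, so the sample order is reproducible

    samples_already_seen = 12345  # restored from the training checkpoint

    # Skip the samples consumed before the crash, then keep iterating from there.
    resumed = itertools.islice(iter(dataset), samples_already_seen, None)
    for sample in resumed:
        ...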
    opened by yncxcw 1
  • Updated syntax for multinode usage

    Hi, I'm working on Google Colab and trying to set up a minimal example of multi-core PyTorch training with webdataset using data in GCP buckets. Specifically, I've got a bucket with my training shards and another bucket with test shards.

    For colab setup, I'm using the suggested wheels from the pytorch XLA notebook for multi-core and adding webdataset to the pip installs. I see I've got webdataset 0.2.31 from pip show webdataset in the console.

    I tried to start from (https://webdataset.github.io/webdataset/multinode/) with this code in the map function:

    urls = list(braceexpand.braceexpand(FLAGS['train_urls']))
    dataset = (wds.WebDataset(urls)
                 .to_tuple('x_data.npy', 'y_data.npy')
                 .map(preprocess)
                 .batched(50))
    

    but I'm seeing some strange behavior that I don't understand. Specifically, the docs indicate that a default nodesplitter and split_by_worker should be in effect:

    urls = list(braceexpand.braceexpand("dataset-{000000..000999}.tar"))
    dataset = wds.ShardList(urls, splitter=wds.split_by_worker, nodesplitter=wds.split_by_node, shuffle=False)
    dataset = wds.Processor(dataset, wds.url_opener)
    dataset = wds.Processor(dataset, wds.tar_file_expander)
    dataset = wds.Processor(dataset, wds.group_by_keys)
    

    What I see, however, is that every core processes every file when I run the following in my map function. (The next step would be to investigate worker splitting once we've got shards distributed across cores.)

    loader = DataLoader(dataset, num_workers=FLAGS['num_workers']) 
    for batch, (x,y) in enumerate(loader):
      print(f'core:{rank} batch:{batch}: x_min: {torch.min(x)} x_max: {torch.max(x)} len:{x.size()}')
    

    I saw a related post suggesting the v2 branch (https://github.com/webdataset/webdataset/issues/138), but (if possible) I'd like my dependency to be the main release that comes in under webdataset with pip.

    I also attempted to expand on the wds.Processor approach, but that doesn't seem to be available in this version. At a high level, I'm just looking for the best advice on implementing a fairly vanilla version of this with shard shuffling and sample shuffling. I'm not sure how shard shuffling across cores is coordinated (do you use a parallel loader?), so thoughts on that would be great too. If a minimal working example notebook would improve/clarify this question, I'm happy to provide one.

    opened by matt-gss 0
  • when converting dataset to wds, data is getting larger.

    this is my code:

    with wds.ShardWriter(pattern.format(proc_id), maxsize=int(1e10), maxcount=int(1000000)) as sink:
    
        for i in tqdm(idx_list):
            text, image = old_ds[i]
            key = uuid1().__str__()
            sink.write({
                "__key__": key,  # uuid
                "input.jpg": image,  # PIL image
                "input.txt": text,  # a string
            })
    

    My dataset is composed of 880k image-text pairs and it was 202G in LMDB format (about the same size as small single image files on disk, with images stored as base64). When converting to wds, it became 474G.

    I saw the earlier issues on this, but the dataset still gets larger even when trying tar.gz.

    opened by yli1994 4
  • AttributeError: module 'webdataset' has no attribute 'ShardList'

    From the documentation: https://webdataset.github.io/webdataset/multinode/#splitting-shards-across-nodes-and-workers

    urls = list(braceexpand.braceexpand("dataset-{000000..000999}.tar"))
    dataset = wds.ShardList(urls, splitter=wds.split_by_worker, nodesplitter=wds.split_by_node, shuffle=False)
    dataset = wds.Processor(dataset, wds.url_opener)
    dataset = wds.Processor(dataset, wds.tar_file_expander)
    dataset = wds.Processor(dataset, wds.group_by_keys)
    

    Running this results in

    AttributeError: module 'webdataset' has no attribute 'ShardList'
    
    documentation 
    opened by drscotthawley 2