S2VC

This is the implementation of our paper S2VC: A Framework for Any-to-Any Voice Conversion with Self-Supervised Pretrained Representations. In this paper, we proposed S2VC, which utilizes self-supervised pretrained representations to provide the latent phonetic structure of the source speaker's utterance and the spectral features of the target speaker's utterance.

The following is the overall model architecture.

[Figure: model architecture]

For the audio samples, please refer to our demo page.

Usage

You can download the pretrained model as well as the vocoder from the links under the Releases section.

The whole project was developed with Python 3.8 and torch 1.7.1. The pretrained model and the vocoder were exported to TorchScript, so backward compatibility is not guaranteed. You can install the dependencies with

pip install -r requirements.txt

If you encounter any problems while installing fairseq, please refer to pytorch/fairseq for the installation instructions.

Self-Supervised representations

Wav2vec2

In our implementation, we use Wav2Vec 2.0 Base without fine-tuning, which is trained on LibriSpeech. You can download the checkpoint wav2vec_small.pt from pytorch/fairseq.
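
After downloading the checkpoint, feature extraction with fairseq looks roughly like the following sketch (the checkpoint path and the dummy waveform are placeholders):

import torch
from fairseq import checkpoint_utils

# Load the pretrained wav2vec 2.0 checkpoint (path is a placeholder).
models, cfg, task = checkpoint_utils.load_model_ensemble_and_task(
    ["wav2vec_small.pt"]
)
model = models[0].eval()

# One second of dummy 16 kHz audio; replace with a real waveform tensor.
wav = torch.randn(1, 16000)
with torch.no_grad():
    # features_only=True skips the quantization/contrastive head and
    # returns the context network outputs, of shape (batch, frames, 768).
    features = model(wav, mask=False, features_only=True)["x"]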

APC (Autoregressive Predictive Coding) and CPC (Contrastive Predictive Coding)

These two representations are extracted with the speech toolkit S3PRL. You can check how to extract various representations from that repo.
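
As a rough sketch of what the extraction looks like (this repo pins a specific S3PRL commit in data/feature_extract.py, and the return format can vary between S3PRL versions):

import torch

# Load an upstream model from S3PRL via torch.hub; "apc" can be
# swapped for "cpc". This mirrors the call in data/feature_extract.py.
extractor = torch.hub.load("s3prl/s3prl", "apc").eval()

# S3PRL upstream models take a list of 16 kHz waveform tensors.
wavs = [torch.randn(16000)]
with torch.no_grad():
    # Depending on the S3PRL version, this returns a tensor of
    # frame-level features or a dict of hidden states.
    features = extractor(wavs)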

Vocoder

The WaveRNN-based neural vocoder is from yistLin/universal-vocoder, which is based on the paper Towards Achieving Robust Universal Neural Vocoding.
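
Since the released vocoder checkpoints are TorchScript modules, generating a waveform from a mel spectrogram looks roughly like this sketch (the checkpoint name is a placeholder, and generate() is the batch interface of yistLin/universal-vocoder):

import torch

# Load the TorchScript vocoder (checkpoint name is a placeholder).
vocoder = torch.jit.load("vocoder-ckpt-cpc.pt").eval()

# A dummy (frames, n_mels) mel spectrogram; replace with S2VC output.
mel = torch.randn(200, 80)
with torch.no_grad():
    # generate() takes a list of mels and returns a list of waveforms.
    wav = vocoder.generate([mel])[0]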

Voice conversion with pretrained models

You can convert an utterance from the source speaker with multiple utterances from the target speaker by preparing a conversion-pairs information file in YAML format, like

# pairs_info.yaml
pair1:
    source: VCTK-Corpus/wav48/p225/p225_001.wav
    target:
        - VCTK-Corpus/wav48/p227/p227_001.wav
pair2:
    source: VCTK-Corpus/wav48/p225/p225_001.wav
    target:
        - VCTK-Corpus/wav48/p227/p227_002.wav
        - VCTK-Corpus/wav48/p227/p227_003.wav
        - VCTK-Corpus/wav48/p227/p227_004.wav

Then you can convert multiple pairs at the same time, e.g.

python convert_batch.py \
    -w <WAV2VEC_PATH> \
    -v <VOCODER_PATH> \
    -c <CHECKPOINT_PATH> \
    -s <SOURCE_FEATURE_NAME> \
    -r <REFERENCE_FEATURE_NAME> \
    pairs_info.yaml \
    outputs # the output directory of conversion results

After the conversion, the output directory, outputs, will contain

pair1.wav
pair1.mel.png
pair1.attn.png
pair2.wav
pair2.mel.png
pair2.attn.png

Train from scratch

Preprocessing

You can preprocess multiple corpora by passing multiple paths (like <SECOND_CORPUS_PATH> below), but each path should be a directory that directly contains the speaker directories, and you have to specify the feature you want to extract. Currently, we support apc, cpc, wav2vec2, and timit_posteriorgram, e.g.

python3 preprocess.py \
    VCTK-Corpus/wav48 \
    <SECOND_CORPUS_PATH> \
    <FEATURE_NAME> \
    <WAV2VEC_PATH> \
    processed/<FEATURE_NAME>  # the output directory of preprocessed features

After preprocessing, the output directory will contain:

metadata.json
utterance-000x7gsj.tar
utterance-00wq7b0f.tar
utterance-01lpqlnr.tar
...

You may need to run the preprocessing multiple times for different features, e.g.

python3 preprocess.py \
    VCTK-Corpus/wav48 apc <WAV2VEC_PATH> processed/apc
python3 preprocess.py \
    VCTK-Corpus/wav48 cpc <WAV2VEC_PATH> processed/cpc
...

Then merge the metadata of the different features, e.g.

python3 merger.py processed

Training

python train.py processed \
    --save_dir ./ckpts \
    -s <SOURCE_FEATURE_NAME> \
    -r <REFERENCE_FEATURE_NAME>

You can further specify --preload to preload all training data into RAM to boost training speed. If --comment is specified, e.g. --comment CPC-CPC, the training logs will be placed under a newly created directory like logs/2020-02-02_12:34:56_CPC-CPC; otherwise there won't be any logging. For more details, refer to the usage with python train.py -h.


Comments
  • Cannot find f2114342ff9e813e18a580fa41418aee9925414e in https://github.com/s3prl/s3prl

    Running convert_batch.py throws ValueError: Cannot find f2114342ff9e813e18a580fa41418aee9925414e in https://github.com/s3prl/s3prl that originates from https://github.com/howard1337/S2VC/blob/8a6dcebc052424c41c62be0b22cb581258c5b4aa/data/feature_extract.py#L18

    File "convert_batch.py", line 61, in main
    src_feat_model = FeatureExtractor(src_feat_name, wav2vec_path, device)
    File "/deepmind/experiments/howard1337/s2vc/data/feature_extract.py", line 18, in __init__
    torch.hub.load("s3prl/s3prl:f2114342ff9e813e18a580fa41418aee9925414e", feature_name, refresh=True).eval().to(device)
    File "/storage/usr/conda/envs/s2vc/lib/python3.8/site-packages/torch/hub.py", line 402, in load
    repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose, skip_validation)
    File "/storage/usr/conda/envs/s2vc/lib/python3.8/site-packages/torch/hub.py", line 190, in _get_cache_or_reload
    _validate_not_a_forked_repo(repo_owner, repo_name, branch)
    File "/storage/usr/conda/envs/s2vc/lib/python3.8/site-packages/torch/hub.py", line 160, in _validate_not_a_forked_repo
    raise ValueError(f'Cannot find {branch} in https://github.com/{repo_owner}/{repo_name}. '
    ValueError: Cannot find f2114342ff9e813e18a580fa41418aee9925414e in https://github.com/s3prl/s3prl. If it's a commit from a forked repo, please call hub.load() with forked repo directly.
    

    Any idea on how to solve this?

    opened by jerrymatjila 1
  • Could you provide ppg-extracting code?

    Dear author,

    In your paper, you mentioned that you extracted PPG and SSL features with the s3prl toolkit. However, I cannot find how to extract PPGs in s3prl. Could you provide the code or a guideline for extracting PPGs? Thanks a lot!
    
    opened by hongchengzhu 0
  • What are vocoder-ckpt-*.pt?

    You release the following vocoder checkpoints:

    vocoder-ckpt-apc.pt
    vocoder-ckpt-cpc.pt
    vocoder-ckpt-wav2vec2.pt
    

    What are they?

    Are they vocoders fine-tuned on the output of a particular model? I didn't see that described in the paper. Why is this needed, if the S2VC output is a mel? If it's because different models produce different mels, do you use vocoder-ckpt-cpc.pt when the target model is cpc? And if so, how did you do the fine-tuning?

    opened by turian 0
  • Training with other features (apc, timit_posteriorgram, etc.) does not work

    I have tried training with features other than cpc on my prepared corpus. However, the training script fails when computing the loss function (train.py, line 69). I found that the size of the output vector out is hard-coded, which is inconsistent with the size of the target mel spectrogram for other features.

    The sizes of some of the model's vectors are:

    • apc case: Input dim: 512, Reference dim: 512, Target dim: 240
    • cpc case: Input dim: 256, Reference dim: 256, Target dim: 80

    I prepared the input feature vectors by using preprocess.py, e.g. python .\preprocess.py (my own corpus) apc .\checkpoints\wav2vec_small.pt processed/apc.

    I have modified the model by changing the sizes of the vectors and can run train.py now. In model.py, in the __init__() of the S2VC class, I replaced 80 with a function argument and pass in the mel vector size. But I cannot determine whether the modification is appropriate, as I am not familiar with NLP.
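
    A minimal sketch of the change (the argument name and the layer shown are my own; the real model.py differs):

    import torch.nn as nn

    class S2VC(nn.Module):
        def __init__(self, input_dim, ref_dim, mel_dim=80):
            super().__init__()
            # Every layer that previously used the hard-coded 80
            # now uses mel_dim, e.g.:
            self.out_proj = nn.Linear(input_dim, mel_dim)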

    convert_batch.py with pre-trained models works well as you described in README.md.

    Other details of my situation are:

    • Windows 10, PowerShell
    • pytorch 1.7.1 + cu110
    • torchaudio 0.7.1
    • sox 1.4.1
    • tqdm 4.42.0
    • librosa 0.8.1
    opened by sage-git 0