Stitch together Nanopore tiled amplicon data without polishing a reference

Overview
Stitch together Nanopore tiled amplicon data using a reference guided approach

Tiled amplicon data, such as those produced from primers designed with Primal Scheme, are typically assembled by aligning reads to a reference and polishing that reference into a sequence that represents the reads. This works very well for obtaining a genome with SNPs and small indels representative of the reads. However, in cases where the reads cannot be mapped well to the reference (e.g. genomes containing hypervariable regions between primers), or where large structural variants are expected, this method may fail because polishing tools expect the reference to originate from the reads.

Lilo uses a reference only to assign reads to the amplicon they originated from and to order and orient the polished amplicons; no reference sequence is incorporated into the final assembly. Once reads are assigned to an amplicon, a read with high average base quality and roughly median length for that amplicon is selected as a reference and polished three times with medaka, using up to 300x coverage. The polished amplicons have their primers removed with porechop (fork: https://github.com/sclamons/Porechop-1) and are then assembled with scaffold_builder.
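
For illustration, here is a minimal sketch of what one round of polishing a single amplicon might look like using medaka's standalone consensus wrapper; the file names are hypothetical and Lilo's actual Snakemake rules differ in detail:

# hypothetical inputs: amplicon01.fastq (reads assigned to one amplicon) and
# chosen_read.fasta (the selected high-quality, roughly median-length read)
medaka_consensus -i amplicon01.fastq -d chosen_read.fasta -o amplicon01_polished -t 4
# the polished sequence is written to amplicon01_polished/consensus.fasta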

Lilo has been tested on SARS-CoV-2 with ARTIC V3 primers. It has also been tested on 7kb and 4kb amplicons with ~100-1000bp overlaps for ASFV, PRRSV-1, and PRRSV-2; schemes for these will be made available in the near future.

Requirements not covered by conda

Install Conda :)
Install this fork of porechop and make sure it is in your path: https://github.com/sclamons/Porechop-1
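
For example, one way to install the fork and confirm it is on your PATH (assuming the fork installs like standard Porechop via setup.py; adjust to your environment):

git clone https://github.com/sclamons/Porechop-1
cd Porechop-1
python3 setup.py install
porechop --version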

Installation

git clone https://github.com/amandawarr/Lilo  
cd Lilo  
conda env create --file LILO.yaml  
conda env create --file scaffold_builder.yaml

Usage

Lilo assumes your reads are in a folder called raw/ and have the suffix .fastq.gz. Multiple samples can be processed at the same time.
Lilo requires a config file detailing the locations of a reference, a primer scheme (in the form of a Primal Scheme style bed file), and a primers.csv file (described below).
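
As a rough illustration, the config file might look like the following; the key names here are hypothetical, so check the example config provided with Lilo for the exact names:

reference: /path/to/reference.fasta
scheme: /path/to/scheme.bed
primers: /path/to/primers.csv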

conda activate LILO
snakemake -k -s /path/to/LILO --configfile /path/to/config.file --cores N

It is recommended to run with -k so that one sample with insufficient coverage will not stop the other jobs from completing.

Input specifications

  • config.file: an example config file has been provided.
  • Primer scheme: as output by Primal Scheme, with alt primers removed. A bed file of primer alignment locations. Columns: reference name, start, end, primer name, pool (must end with 1 or 2). An example fragment is shown after this list.
  • Primers.csv: comma delimited, includes alt primers, with a header line. Columns: amplicon_name, F_primer_name, F_primer_sequence, R_primer_name, R_primer_sequence (see the example after this list). If any of the primers contain many degenerate bases, it is recommended to expand these; the script expand.py will expand the described csv into a longer csv with the IUPAC codes expanded.
  • reference.fasta: the same reference used to make the scheme file.
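
For illustration, here are hypothetical fragments of the two files, using ARTIC-style primer names (all values are invented examples; real files come from your own scheme):

scheme.bed (tab delimited, no header):
MN908947.3      30      54      nCoV-2019_1_LEFT        nCoV-2019_1
MN908947.3      385     410     nCoV-2019_1_RIGHT       nCoV-2019_1

primers.csv (comma delimited, with header):
amplicon_name,F_primer_name,F_primer_sequence,R_primer_name,R_primer_sequence
amplicon01,nCoV-2019_1_LEFT,ACCAACCAACTTTCGATCTCTTGT,nCoV-2019_1_RIGHT,CATCTTTAAGATGTTGACGTGCCTC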

Output

Lilo uses the names from raw/ to name the output files. For a file named "sample.fastq.gz", the final assembly will be named "sample_Scaffold.fasta", and files produced during the pipeline will be in a folder called "sample". The output will contain amplicons that had at least 40x full-length coverage. Missing amplicons will be represented by Ns. Any ambiguity at overlaps will be indicated with IUPAC codes.
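
For example, with an input file raw/sample.fastq.gz you would expect output along these lines (illustrative):

raw/sample.fastq.gz  ->  sample_Scaffold.fasta  (final assembly)
                         sample/                (intermediate pipeline files)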

Note

  • Use of the wrong fork for porechop will cause the pipeline to fail.
  • Lilo is a work in progress and has been tested on a limited number of references, amplicon sizes, and overlap sizes; I recommend checking the results carefully for each new scheme.
  • The pipeline currently assumes that any structural variants are contained between the primers of an amplicon and do not change the length of the amplicon by more than 5%. If alt amplicons produce a product of a different length to the original amplicon, they may not be allocated to their amplicon. Making Lilo work better with alt amplicons is on my to-do list.
  • Lilo should not be used with reads produced with rapid kits; the pipeline assumes the reads span the full length of the amplicons.
  • Do let me know if it destroys any cities or steals everyone's left shoe.
Comments
  • Error in rule reporechop:

    Hello, while running the sample dataset I encountered the following error messages. I have made sure that porechop is installed correctly and is in the path.

    Any help is greatly appreciated.

    Error in rule reporechop:
        jobid: 2
        output: FAT94769_pass_barcode02_66883b35_0/polished_trimmed.fa
        shell:
            porechop --adapter_threshold 72 --end_threshold 70 --end_size 30 --extra_end_trim 5 --min_trim_size 3 -f ASFV.primers.csv -i FAT94769_pass_barcode02_66883b35_0/polished_clipped_amplicons.fa --threads 8 --no_split -o FAT94769_pass_barcode02_66883b35_0/polished_trimmed.fa
        (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)

    opened by tboonf 1
  • Error while running LILO

    Dear all, I get the following error while running LILO. Any idea what could be the problem?

    /bin/bash: /home/minion/anaconda3/envs/LILO/etc/profile.d/conda.sh: No such file or directory
    
    CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
    To initialize your shell, run
    
        $ conda init <SHELL_NAME>
    
    Currently supported shells are:
      - bash
      - fish
      - tcsh
      - xonsh
      - zsh
      - powershell
    
    See 'conda init --help' for more information and options.
    
    IMPORTANT: You may need to close and restart your shell after running 'conda init'.
    
    
    /bin/bash: line 2: scaffold_builder.py: command not found
    sed: can't read reads_24h_Scaffold.fasta: No such file or directory
    [Wed Aug 10 11:12:28 2022]
    Error in rule scaffold:
        jobid: 1
        output: reads_24h_Scaffold.fasta
        shell:
            source $CONDA_PREFIX/etc/profile.d/conda.sh
                    conda activate scaffold_builder
                    scaffold_builder.py -i 75 -t 3693 -g 80000 -r /home/minion/lilo-test/ASFV.reference.fasta -q reads_24h/polished_trimmed.fa -p reads_24h
                    sed -i '1 s/^.*$/>reads_24h_Lilo_scaffold/' reads_24h_Scaffold.fasta
            (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
    
    Job failed, going on with independent jobs.
    Exiting because a job execution failed. Look above for error message
    Complete log: /home/minion/lilo-test/.snakemake/log/2022-08-10T111227.425486.snakemake.log
    

    Kind regards, Elisabeth

    opened by el-mat 1
  • LILO with SLURM

    Hi there,

    I'm trying to run LILO on a SLURM HPC and I'm not sure what the errors are related to. Do you have an idea? It seems really environment dependent, but maybe you stumbled across something similar.

    Call:

    snakemake -k -s [...]/tools/Lilo/LILO --configfile $CONFIG --profile [...]/tools/config-snippets/snake-cookies/slurm
    

    Log:

    [...]
    MissingOutputException in line 84 of [...]/tools/Lilo/LILO:
    Job Missing files after 30 seconds:
    FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
    This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
    Job id: 133673 completed successfully, but some output files are missing. 133673
    Trying to restart job 133673.
    [...]
    Error in rule assign:
        jobid: 133673
        output: FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
        shell:
            bedtools intersect -F 0.9 -wa -wb -bed -abam FAR95540_pass_unclassified_7f618209_73/alignments/reads_to_ref.bam -b amplicons.bed  | grep amplicon51 - | awk '{print $4}' - | seqtk subseq porechop/FAR95540_pass_unclassified_7f618209_73.fastq.gz - > FAR95540_pass_unclassified_7f618209_73/split/amplicon51.fastq
            (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
        cluster_jobid: 210115
    
    Error executing rule assign on cluster (jobid: 133673, external: 210115, jobscript: [...]/.snakemake/tmp.cssfeg5e/snakejob.assign.133673.sh). For error details see the cluster log and the log files of the involved rule(s).
    [...]
    Traceback (most recent call last):
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/__init__.py", line 701, in snakemake
        success = workflow.execute(
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/workflow.py", line 1077, in execute
        success = self.scheduler.schedule()
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 441, in schedule
        self._error_jobs()
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 557, in _error_jobs
        self._handle_error(job)
      File "/scratch/lataretum/miniconda3/envs/LILO/lib/python3.8/site-packages/snakemake/scheduler.py", line 615, in _handle_error
        self.running.remove(job)
    KeyError: assign
    

    With --latency-wait 90 set, it again breaks after some time at an assign rule, with a KeyError: read_select from the snakemake scheduler.

    Let me know which input/config files might be interesting to solve this. :)

    opened by MarieLataretu 7
Releases(v0.2)
Owner
Amanda Warr