A tool to visualise the results of AlphaFold2 and inspect the quality of structural predictions

AlphaFold Analyser

Overview

This program produces high-quality visualisations of protein structures predicted by AlphaFold. These visualisations allow the user to view the pLDDT of each residue of a protein structure and the predicted aligned error (PAE) for the entire protein, so that the quality of a predicted structure can be assessed rapidly.

Dependencies

  • Python 3.7
  • AlphaFold 2.0.0
  • PyMOL 2.5.2
  • Matplotlib 3.4.2

Installing AlphaFold Analyser on Linux & macOS

At the command line, change to the directory where alphafold-analyser.py was downloaded, using the full path name.

cd <download-directory>

Now move the file to where you normally keep your binaries. This directory should be in your PATH. Note: you may require administrative privileges to do this (either by switching to the root user or by using sudo).

As root:

mv alphafold-analyser.py /usr/local/bin/

As regular user:

sudo mv alphafold-analyser.py /usr/local/bin/

alphafold-analyser.py should now run from the shell or Terminal using the command alphafold-analyser.py.
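If the command is not recognised, you may also need to make the script executable (for example with chmod +x /usr/local/bin/alphafold-analyser.py); this assumes the script begins with a Python shebang line.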

Alternatively, alphafold-analyser.py can be run directly from an IDE.

AlphaFold Settings for the Analyser

For the programme to function correctly, the --model_names parameter should label the first two models in AlphaFold as model_1 and model_2_ptm. An example of how this parameter should be written when running AlphaFold is shown below.

--model_names=model_1,model_2_ptm,model_3,model_4,model_5 \

model_2_ptm is used to collect the data required to plot the Predicted Aligned Error.

All files output by AlphaFold are stored in a single directory. However, only the ranked_0.pdb and results_model_2_ptm.pkl files are needed for analysis.
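For illustration, a minimal input directory therefore only needs to contain the following (any other AlphaFold outputs may be present but are ignored; the directory name here is hypothetical):

my_protein/
    ranked_0.pdb
    results_model_2_ptm.pkl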

Running AlphaFold Analyser

A directory should be created containing all necessary files (see above). AlphaFold Analyser will then ask for the following inputs:

Input Directory: The file path for the directory containing the AlphaFold results files.

Output Directory: The file path for the directory where the Analyser results will be stored.

Protein: The name of the protein being analysed. This will be used to label all files and the directory created during the analysis.
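For illustration, a session might look like the following (the prompt wording and paths are hypothetical; only the three inputs above are requested):

alphafold-analyser.py
Input Directory: /path/to/alphafold/output/my_protein
Output Directory: /path/to/analyser/results
Protein: my_protein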

Outputs

AlphaFold Analyser produces two outputs:

  • A PyMOL session labelled with the protein input (e.g. protein.pse). This contains the highest-confidence structure predicted by AlphaFold, with individual residues coloured according to their pLDDT on a colour spectrum from yellow to green to blue (low to high confidence).
  • A predicted aligned error plot, again labelled with the protein input (e.g. protein-pae.png). The plot is coloured by the predicted error for each residue pair, using the same colour scheme as the PyMOL session.
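As a reference for how the pLDDT colouring can be reproduced by hand, the sketch below uses PyMOL's Python API directly. It relies on the fact that AlphaFold writes each residue's pLDDT into the B-factor column of ranked_0.pdb; it is a minimal sketch, not the Analyser's own implementation.

from pymol import cmd  # requires a PyMOL installation with Python bindings

# AlphaFold stores per-residue pLDDT in the B-factor column of the PDB file,
# so colouring by B-factor gives a low-to-high confidence ramp.
cmd.load("ranked_0.pdb", "protein")
# "yellow green blue" is a custom three-colour ramp; recent PyMOL versions
# accept a space-separated list of colours as the palette argument.
cmd.spectrum("b", "yellow green blue", "protein", minimum=0, maximum=100)
cmd.save("protein.pse")

The predicted aligned error matrix can likewise be read straight from the ptm model's pickle with standard Python and Matplotlib. This sketch assumes the pickle holds a 'predicted_aligned_error' array, as AlphaFold's ptm models produce; it is not the Analyser's own plotting code and does not reproduce its yellow-green-blue colour scheme.

import pickle

import matplotlib.pyplot as plt

# The ptm model's result pickle holds the predicted aligned error as an
# (n_residues x n_residues) array of expected position errors in Angstroms.
with open("results_model_2_ptm.pkl", "rb") as handle:
    results = pickle.load(handle)

pae = results["predicted_aligned_error"]

fig, ax = plt.subplots()
image = ax.imshow(pae)
ax.set_xlabel("Scored residue")
ax.set_ylabel("Aligned residue")
fig.colorbar(image, label="Expected position error (Å)")
fig.savefig("protein-pae.png", dpi=300)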
Comments

Future work may involve allowing for multiple inputs at once.
