Machine learning for NeuroImaging in Python

Overview

nilearn

Nilearn enables approachable and versatile analyses of brain volumes. It provides statistical and machine-learning tools, with instructive documentation and a friendly community.

It supports general linear model (GLM) based analysis and leverages the scikit-learn Python toolbox for multivariate statistics with applications such as predictive modelling, classification, decoding, or connectivity analysis.

Important links

Dependencies

The required dependencies to use the software are:

  • Python >= 3.5,
  • setuptools
  • Numpy >= 1.11
  • SciPy >= 0.19
  • Scikit-learn >= 0.19
  • Joblib >= 0.12
  • Nibabel >= 2.0.2

If you are using nilearn plotting functionalities or running the examples, matplotlib >= 1.5.1 is required.

If you want to run the tests, you need pytest >= 3.9 and pytest-cov for coverage reporting.

Install

First make sure you have installed all the dependencies listed above. Then you can install nilearn by running the following command in a command prompt:

pip install -U --user nilearn
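A quick way to confirm the install worked is to check that the package is importable (a minimal sketch; any import failure here means the installation did not succeed):

```python
# Quick post-install sanity check: verify the nilearn package can be found
import importlib.util

installed = importlib.util.find_spec("nilearn") is not None
print("nilearn importable:", installed)
```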

More detailed instructions are available at http://nilearn.github.io/introduction.html#installation.

Development

Detailed instructions on how to contribute are available at http://nilearn.github.io/development.html

Comments
  • [ENH] Initial visual reports

    Closes #2022 .

    An initial implementation of visual reports for Nilearn. Adds:

    • [x] The templating library tempita as an external dependency
    • [x] A reorganization of HTMLDocument into a new reporting module
    • [x] A new reporting HTML template
    • [x] A super class Report to populate the report HTML template with tempita populated text
    • [x] Relevant CSS styling to improve report UX, using pure-css
    • [x] An ability to display reports directly in Jupyter Notebooks, without iframe rendering, thanks to @GaelVaroquaux
    • [x] Documentation of this functionality, with examples
    • [x] A new Sphinx Gallery image scraper to embed these example HTML reports

    For a current rendering of reports see: https://github.com/emdupre/nilearn/pull/4#issuecomment-527984327 and the plot_mask_computation example.

    opened by emdupre 172
  • switch from papaya to brainsprite in plotting.view_stat_map

I really love the new 3D interactive viewer (plotting.view_stat_map), but the notebooks it produces are huge. In this PR, I propose to switch from papaya to brainsprite, a js library I developed for the exact purpose of embedding lightweight 3D viewers in html pages (http://github.com/simexp/brainsprite.js).

The first difference with papaya is that brainsprite uses a jpg or png containing all sagittal slices of a volume, plus json metadata, to store the brain images. That tends to be much smaller than a nifti (depending on the numerical precision of the nifti). It also means that brainsprite can render brains with core html5 features and no dependencies. So the brainsprite library weighs 15kb (500 lines...), as opposed to 2Mb for the current papaya html template. I have attached two brain viewers embedded in jupyter notebooks. The papaya-based notebook is 12Mb, while the brainsprite-based notebook is 500kb. Again, this reflects a core difference in design: papaya is a full brain viewer app, featuring nifti reading, a colorbar, etc. Brainsprite is a minimal, fast brain viewer working from a pre-generated sprite.

This leads to the second point: all the work of generating the brain volume happens in python. A new function called save_sprite generates the brain sprite as well as the json metadata. It relies on matplotlib, as well as nilearn's own functions. In particular, thresholding, colormap generation, and resampling are all done with nilearn's code. This means it will be easier for nilearn's developers to maintain and evolve. The current version replicates all the arguments of plot_stat_map, including draw_cross, annotate, cut_coords and a few others (with a few bonuses, such as opacity).
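The sprite idea described above can be sketched with numpy (illustrative only: make_sprite is a hypothetical helper, and the real save_sprite may lay out slices and metadata differently):

```python
# Minimal sketch of building a brainsprite-style mosaic: tile every sagittal
# slice of a 3D volume into one 2D image, plus JSON metadata describing the
# grid. (Hypothetical helper; nilearn's actual save_sprite may differ.)
import json
import math

import numpy as np

def make_sprite(vol):
    """Tile all sagittal (x-axis) slices of `vol` into a near-square mosaic."""
    nx, ny, nz = vol.shape
    ncol = math.ceil(math.sqrt(nx))
    nrow = math.ceil(nx / ncol)
    sprite = np.zeros((nrow * nz, ncol * ny), dtype=vol.dtype)
    for i in range(nx):
        r, c = divmod(i, ncol)
        # each sagittal slice is a (ny, nz) array; transpose so z runs down rows
        sprite[r * nz:(r + 1) * nz, c * ny:(c + 1) * ny] = vol[i].T
    meta = {"nbSlice": {"X": nx, "Y": ny, "Z": nz}, "rows": nrow, "cols": ncol}
    return sprite, json.dumps(meta)

vol = np.random.rand(4, 5, 6)       # toy volume instead of a real nifti
sprite, meta = make_sprite(vol)
print(sprite.shape)                 # (12, 10): 2 rows of 6, 2 columns of 5
```

Saving `sprite` as a png and shipping `meta` alongside it is all the viewer needs, which is what keeps the resulting notebooks small.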

This PR is far from polished; there are a few outstanding issues here. I also need to look into the docs and testing. Finally, I dumped some functions in html_stat_map.py which should probably live elsewhere. But I think it is time to get feedback, and in particular I'd like to know whether there is interest in merging this PR at all...

    opened by pbellec 166
  • [MRG] Cortex surface projections

    Hello @GaelVaroquaux , @mrahim , @agramfort, @juhuntenburg and others, this is a PR about surface plotting.

    nilearn has some awesome functions to plot surface data in nilearn.plotting.surf_plotting. However, it doesn't offer a conversion from volumetric to surface data.

    It would be great to add a function to sample or project volumetric data on the nodes of a cortical mesh; this would allow users to look at surface plots of their 3d images (e.g. statistical maps).

    In this PR we will try to add this to nilearn.

    Most tools which offer this functionality (e.g. caret, freesurfer, pycortex) usually propose several projection and sampling strategies, offering different quality / speed tradeoffs. However, it seems to me that naive strategies are not so far behind more elaborate ones - see for example [Operto, Grégory, et al. "Projection of fMRI data onto the cortical surface using anatomically-informed convolution kernels." Neuroimage 39.1 (2008): 127-135]. For plotting and visualisation, the results of a simple strategy are probably accurate enough for most users.

I therefore suggest starting with a very simple and fast projection scheme; we can add more elaborate ones later if we want. I'm just getting started, but I think we can already start a discussion.

    The proposed strategy is simply to draw a sample from a 3mm sphere around each mesh node, and average the measures.

The image below illustrates that strategy: each red circle is a mesh node. Samples are drawn from the blue crosses attached to it that fall inside the image, then averaged to compute the color inside the circle. (This image is produced by the show_sampling.py example, which is only there to clarify the strategy implemented in this PR and will be removed.)
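The strategy can be sketched in numpy (project_to_mesh is a hypothetical helper: node coordinates are assumed to already be in voxel space, and nearest-voxel lookup stands in for proper interpolation):

```python
# Sketch of the projection strategy described above: for each mesh node,
# sample the volume at fixed offsets inside a sphere and average the values.
# (Hypothetical helper; the PR's real implementation differs.)
import numpy as np

def project_to_mesh(vol, nodes, radius=3.0, n_samples=20, seed=0):
    rng = np.random.default_rng(seed)
    # random offsets inside a ball of the given radius (voxel units here):
    # gaussian direction times cube-root-distributed radius
    offsets = rng.normal(size=(n_samples, 3))
    offsets /= np.linalg.norm(offsets, axis=1, keepdims=True)
    offsets *= radius * rng.uniform(size=(n_samples, 1)) ** (1 / 3)
    values = np.empty(len(nodes))
    for i, node in enumerate(nodes):
        pts = np.rint(node + offsets).astype(int)
        # keep only samples that fall inside the image
        ok = np.all((pts >= 0) & (pts < vol.shape), axis=1)
        values[i] = vol[pts[ok, 0], pts[ok, 1], pts[ok, 2]].mean()
    return values

vol = np.ones((10, 10, 10))
nodes = np.array([[5.0, 5.0, 5.0], [1.0, 1.0, 1.0]])
print(project_to_mesh(vol, nodes))  # [1. 1.] for a constant image
```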

    illustration_2d

    Here is an example surface plot for a brainpedia image (id 32015 on Neurovault, https://neurovault.org/media/images/1952/task007_face_vs_baseline_pycortex/index.html), produced by brainpedia_surface.py:

    brainpedia_inflated

    And here is the plot produced by pycortex for the same image, as shown on Neurovault:

    brainpedia_pycortex

Note about performance: to choose the positions of the samples to draw from a unit ball, for now we cluster points drawn from a uniform distribution on the ball and keep the centroids (we can think of something better). This takes a few seconds, and the results are cached with joblib for the time being; but since it only needs to be done once, once we have decided how many samples we want, the positions will be hardcoded once and for all (no computing, no caching). With 100 samples per ball, projecting a stat map of the full brain with 2mm voxels onto an fsaverage hemisphere mesh takes around 60 ms.
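The sample-position choice described above can be sketched as follows (a plain Lloyd/k-means loop here stands in for whatever clustering the PR actually uses):

```python
# Sketch of choosing sample positions: draw many points uniformly in the unit
# ball, run a few k-means (Lloyd) iterations, and keep the centroids, which
# could later be hardcoded once the sample count is settled.
import numpy as np

rng = np.random.default_rng(0)
# uniform points in the unit ball: gaussian direction x cube-root radius
d = rng.normal(size=(5000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
points = d * rng.uniform(size=(5000, 1)) ** (1 / 3)

k = 100
centroids = points[rng.choice(len(points), k, replace=False)]
for _ in range(20):  # plain Lloyd iterations
    labels = np.argmin(((points[:, None] - centroids) ** 2).sum(-1), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = points[labels == j].mean(axis=0)

# centroids are means of points in the ball, so they stay inside it
print(np.linalg.norm(centroids, axis=1).max() < 1.0)  # True
```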

    opened by jeromedockes 79
  • (WIP) Sparse models: S-LASSO and TV-l1

    • Supports TV-l1 and S-LASSO priors
    • Supports logistic and squared losses
    • Has cross validation
    • Can automatically select alpha by CV (+ automatic computation of useful alpha ranges for the CV)
    • Warning: User must supply l1_ratio
    opened by dohmatob 78
  • Add a Neurovault fetcher.

This is based on PR #832 opened by @bcipolli in answer to issue #640. The contribution is to add a fetcher for downloading selected images from http://neurovault.org/. I have tried to address some of the remarks that were made in the discussion about #832. I have included the examples plot_ica_neurovault.py and plot_neurovault_meta_analysis.py from the previous PR; they remain almost identical.

    The interface to the fetcher is similar to that proposed in #832. I have kept the possibility to filter collections and images either with a function or with a dictionary of {field: desired_value} pairs (or both), but they are now separate arguments called respectively collection_filter (image_filter for images) and collection_terms (image_terms for images). Also, now the filters passed in a dictionary are only inserted in the query URL if they are actually available on the server (which is the case for the collection owner, the collection DOI, and the collection name); otherwise they are applied to the metadata once it has been downloaded.

    For users who want a specific set of image or collection ids, downloading all the Neurovault metadata and filtering on the ids is inefficient; so they can use the image_ids or collection_ids parameters to pass the lists of ids they want to download; in this case any filter is ignored and the server is queried directly for the required image and collection ids. Note: this is also done under the hood if the collection id is used as a filter term - either by specifying the collection 'id' field or the image 'collection_id' field - except that in this case the other filters are still applied.

I have included a ResultFilter class and a bunch of special values such as NotNull which can be used to specify filters more easily. In some neurovault metadata jsons, some values are strings such as "", "null", "None"... instead of an actual null; these (more precisely, strings that match ($|n/?a$|none|null) case-insensitively) are replaced by a true null value (null in the json, converted to None when loaded into a python dict) upon download, so that comparing to None or testing for truth yields the expected results. In particular, dict.get(field) will give the same value whether the original value was null, "null", ..., or plain missing.
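The {field: desired_value} term filtering described above can be sketched in pure Python (NotNull mimics the special value mentioned in the PR; `matches` is an illustrative stand-in, and the real ResultFilter class is richer than this):

```python
# Pure-Python sketch of {field: desired_value} term filtering as described
# above. (Illustrative only; not the PR's actual implementation.)
class NotNull:
    """Special value that matches anything except None."""
    def __eq__(self, other):
        return other is not None

def matches(metadata, terms):
    """True if every {field: wanted} pair matches the metadata dict."""
    return all(wanted == metadata.get(field) for field, wanted in terms.items())

images = [
    {"id": 1, "map_type": "T map", "doi": "10.x/abc"},
    {"id": 2, "map_type": "ROI/mask", "doi": None},
]
kept = [im for im in images if matches(im, {"map_type": "T map", "doi": NotNull()})]
print([im["id"] for im in kept])  # [1]
```

Because "null"-like strings are normalized to None on download, `metadata.get(field)` behaves the same whether the field was null, "null", or missing, which is what makes this kind of comparison reliable.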

This should make using fetch_neurovault somewhat easier. However, for users who are interested in a large subset of the neurovault data and are not too short on disk space, I would recommend using only very simple filters (e.g. the defaults) when calling fetch_neurovault to download (almost) all the data, and, once it is on disk, only accessing it through read_sql_query, local_database_connection or local_database_cursor. The metadata is stored in an sqlite database, so instead of reading the docstring for fetch_neurovault, writing their own filters, etc., most users will probably prefer to download it all and then simply use SQL syntax to select the subset they're interested in. read_sql_query queries the database and returns the result as an OrderedDict of columns (the default) or as a list of rows. local_database_connection gives a connection to the sqlite file, so that pandas users can load e.g. all the images' metadata by typing:

    images = pandas.read_sql_query(
         "SELECT * FROM images", neurovault.local_database_connection())
    

    Of course if they prefer manipulating sqlite3 objects directly they can use the connection given by local_database_connection or the cursor given by local_database_cursor.

    @bcipolli, @chrisfilo, @GaelVaroquaux, or anyone else, please let me know what modifications need to be made! In particular:

    • What should be the default filters? The current behaviour is to exclude:
      • Collections from a list of known bad collections (found in PR #832, with one addition).
      • Empty collections.
      • Images from a list of known bad images (found in PR #832).
      • Images that are not in MNI space.
      • Images for which the metadata field 'is_valid' is cleared.
      • Images for which the metadata field 'is_thresholded' is set.
      • Images for which 'map_type' is 'ROI/mask', 'anatomical' or 'parcellation'
      • Images for which 'image_type' is 'atlas'
    • Should the fetcher, by default, download all the images matching the filters, or only a limited number (the current behaviour)?
    opened by jeromedockes 67
  • [MRG] Glass brain visualisation

    This pull request is WIP for now and was only opened to get some feedback, both visualisation-wise and code-wise.

Here is how it looks at the moment: glass_brain

    The code to generate the plots is there.

    New feature Plotting 
    opened by lesteve 64
  • Surface plots

    Hi again,

    This PR replaces #2454 which is a continuation of #1730 submitted by @dangom to resolve #1722. The intention is to check that everything other than the symmetric_cbar is working properly and merge the surface montages into master. I'll open an issue for the symmetric_cbar and fix that separately.

    Also, thank you so much @GaelVaroquaux and @effigies for your help creating the PR properly.

    opened by ZviBaratz 62
  • Brainomics/Localizer

    opened by DimitriPapadopoulos 60
  • SpaceNet (this PR succeeds PR #219)

    This PR succeeds PR #219 (aka unicorn factory). All discussions should be done here henceforth. #219 is now classified, and should be referred to solely for histological purposes.

    opened by dohmatob 54
  • [ENH, MRG] fREM

    Following the merge of #2000, here is the introduction of fREMClassifier and fREMRegressor objects which run pipelines with clustering, feature selection and ensembling of best models across a grid of parameters.

• The file changes are minor.
• The current implementation yields the expected results on an example (see the plot_haxby_tutorial output attached below).
• But the test of fREM accuracy on small datasets in test_decoder.py keeps failing, simply because fREM's accuracy is not good enough in this setting; after trying various parameters I don't know which tests would be better suited to this use case. Anybody?
• If I remember correctly you wanted to replace Decoder with fREM in many examples to alleviate their computational cost @GaelVaroquaux. Which ones are the main targets? I will benchmark how much time we could gain, but on ROIs clustering doesn't seem very useful (e.g. it slows things down on the Haxby ROI example).
(screenshot attached: 2020-03-05 at 18:40)
    opened by thomasbazeille 52
  • [MRG] Dictionary learning + nilearn.decomposition refactoring

Decomposition estimators (DictLearning / MultiPCA) now inherit from a DecompositionEstimator.

Loading of data is done through a PCAMultiNiftiMasker, which loads data from files and compresses it.

Potentially, the function check_masker could solve issue #688, as it factorizes the input checking of estimators to which you provide either a masker or parameters for a mask. It is tuned to be able to use PCAMultiNiftiMasker.

    opened by arthurmensch 51
  • Collaboration with NiiVue and possibilities for integration

    The NiiVue project uses WebGL 2.0 to provide interactive web-based visualization capabilities for viewing medical imaging. We have started to meet with NiiVue developers to discuss possible avenues for integration of NiiVue into Nilearn. They have started working on a Python interface so one path to take would be to work on building this with them. We can also start by providing examples of how Nilearn users can make use of NiiVue functionality. Some relevant repositories include https://github.com/niivue/niivue and https://github.com/niivue/ipyniivue. NiiVue demos can be found here: https://niivue.github.io/niivue/. We can keep track of progress and decisions made to advance this collaboration here as well as have a general discussion on benefits of the integration.

    Enhancement Discussion 
    opened by ymzayek 1
  • Remove ci-skip action from GitHub Actions workflows

    GitHub Actions now supports skipping workflows out of the box: https://docs.github.com/en/actions/managing-workflow-runs/skipping-workflow-runs and so the Action mstachniuk/ci-skip is deprecated and should be removed from all workflows before it starts failing.

    Infrastructure 
    opened by ymzayek 4
  • [ENH] Flat maps for all fsaverage resolutions

    Closes #3171 .

If we're happy with the flat maps generated in #3171 for all fsaverage resolutions, I suggest the following roadmap to integrate them into nilearn:

    • [x] clean up my current script so that it generates flat maps for both hemispheres and all fsaverage resolutions
    • [ ] publish it somewhere? (as a gist maybe? I'm happy to hear suggestions here)
    • [x] run the script I used in #2815 to generate our fsaverage tarballs, and add flat maps to it
    • [ ] update our OSF datasets for fsaverage 3-7
    • [ ] have at least one example's thumbnail display this feature (which is why this PR leverages unmerged changes from #3173, as I want this thumbnail to show curvature sign as well)

    As a reminder, here is what the current flat maps look like for fsaverage 3 to 7:

(flat map renderings for fsaverage 3 through 7 attached)

    opened by alexisthual 7
  • `filter` option in `signal.clean` is not exposed to `nilearn.maskers.NiftiMasker` and potentially other masker objects

However, I don't think a filter option is provided for nilearn.maskers.NiftiMasker, so it seems to be using the default, i.e. butterworth.

    Originally posted by @DasDominus in https://github.com/nilearn/nilearn/issues/3434#issuecomment-1333064573
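For reference, the default butterworth filtering that signal.clean applies can be approximated with scipy (a rough sketch: the band edges, filter order, and zero-phase filtering here are assumptions, not the masker's internals):

```python
# Rough sketch of butterworth band-pass cleaning of an fMRI time series,
# approximating what signal.clean does by default. (Assumed parameters;
# not nilearn's exact implementation.)
import numpy as np
from scipy import signal as sp_signal

t_r = 2.0                     # repetition time in seconds
fs = 1.0 / t_r                # sampling frequency in Hz
low_pass, high_pass = 0.1, 0.01   # typical band edges users pass to clean

# design a band-pass Butterworth filter and apply it without phase shift
b, a = sp_signal.butter(5, [high_pass, low_pass], btype="band", fs=fs)
x = np.random.default_rng(0).normal(size=200)  # toy signal, one voxel
filtered = sp_signal.filtfilt(b, a, x)
print(filtered.shape)  # (200,)
```

Exposing the `filter` argument on the maskers would let users pick a different strategy (or none) instead of always falling back to this default.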

    Good first issue effort: low 
    opened by htwangtw 6
  • All CI failing because flake8 --diff option has been removed

All CI are currently failing because flake8==6.0.0 (released on Nov 23) no longer has the --diff flag.

    The reason has been explained in this issue.

    A solution which is an adaptation of this comment is replacing the flake8 --diff from build_tools/flake8_diff.sh#L79 with the following command:

    git diff --name-only $COMMIT | grep '\.py$' | xargs --delimiter='\n' --no-run-if-empty flake8 --show-source
    

    Opened PR #3432

    Bug 
    opened by RaphaelMeudec 2