Python-based Space Physics Environment Data Analysis Software

Overview

pySPEDAS

pySPEDAS is an implementation of the SPEDAS framework for Python.

The Space Physics Environment Data Analysis Software (SPEDAS) framework is written in IDL and contains data loading, data analysis and data plotting tools for various scientific missions (NASA, NOAA, etc.) and ground magnetometers.

Please see our documentation at:

https://pyspedas.readthedocs.io/

Projects Supported

pySPEDAS includes load routines for a wide range of missions, including THEMIS, MMS, Parker Solar Probe, STEREO, OMNI, POES, the Van Allen Probes (RBSP), ERG (Arase), and Solar Orbiter, among others; see the documentation for the full list.

Requirements

Python 3.7+ is required.

We recommend Anaconda, which comes with a suite of packages useful for scientific data analysis. Step-by-step instructions for installing Anaconda are available for Windows, macOS, and Linux.

Installation

Setup your Virtual Environment

To avoid potential dependency issues with other Python packages, we suggest creating a virtual environment for pySPEDAS; you can create a virtual environment in your terminal with:

python -m venv pyspedas

To enter your virtual environment, run the 'activate' script:

Windows

.\pyspedas\Scripts\activate

macOS and Linux

source pyspedas/bin/activate

Using Jupyter notebooks with your virtual environment

To make your virtual environment available in Jupyter, run the following with the environment activated:

pip install ipykernel
python -m ipykernel install --user --name pyspedas --display-name "Python (pySPEDAS)"

(note: "pyspedas" is the name of your virtual environment)

Then, once you open the notebook, go to "Kernel", then "Change kernel", and select the one named "Python (pySPEDAS)".

Install

pySPEDAS supports Windows, macOS, and Linux. To get started, install the pyspedas package from PyPI:

pip install pyspedas

Upgrade

To upgrade to the latest version of pySPEDAS:

pip install pyspedas --upgrade

Local Data Directories

The recommended way of setting your local data directory is to set the SPEDAS_DATA_DIR environment variable. SPEDAS_DATA_DIR acts as a root data directory for all missions, and will also be used by IDL (if you're running a recent copy of the bleeding-edge SPEDAS).

Mission-specific data directories (e.g., MMS_DATA_DIR for MMS, THM_DATA_DIR for THEMIS) can also be set; these override SPEDAS_DATA_DIR.
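
For example (a minimal sketch; the paths are placeholders), the variables can be set in Python before pyspedas is imported, since the mission configuration is read at import time:

import os

# Placeholder paths -- adjust to your system
os.environ['SPEDAS_DATA_DIR'] = '/path/to/data'   # root data directory for all missions
os.environ['MMS_DATA_DIR'] = '/path/to/mms_data'  # mission-specific override for MMS

import pyspedas  # import after setting the variables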

Usage

To get started, import pyspedas and pytplot:

import pyspedas
from pytplot import tplot

You can load data into tplot variables by calling pyspedas.mission.instrument(), e.g.,

To load and plot 1 day of THEMIS FGM data for probe 'd':

thm_fgm = pyspedas.themis.fgm(trange=['2015-10-16', '2015-10-17'], probe='d')

tplot(['thd_fgs_gse', 'thd_fgs_gsm'])

To load and plot 2 minutes of MMS burst mode FGM data:

mms_fgm = pyspedas.mms.fgm(trange=['2015-10-16/13:05:30', '2015-10-16/13:07:30'], data_rate='brst')

tplot(['mms1_fgm_b_gse_brst_l2', 'mms1_fgm_b_gsm_brst_l2'])

Note: by default, pySPEDAS loads all data contained in CDFs found within the requested time range; this can potentially load data outside of your requested trange. To remove the data outside of your requested trange, set the time_clip keyword to True.
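
For example, to repeat the THEMIS FGM load above with clipping enabled:

thm_fgm = pyspedas.themis.fgm(trange=['2015-10-16', '2015-10-17'], probe='d', time_clip=True)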

To load and plot 6 hours of PSP SWEAP/SPAN-i data:

spi_vars = pyspedas.psp.spi(trange=['2018-11-5', '2018-11-5/06:00'], time_clip=True)

tplot(['DENS', 'VEL', 'T_TENSOR', 'TEMP'])

To download 5 days of STEREO magnetometer data (but not load them into tplot variables):

stereo_files = pyspedas.stereo.mag(trange=['2013-11-1', '2013-11-6'], downloadonly=True)

Standard Options

  • trange: two-element list specifying the time range of interest. This keyword accepts a wide range of formats
  • time_clip: if set, clip the variables to the exact time range specified by the trange keyword
  • suffix: string specifying a suffix to append to the loaded variables
  • varformat: string specifying which CDF variables to load; accepts the wild cards * and ?
  • varnames: string specifying which CDF variables to load (exact names)
  • get_support_data: if set, load the support variables from the CDFs
  • downloadonly: if set, download the files but do not load them into tplot
  • no_update: if set, only load the data from the local cache
  • notplot: if set, load the variables into dictionaries containing numpy arrays (instead of creating the tplot variables)
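
For example, several of these options can be combined in a single call; a sketch based on the THEMIS FGM load above (the suffix value and variable pattern are arbitrary illustrations):

thm_fgm = pyspedas.themis.fgm(
    trange=['2015-10-16', '2015-10-17'],
    probe='d',
    varformat='*fgs_gse*',  # load only CDF variables matching this pattern
    suffix='_demo',         # appended to each loaded tplot variable name
    time_clip=True,         # clip to the exact trange
    no_update=True)         # use only the local cache; don't check the server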

Getting Help

To find the options supported, call help on the instrument function you're interested in:

help(pyspedas.themis.fgm)

You can ask questions by creating an issue or by joining the SPEDAS mailing list.

Contributing

We welcome contributions to pySPEDAS; to learn how you can contribute, please see our Contributing Guide.

Code of Conduct

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. To learn more, please see our Code of Conduct.

Additional Information

For examples of pyspedas, see: https://github.com/spedas/pyspedas_examples

For MMS examples, see: https://github.com/spedas/mms-examples

For pytplot, see: https://github.com/MAVENSDC/PyTplot

For cdflib, see: https://github.com/MAVENSDC/cdflib

For SPEDAS, see http://spedas.org/

Comments
  • doesn't load in STATIC energy / phi / theta / etc data

I was trying to use pyspedas to pull in STATIC data, but I noticed large discrepancies between the data pulled by pySPEDAS and the data pulled in IDL. Looking at the CDF file, it seems that pyspedas is ignoring the file's metadata and just pulling variables/support variables. Unfortunately, for STATIC, important parameters (like energy, phi, theta, etc.) are given in metadata rather than in variables/support variables.

    Please enable reading in of the metadata parameters so STATIC data can be used to the full extent in python as well as in IDL.

    Thank you!

    opened by NeeshaRS 25
  • RBSP Hope file loading UnicodeDecodeError

    Trying to call pyspedas.rbsp.hope(trange = [date1, date2], probe = "a", datatype = "moments", level = "l3", notplot = True) throws a <class 'UnicodeDecodeError'> for SOME dates, but not all. For example, 2018-11-6 and 2018-11-9 fail but 2018-11-7 does not. 'ascii' codec can't decode byte 0xe0 in position 0: ordinal not in range(128).

    I was using notplot = True to get a dictionary out, if that matters.

    opened by kaltensi 6
  • For poes data

    Hello all,

I am trying to use POES-MEPED data with pyspedas. I ran pyspedas.poes.load(trange), but found that pyspedas does not load the POES flux data, for example 'mep_ele_flux'. When I read the individual CDF files, the 'mep_ele_flux' variable is there.

    Could you check all the POES data is properly loaded?

    Best regards, Inchun

    opened by chondrite1230 5
  • ylim does not work for a log axis?

    Although this should be asked on the pytplot page, let me ask here as I found (and confirmed) it when plotting THEMIS data. Recently my colleagues and I realized that pytplot.ylim() does not work properly for a tplot variable with ylog=1. Looking into the problem by testing with THEMIS/ESA data, we found:

    • spectrum-type data with a 2-D "v" array (e.g., shape (times, 32)): ylim works
    • spectrum-type data with a 1-D "v" array (e.g., shape (32,)): ylim does NOT work as expected

    Isn't this a bug in pytplot or pyspedas?

    Copied below are the commands that reproduce the above latter case on my environment (macOS 10.14.6, Python 3.9.6, pytplot 1.7.28, pyspedas 1.2.8).

    import pyspedas
    import pytplot

    pyspedas.themis.spacecraft.particles.esa.esa(varformat='peir')
    pytplot.options('thc_peir_en_eflux', 'zlog', 1)
    pytplot.options('thc_peir_en_eflux', 'ylog', 1)
    element_thc_peir_en_eflux = pytplot.get_data('thc_peir_en_eflux')
    thc_peir_flux_test_meta_data = pytplot.get_data('thc_peir_en_eflux', metadata=True)
    pytplot.store_data('thc_peir_flux_test',
                       data={'x': element_thc_peir_en_eflux[0],
                             'y': element_thc_peir_en_eflux[1],
                             'v': element_thc_peir_en_eflux[2][0]},
                       attr_dict=thc_peir_flux_test_meta_data)
    pytplot.options('thc_peir_flux_test', 'zlog', 1)
    pytplot.zlim('thc_peir_flux_test', 0.00005*1.e+6, 500.*1.e+6)
    pytplot.options('thc_peir_flux_test', 'ytitle', 'thc_peir_flux_test')
    pytplot.tplot('thc_peir_flux_test')  # plot

    pytplot.ylim('thc_peir_flux_test', 1000, 24000)
    pytplot.tplot('thc_peir_flux_test')  # plot with a different yrange setting

    opened by horit 5
  • How to specify data directory

I am loading OMNI data and wish to set local_data_dir to D:/data/omni. In pyspedas/omni/config.py, it seems that local_data_dir can be set using os.environ['OMNI_DATA_DIR']. However, the following code still downloads OMNI data into the current directory instead of D:/data/omni, because the change to os.environ is not picked up by config.py. The following code reproduces the problem. Thanks for your attention.

    import pyspedas
    import os

    os.environ['OMNI_DATA_DIR'] = "C:/data/omni/"
    omni_vars = pyspedas.omni.data(trange=['2013-11-5', '2013-11-6'])
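
    A likely workaround (an untested sketch, assuming the mission config modules read the environment when pyspedas is first imported) is to set the variable before the import:

    import os
    os.environ['OMNI_DATA_DIR'] = "D:/data/omni/"  # set before importing pyspedas

    import pyspedas
    omni_vars = pyspedas.omni.data(trange=['2013-11-5', '2013-11-6'])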

    opened by xnchu 5
  • CDF time conversion to unix time using astropy instead of cdflib

This improves the performance of CDF time to unix time conversion, originally performed using cdflib.cdfepoch.unixtime, which uses Python's native for loops and is far too slow. Simple testing finds that conversion via astropy is more than ten times faster. Although this introduces an additional module dependency, it will be useful from the users' perspective.

    Note: cdflib developers seem to be considering the integration of astropy's time module in cdflib. https://github.com/MAVENSDC/cdflib/issues/14
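
    A minimal sketch of the idea (assuming TT2000 epochs, i.e., nanoseconds since J2000 in Terrestrial Time; the epoch value is an arbitrary example):

    import numpy as np
    from astropy.time import Time, TimeDelta

    # Vectorized conversion of CDF TT2000 epochs to unix time,
    # avoiding a per-element Python loop
    tt2000 = np.array([631108869184000000], dtype=np.int64)  # arbitrary example
    j2000 = Time('2000-01-01 12:00:00', scale='tt')          # TT2000 reference epoch
    unix = (j2000 + TimeDelta(tt2000 * 1e-9, format='sec')).unix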

    opened by amanotk 5
  • failure in http digest authentication

Could you change lines 61 and 185 in utilities/download.py from session.auth = (username, password) to session.auth = requests.auth.HTTPDigestAuth(username, password)? This fixes a bug causing HTTP digest authentication to fail.
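
    In isolation, the suggested change looks like this (a sketch with placeholder credentials):

    import requests

    session = requests.Session()
    username, password = 'user', 'pass'  # placeholder credentials

    # Before: a plain tuple defaults to HTTP Basic authentication
    # session.auth = (username, password)

    # After: explicitly use HTTP Digest authentication
    session.auth = requests.auth.HTTPDigestAuth(username, password)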

    opened by horit 4
  • Problem using cdflib in cdf_to_tplot

The epoch is converted to unix time at line 195 of cdf_to_tplot.py. If cdflib is used, the conversion is done by the unixtime function at line 192 of epochs.py. The problem is line 222: unixtime.append(datetime.datetime(*date).timestamp()) assumes local time instead of UTC, so the times are offset by your local UTC offset. Code to reproduce the error is attached below; the result should be 2010-01-01/00:00:00.

    import pyspedas
    import numpy as np
    import pandas as pd

    trange = ['2010-01-01/00:00:00', '2010-01-02/00:00:00']
    varname = 'BX_GSE'
    data_omni = pyspedas.omni.data(trange=trange, notplot=True, varformat=varname, time_clip=True)
    data = np.array(data_omni[varname]['y'])
    unix_time = np.array(data_omni[varname]['x'])
    date_time = pd.to_datetime(data_omni[varname]['x'], unit='s')
    print(date_time[0])
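
    The underlying pitfall can be shown in isolation (a sketch: for a naive datetime, timestamp() assumes the local time zone, while attaching tzinfo gives the intended UTC interpretation):

    import datetime

    date = (2010, 1, 1, 0, 0, 0)

    # Naive datetime: timestamp() interprets it in the *local* time zone
    local_ts = datetime.datetime(*date).timestamp()

    # Timezone-aware datetime: interpreted as UTC, as CDF epochs intend
    utc_ts = datetime.datetime(*date, tzinfo=datetime.timezone.utc).timestamp()

    print(utc_ts - local_ts)  # equals your local UTC offset, in seconds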

    opened by xnchu 4
  • Could not import pyspedas in google colab

I was trying to use pyspedas in Google Colab; the kernel crashed with the warning: qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.

    opened by donglai96 4
  • pyspedas/pyspedas/mms/feeps/mms_feeps_remove_bad_data.py line 50

Are October 2017 and October 2018 the start times for the bad-eyes tables? If so, then you should not take the closest table, but rather the table corresponding to the time period.

    opened by PluckZK 3
  • Some updates for ERG

    Could you merge ERG-related scripts from the "erg" branch? The updates include:

    • Minor bug fixes for load routines for some particle data
    • Initial import of load routines for ground observation data
    • An experimental version of part_products routines for ERG satellite data

And could you discard the "ergsc-devel" branch? At first I tried to merge the above updates into that old branch, but failed due to an unknown error with analysis/time_domain_filter.py. That is why I instead created a new branch, "erg", from master and put all the updates in it. Any future updates for the ERG module will be delivered through this new branch.

    opened by horit 3
  • MMS FPI DIS moms omni spin avg doesn't seem to be averaged

The FPI DIS moments energy spectra do not seem to be spin-averaged when the omni product is loaded:

    dis_data = pyspedas.mms.fpi(trange=[date_start_str, date_end_str], datatype=['dis-moms'], center_measurement=True, data_rate='fast')
    t, y, z = get_data('mms1_dis_energyspectr_omni_fast')

    Differencing the time stamps gives a 4.5-second interval, which is the non-omni (non-spin-averaged) sampling interval.

    opened by kileyy10 1
  • conda install pyspedas [enhancement]

    It would be very useful to be able to install pyspedas using conda, since it would make using this framework much more convenient in conda environments.

    opened by johncoxon 5
  • In the case of unavailable data...

    Consider the following code:

    import pyspedas as spd
    import pytplot as ptt
    
    trange = ['2017-09-01 09:58:30', '2017-09-01 09:59:30']
    mms_fpi_varnames = ['mms3_dis_bulkv_gse_brst']
    _ = spd.mms.fpi(trange=trange, probe=3, data_rate='brst', level='l2',
                    datatype='dis-moms', varnames=mms_fpi_varnames, time_clip=True,
                    latest_version=True)
    fpi_time_unix = ptt.get_data(mms_fpi_varnames[0])[0]
    fpi_v_gse = ptt.get_data(mms_fpi_varnames[0])[1:][0]
    
    fpi_time_utc = spd.time_datetime(fpi_time_unix)
    
    print(f"Start time: {fpi_time_utc[0].strftime('%Y-%m-%d %H:%M:%S')}")
    print(f"End time: {fpi_time_utc[-1].strftime('%Y-%m-%d %H:%M:%S')}")
    

Given that trange = ['2017-09-01 09:58:30', '2017-09-01 09:59:30'], one would expect the code to print those dates. Or, in the case that 'brst' data is not available, the code should either throw an error or return NaNs.

    However, in this case the dates are: Start time: 2017-09-01 09:17:03, End time: 2017-09-01 09:18:02.

From what I can surmise, it appears that when burst data is not available for the specified time range, the function looks for the data closest to the specified time range and outputs those times and corresponding data. This was unexpected for me, especially because I specified the time_clip parameter.

    This made me wonder, if this is how the code is supposed to work, or if this is a bug in the code.

    opened by qudsiramiz 4
  • Please use logging instead of prints

    There was some conversation in the PyHC Element chat this morning about suppressing all the printouts pySPEDAS generates when you load data (Time clip was applied to..., Loading variables: ..., etc).

    The OP ended up using a hack like this to redirect stdout: https://gist.github.com/vikjam/755930297430091d8d8df70ac89ea9e2

    But it was brought up that if pySPEDAS used logging (https://docs.python.org/3/library/logging.html) instead of standard print(), it would allow messages to be printed by default but users would have some control over what is shown if they wanted. A few people immediately agreed this would be a good change.

    Please consider this a vote for this feature request?
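
    If pySPEDAS adopted logging, users could control verbosity along these lines (a hypothetical sketch; the logger name is an assumption):

    import logging

    # Raise the threshold to silence routine load-time messages
    logging.getLogger('pyspedas').setLevel(logging.WARNING)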

    opened by sapols 1
  • Can I download latest version of PSP data?

    I am trying to download PSP data, for example SPC data from the first encounter.

    On the SWEAP website: http://sweap.cfa.harvard.edu/data/sci/sweap/spc/L3/2018/11/

    I can see that there is a version 26 of the CDF file. However, when using:

            spcdata = pyspedas.psp.spc(trange=[t0, t1], datatype='l3i', level='l3', 
                                    varnames = [
                                        'np_moment',
                                        'wp_moment',
                                        'vp_moment_RTN',
                                        'vp_moment_SC',
                                        'sc_pos_HCI',
                                        'sc_vel_HCI',
                                        'carr_latitude',
                                        'carr_longitude'
                                    ], 
                                    time_clip=True)
    

I am getting the first version of the CDF file (V01). Is there functionality that would allow me to download the latest version that I am not aware of?

    Thanks!

    opened by nsioulas 5
Releases
  • 1.3 (Jan 26, 2022)

    • First version to include PyTplot with a matplotlib backend
    • Added geopack wrappers for T89, T96, T01, TS04
    • Large updates to the MMS plug-in, including new tools for calculating energy and angular spectrograms, as well as moments, from the FPI and HPCA plasma distribution data
    • Added the 0th (EXPERIMENTAL) version of the ERG plug-in from the Arase team in Japan
    • Added new tools for working with PyTplot variables, e.g., tkm2re, cross products, dot products, normalizing vectors
    • Added routines for wave polarization calculations
    • Added routines for field-aligned coordinate transformations
    • Added plug-in for Spherical Elementary Currents (SECS) and Equivalent Ionospheric Currents (EICS) from Xin Cao and Xiangning Chu at the University of Colorado Boulder
    • Added initial load routine for Heliophysics Application Programmer's Interface (HAPI) data
    • Added initial load routine for Kyoto Dst data
    • Added initial load routine for THEMIS All Sky Imager data
    • Added THEMIS FIT L1 calibration routine
    • Large updates to the documentation at: https://pyspedas.readthedocs.io/
    • Numerous other bug fixes and updates
  • v1.2 (Mar 25, 2021)

    Significant v1.2 updates:

    • Dropped support for Python 3.6; we now support only Python 3.7 and later
    • Added support for Python 3.9
    • Implemented performance enhancements for coordinate transformation routines
    • Made numerous updates and bug fixes to the MMS plug-in
    • Added initial support for Solar Orbiter data
  • 1.1 (Dec 7, 2020)

  • 1.0 (Jun 16, 2020)

Owner: SPEDAS (Space Physics Environment Data Analysis Software)