EOD Historical Data Python Library (Unofficial)

Overview

An unofficial Python library for the EOD Historical Data API: https://eodhistoricaldata.com

Installation

python3 -m pip install eodhistoricaldata

Note

The demo API key below is provided by EOD Historical Data for testing purposes: https://eodhistoricaldata.com/financial-apis/new-real-time-data-api-websockets

Usage

None: """Main""" websocket = WebSocketClient( # Demo API key for testing purposes api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="crypto", symbols=["BTC-USD"] #api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="forex", symbols=["EURUSD"] #api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="us", symbols=["AAPL"] ) websocket.start() message_count = 0 while True: if websocket: if ( message_count != websocket.message_count ): print(websocket.message) message_count = websocket.message_count sleep(0.25) # output every 1/4 second, websocket is realtime if __name__ == "__main__": main() ">
"""Sample script"""

from time import sleep
from eodhistoricaldata import WebSocketClient

def main() -> None:
    """Main"""

    websocket = WebSocketClient(
        # Demo API key for testing purposes
        api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="crypto", symbols=["BTC-USD"]
        #api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="forex", symbols=["EURUSD"]
        #api_key="OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX", endpoint="us", symbols=["AAPL"]
    )
    websocket.start()

    message_count = 0
    while True:
        if websocket:
            if message_count != websocket.message_count:
                print(websocket.message)
                message_count = websocket.message_count
        sleep(0.25)  # poll every 1/4 second; the websocket delivers messages in real time

if __name__ == "__main__":
    main()

Comments
  • Syntax issue with query parameter in get_calendar_ functions

    Hello,

    When using the get_calendar_XXX functions, we cannot use the query parameters defined by EOD, because the word "from" is a reserved keyword in Python. For instance:

    earning = client.get_calendar_earnings(from='2022-11-01', to='2022-11-30')

    will raise a SyntaxError.

    Should I pass the argument differently?

    opened by ATCBGroup 1
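
    A possible workaround (a sketch, not taken from the library's documentation): because "from" is a reserved word in Python, the query parameters can be passed by unpacking a dict, assuming the get_calendar_ methods forward arbitrary keyword arguments to the API query string:

    # client is the API client instance from the snippet above.
    # Hypothetical workaround: pass reserved-word query parameters via dict unpacking;
    # this assumes the method accepts **kwargs and forwards them to the HTTP query.
    earnings = client.get_calendar_earnings(**{"from": "2022-11-01", "to": "2022-11-30"})

    If the method instead declares explicit keyword parameters, the unpacking call will raise a TypeError, and the method's signature should be checked for the expected argument names.
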
  • Dependency on matplotlib, but it is not installed with pip

    The package imports matplotlib, yet pip does not install it as a dependency:

    [email protected]:~/git/traderai/eod$ cat test.py
    from eodhd import APIClient
    api = APIClient("DEMO")
    
    [email protected]:~/git/traderai/eod$ python3 test.py
    Traceback (most recent call last):
      File "/home/mshamber/.local/lib/python3.8/site-packages/eodhd/eodhdgraphs.py", line 5, in <module>
        import matplotlib.pyplot as plt
    ModuleNotFoundError: No module named 'matplotlib'
    
    [email protected]:~/git/traderai/eod$ python3 -m pip install eodhd
    Requirement already satisfied: eodhd in /home/mshamber/.local/lib/python3.8/site-packages (1.0.8)
    Requirement already satisfied: websocket-client==1.3.3 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (1.3.3)
    Requirement already satisfied: rich==12.5.1 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (12.5.1)
    Requirement already satisfied: websockets==10.3 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (10.3)
    Requirement already satisfied: numpy==1.21.6 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (1.21.6)
    Requirement already satisfied: pandas==1.3.5 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (1.3.5)
    Requirement already satisfied: requests==2.28.1 in /home/mshamber/.local/lib/python3.8/site-packages (from eodhd) (2.28.1)
    Requirement already satisfied: commonmark<0.10.0,>=0.9.0 in /home/mshamber/.local/lib/python3.8/site-packages (from rich==12.5.1->eodhd) (0.9.1)
    Requirement already satisfied: typing-extensions<5.0,>=4.0.0; python_version < "3.9" in /home/mshamber/.local/lib/python3.8/site-packages (from rich==12.5.1->eodhd) (4.3.0)
    Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /home/mshamber/.local/lib/python3.8/site-packages (from rich==12.5.1->eodhd) (2.13.0)
    Requirement already satisfied: python-dateutil>=2.7.3 in /home/mshamber/.local/lib/python3.8/site-packages (from pandas==1.3.5->eodhd) (2.8.2)
    Requirement already satisfied: pytz>=2017.3 in /home/mshamber/.local/lib/python3.8/site-packages (from pandas==1.3.5->eodhd) (2022.5)
    Requirement already satisfied: charset-normalizer<3,>=2 in /home/mshamber/.local/lib/python3.8/site-packages (from requests==2.28.1->eodhd) (2.1.1)
    Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests==2.28.1->eodhd) (2.8)
    Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests==2.28.1->eodhd) (2019.11.28)
    Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3/dist-packages (from requests==2.28.1->eodhd) (1.25.8)
    Requirement already satisfied: six>=1.5 in /home/mshamber/.local/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas==1.3.5->eodhd) (1.16.0)
    
    opened by opme 1
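
    A quick interim workaround (a suggestion, not an official fix) is to install the missing module manually alongside the package:

    python3 -m pip install matplotlib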
Releases

1.0.8

Owner

Michael Whittle (Solution Architect)