BioThings API framework - Making high-performance APIs for biological annotation data

Overview


BioThings SDK

Quick Summary

BioThings SDK provides a Python-based toolkit to build high-performance data APIs (or web services) from a single data source or multiple data sources. It has a particular focus on building data APIs for biomedical entities, a.k.a. "BioThings" (such as genes, genetic variants, drugs, chemicals, diseases, etc.).

Documentation about BioThings SDK can be found at https://docs.biothings.io

Introduction

What's BioThings?

We use "BioThings" to refer to objects of any biomedical entity-type represented in the biological knowledge space, such as genes, genetic variants, drugs, chemicals, diseases, etc.

BioThings SDK

SDK stands for "Software Development Kit". BioThings SDK provides a Python-based toolkit to build high-performance data APIs (or web services) from a single data source or multiple data sources. It has a particular focus on building data APIs for biomedical entities, a.k.a. "BioThings", though it's not necessarily limited to the biomedical scope. For any given "BioThings" type, BioThings SDK helps developers aggregate annotations from multiple data sources and expose them as a clean, high-performance web API.

The BioThings SDK can be roughly divided into two main components: the data hub (or just "hub") component and the web component. The hub component allows developers to automate the process of monitoring, parsing, and uploading data sources to an Elasticsearch backend. From there, the web component, built on the high-concurrency Tornado web server, allows you to easily set up a live, high-performance API. The API endpoints expose simple-to-use yet powerful query features using Elasticsearch's full-text query capabilities and query language.

BioThings API

We also use "BioThings API" (or BioThings APIs) to refer to an API (or a collection of APIs) built with BioThings SDK. For example, both our popular MyGene.Info and MyVariant.Info APIs are built and maintained using this BioThings SDK.
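
For illustration, such an API can be queried over plain HTTP; the sketch below runs against the public MyGene.info query endpoint, and the query term and requested fields are only examples.

import requests

# example full-text query against MyGene.info (an API built with the BioThings SDK);
# the query term, species filter, and requested fields are illustrative only
resp = requests.get(
    "https://mygene.info/v3/query",
    params={"q": "symbol:CDK2", "species": "human", "fields": "symbol,name,taxid"},
)
resp.raise_for_status()
for hit in resp.json()["hits"]:
    print(hit["_id"], hit.get("symbol"), hit.get("name"))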

BioThings Studio

BioThings Studio is a built-in, pre-configured environment used to build and administer a BioThings API. At its core is the Hub, a backend service responsible for keeping data up-to-date, producing data releases, and updating API frontends.

Installing BioThings SDK

You can install the latest stable BioThings SDK release with pip from PyPI:

pip install biothings

You can install the latest development version of BioThings SDK directly from our GitHub repository:

pip install git+https://github.com/biothings/biothings.api.git#egg=biothings

Alternatively, you can download the source code, or clone the BioThings SDK repository and run:

python setup.py install

Get started to build a BioThings API

We recommend following this tutorial to develop your first BioThings API in our pre-configured BioThings Studio development environment.

Documentation

The latest documentation is available at https://docs.biothings.io.

How to contribute

Please check out the Contribution Guidelines and Code of Conduct documents.

Comments
  • Unable to start either [demo_myvariant.docker, old_myvariant.docker]


    Following the instructions from http://docs.biothings.io/en/latest/doc/standalone.html#quick-links, both images exhibit the same behavior:

    • hub not starting
      • curl http://localhost:19200/_cat/indices (returns nothing)
    • inability to ssh from host to container
    • inability to start hub cli

    Host information:

    $ lsb_release -a
    No LSB modules are available.
    Distributor ID:	Ubuntu
    Description:	Ubuntu 18.04 LTS
    Release:	18.04
    Codename:	bionic
    
    $ docker --version
    Docker version 17.12.1-ce, build 7390fc6
    
    $ docker info
    Containers: 0
     Running: 0
     Paused: 0
     Stopped: 0
    Images: 84
    Server Version: 17.12.1-ce
    Storage Driver: overlay2
     Backing Filesystem: extfs
     Supports d_type: true
     Native Overlay Diff: true
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
     Volume: local
     Network: bridge host macvlan null overlay
     Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
    Swarm: inactive
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 9b55aab90508bd389d7654c4baf173a981477d55
    runc version: 9f9c96235cc97674e935002fc3d78361b696a69e
    init version: v0.13.0 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
    Security Options:
     apparmor
     seccomp
      Profile: default
    Kernel Version: 4.15.0-22-generic
    Operating System: Ubuntu 18.04 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 24
    Total Memory: 118GiB
    Name: bmeg-build
    ID: JRG3:XIRP:VOMU:Z5IM:CBNN:M2TT:QP6J:TE4G:B6V4:C5KI:RZ6T:S7ZX
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    Labels:
    Experimental: false
    Insecure Registries:
     127.0.0.0/8
    Live Restore Enabled: false
    
    
    
    # docker run --name old_myvariant -p 19080:80 -p 19200:9200 -p 19022:7022 -p 19090:7080 -d old_myvariant
    2171a1bd50736f4074c9c3102282ae4b92a8002335347217d65a5e8681b49c3f
    [email protected]:/mnt/walsbr#  curl -v http://localhost:19080/metadata
    *   Trying 127.0.0.1...
    * TCP_NODELAY set
    * Connected to localhost (127.0.0.1) port 19080 (#0)
    > GET /metadata HTTP/1.1
    > Host: localhost:19080
    > User-Agent: curl/7.58.0
    > Accept: */*
    >
    < HTTP/1.1 500 Internal Server Error
    < Date: Fri, 13 Jul 2018 20:11:23 GMT
    < Content-Type: text/html; charset=UTF-8
    < Content-Length: 93
    < Connection: keep-alive
    < Server: TornadoServer/4.5.1
    <
    * Connection #0 to host localhost left intact
    <html><title>500: Internal Server Error</title><body>500: Internal Server Error</body></html>[email protected]:/mnt/walsbr#
    
    
    [email protected]:/mnt/walsbr#  curl http://localhost:19200/_cat/indices
    
    [email protected]:/mnt/walsbr#  ssh [email protected] -p 19022
    ssh_exchange_identification: read: Connection reset by peer
    
    
    [email protected]:/mnt/walsbr# docker exec -it old_myvariant bash
    
    
    Traceback (most recent call last):
      File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
        "__main__", mod_spec)
      File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/home/biothings/myvariant.info/src/biothings/bin/autohub.py", line 16, in <module>
        biothings.config_for_app(config)
      File "/home/biothings/myvariant.info/src/biothings/__init__.py", line 55, in config_for_app
        check_config(config_mod)
      File "/home/biothings/myvariant.info/src/biothings/__init__.py", line 32, in check_config
        raise ConfigurationError("%s: %s" % (attr,str(getattr(config_mod,attr))))
    biothings.ConfigurationError: DATA_PLUGIN_FOLDER: Define path to folder which will contain all 3rd party parsers, dumpers, etc...
    (pyenv) [email protected]:~/myvariant.info/src$
    
    
    bug 
    opened by bwalsh 16
  • Fetch >1000 documents with a POST query


    Originally from @colleenXu:

    To find associations between things, we are mostly doing POST queries to BioThings APIs.

    For POST queries, we can retrieve <=1000 records per input (think of a batch-query of input IDs like below). This allows a batch-query to include up to 1000 inputs.

    POST to https://mydisease.info/v1/query?fields=disgenet.xrefs,_id&size=1000 with the body: { "q": "7157,7180,7190", "scopes": "disgenet.genes_related_to_disease.gene_id" }
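
    For reference, the same batch query can be issued from Python; this is just a sketch using the requests library, with the URL, fields, and scopes taken from the example above.

    import requests

    # batch POST query: three gene IDs scoped to disgenet.genes_related_to_disease.gene_id,
    # asking for up to 1000 records per input ID
    resp = requests.post(
        "https://mydisease.info/v1/query",
        params={"fields": "disgenet.xrefs,_id", "size": 1000},
        json={"q": "7157,7180,7190", "scopes": "disgenet.genes_related_to_disease.gene_id"},
    )
    resp.raise_for_status()
    for hit in resp.json():
        # each hit echoes the input ID in "query"; unmatched inputs carry "notfound"
        print(hit["query"], hit.get("_id"))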

    My understanding is that there's only 1 way to change this situation:

    1. CANNOT DO fetch_all, since that only works for GET queries (and just using GET queries isn't a viable solution because not being able to batch-query can slow down multi-hop BTE queries quite a bit).
    2. CAN DO: The only way to get >1000 records per input is to adjust the BioThings API settings - which would likely involve raising the per-input record limit while lowering the batch-query limit (ex: 10000 records per input and 100 IDs per batch). This can perhaps be done on a per-API basis (like specific pending APIs?)

    Note that this has been a discussion topic for a while. For now, we've been okay with keeping things at <= 1000 records per input, knowing that we are not getting the complete response, because it is difficult to handle a node attached to lots of other entities...

    However, this is known to be more of an issue for APIs that keep many separate records for the same basic association X-related_to-Y. This happens with semmeddb (at least 1 record per publication-association) and some multiomics APIs. These are all on the pending API hub.

    enhancement 
    opened by erikyao 12
  • Automated basic biothings web functionality test for data applications


    Now that we have automated tests running after each app (mygene, myvariant, ...) build under development, explore the possibility of running an automated, basic BioThings web functionality test to ensure that, in addition to the customizations working, the basic features are also not affected.

    opened by namespacestd0 9
  • FTPDumper does not clean up old downloaded files when ARCHIVE is set to False


    When ARCHIVE is set to False, the downloaded files are saved to the same folder (Data_Archive_root/latest). It looks like FTPDumper does not clean up the old downloads.

    Here is an example from mychem hub: PubChemDumper.

    bug 
    opened by newgene 8
  • remove remaining boto dependency (using boto3 instead)


    We still have a few remaining places using boto to access AWS S3 buckets:

    https://github.com/biothings/biothings.api/blob/621887f04aae13c3a775aea9aa7daacb92ae7ef0/biothings/utils/aws.py#L6

    and

    https://github.com/biothings/biothings.api/blob/621887f04aae13c3a775aea9aa7daacb92ae7ef0/biothings/hub/dataexport/ids.py#L4

    Most of the other AWS-related code has been migrated to use boto3. Let's remove the boto dependency completely.
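
    For reference, the equivalent S3 operations with boto3 look roughly like the sketch below (bucket names, keys, and file paths are placeholders, not the actual ones used in the code above):

    import boto3

    s3 = boto3.client("s3")
    # upload and download a local file to/from an S3 bucket
    s3.upload_file("ids.xz", "my-bucket", "exports/ids.xz")
    s3.download_file("my-bucket", "exports/ids.xz", "/tmp/ids.xz")
    # read object metadata without downloading the body
    head = s3.head_object(Bucket="my-bucket", Key="exports/ids.xz")
    print(head["ContentLength"])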

    enhancement 
    opened by newgene 8
  • display optional "description" in API metadata


    If a user wants to know what a given API is for (e.g., "repodb"), the best options now seem to be to search the API name and/or look at example records. We should also allow an optional "description" to be provided in the manifest.json metadata, which would optionally provide some human-readable description directly on the API page.

    EDIT: Sorry, just realized I should probably have created this issue in https://github.com/biothings/pending.api/issues. Feel free to recreate it over there if helpful.

    enhancement 
    opened by andrewsu 7
  • Replace boto calls to use boto3


    This partially addresses Issue #133

    All usages except for one have been replaced and tested.

    The explanation for the one remaining usage will be documented in comments under Issue #133

    opened by zcqian 7
  • `SnapshotTaskEnv` cannot create `ESIndexer` instances


    When I create a new snapshot, the corresponding SnapshotTaskEnv instance cannot be initialized due to a failure in creating the ESIndexer instance.

    Error messages are like:

    Aug 16 17:37:09 su06 python[57443]: HTTPServerRequest(protocol='http', host='localhost:19080', method='PUT', uri='/snapshot', version='HTTP/1.1', remote_ip='172.29.80.35')
    Aug 16 17:37:09 su06 python[57443]: Traceback (most recent call last):
    Aug 16 17:37:09 su06 python[57443]:   File "/opt/home/pending/venv/lib/python3.6/site-packages/tornado/web.py", line 1704, in _execute
    Aug 16 17:37:09 su06 python[57443]:     result = await result
    Aug 16 17:37:09 su06 python[57443]:   File "<string>", line 69, in put
    Aug 16 17:37:09 su06 python[57443]:   File "/opt/home/pending/venv/src/biothings/biothings/hub/dataindex/snapshooter.py", line 521, in snapshot
    Aug 16 17:37:09 su06 python[57443]:     return env_for_build.snapshot(index, snapshot=snapshot, steps=steps)
    Aug 16 17:37:09 su06 python[57443]:   File "/opt/home/pending/venv/src/biothings/biothings/hub/dataindex/snapshooter.py", line 314, in snapshot
    Aug 16 17:37:09 su06 python[57443]:     task_env = SnapshotTaskEnv(self, index, snapshot)
    Aug 16 17:37:09 su06 python[57443]:   File "/opt/home/pending/venv/src/biothings/biothings/hub/dataindex/snapshooter.py", line 244, in __init__
    Aug 16 17:37:09 su06 python[57443]:     doc_type=env.build_doc['index'][index]['doc_type'],
    Aug 16 17:37:09 su06 python[57443]: KeyError: 'doc_type'
    Aug 16 17:37:09 su06 python[57443]: ERROR:tornado.access:500 PUT /snapshot (172.29.80.35) 37.72ms
    

    The root causes are:

    1. src_build does not hold the doc_type values anymore
    2. The ESIndexer class definition is out-of-date with ES7

    A sample env.build_doc['index'][index] entry is:

            "index" : {
    		"idisk_20210812_sicwkhq0" : {
    			"host" : "su03:9200",
    			"environment" : "su03",
    			"created_at" : ISODate("2021-08-16T23:17:07.550Z"),
    			"count" : 919
    		}
    	},
    
    opened by erikyao 6
  • `GitDumper` should also checkout the `main` branches


    When adding a new plugin to pending.biothings.io, the URL to the GitHub repo will be passed to http://localhost:19080/dataplugin/register_url,


    which in the end calls the AssistantManager.register_url() method (assistant.py#L699).

    The AssistantManager instance appears to add a message (including the URL to register) to its corresponding MongoDB collection, and finally a GitDumper instance receives the URL and checks the repo out. By default, GitDumper will only check out the master branch, but GitHub recently changed its default branch name from master to main. Therefore our GitDumper cannot check out the latest GitHub repo-based plugins.

    The root cause in the code seems to be dumper.py#L1072:

    DEFAULT_BRANCH = "master"
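
    One possible direction (a hypothetical sketch, not the actual GitDumper fix) is to ask the remote for its default branch instead of hard-coding it:

    import subprocess

    def detect_default_branch(repo_url, fallback="master"):
        # "git ls-remote --symref <url> HEAD" prints e.g. "ref: refs/heads/main\tHEAD"
        out = subprocess.check_output(
            ["git", "ls-remote", "--symref", repo_url, "HEAD"], text=True
        )
        for line in out.splitlines():
            if line.startswith("ref:"):
                return line.split()[1].rsplit("/", 1)[-1]
        return fallback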
    
    bug enhancement 
    opened by erikyao 6
  • Implement full release installation without downtime


    Currently from the biothings hub and studio, we can install incremental releases directly with no downtime (applying diffs on the production index). The relevant code is here:

    https://github.com/biothings/biothings.api/blob/1b96f0aded05873d642134c0c38b15fa982e3b6d/biothings/hub/standalone/init.py#L68

    In the case of deploying a full release, we currently have two options:

    1. delete the old index and then install the new index (restoring snapshots). This causes downtime, but it's brief for small data indices.

    2. perform a manual index restoration to a different index name and then switch the alias when it's done. This has no downtime and should be preferred.

    We should implement the manual steps from option #2 as a feature in the BioThings hub/studio.
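
    A rough sketch of option #2, assuming the elasticsearch-py 8.x client; the repository, snapshot, index, and alias names below are placeholders:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # restore the snapshot into a new index name so the live index keeps serving traffic
    es.snapshot.restore(
        repository="myrepo",
        snapshot="full_release_20221001",
        indices="mygene_20221001",
        rename_pattern="(.+)",
        rename_replacement="$1_restored",
        wait_for_completion=True,
    )

    # once restored, atomically move the serving alias to the new index
    es.indices.update_aliases(actions=[
        {"remove": {"index": "mygene_20220901", "alias": "mygene_current"}},
        {"add": {"index": "mygene_20221001_restored", "alias": "mygene_current"}},
    ])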

    enhancement 
    opened by newgene 6
  • Can manifest-based data plugins and regular data sources stay in the same folder?


    Right now, manifest-based data plugins stay in a separate folder like "plugins", while regular dumper/uploader-based data sources stay in the hub/dataload/sources folder.

    Can we allow them in the same folder? It may just work already, let's verify and try to make it work if not.

    • plugins folder example: https://github.com/biothings/mydisease.info/tree/master/src/plugins

    • regular data sources example: https://github.com/newgene/biothings_docker/tree/main/tests/hubapi/demohub/biothing_studio/hub/dataload/sources

    enhancement 
    opened by newgene 5
  • Create an Elasticsearch reindex helper function


    This should be a standalone helper function (e.g., it can live under utils/es_reindex.py) used only from the Python/IPython console manually when needed. It helps reindex an existing index by transferring the settings, mappings, and docs.

    Elasticsearch's reindex API should be used; however, the mappings and settings from the old index should be used to create a new, empty target index first. Then the reindex API can be called to transfer all docs to the new index. Optionally, the alias should be switched over to the new index too. This is useful when we need to migrate existing indices created by an older ES version to the current ES version.

    def reindex(src_index, target_index=None, settings=None, mappings=None, alias=None, delete_src=False):
    
    
    • target_index: use <src_index_name>_reindexed as the default if None
    • settings: if provided as a dict, update the settings with the provided dictionary; otherwise, keep the same settings as src_index
    • mappings: if provided as a dict, update the mappings with the provided dictionary; otherwise, keep the same mappings as src_index
    • alias: if True, switch the alias from src_index to target_index (if src_index has no alias, apply <src_index_name> as the alias); if a string value, apply it as the alias instead
    • delete_src: if True, delete the src_index after everything is done

    And after reindex, please also do a refresh & flush and then double-check the doc counts to make sure they are equal.
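
    A minimal sketch of what such a helper could look like, assuming the elasticsearch-py 8.x client; this is not an existing implementation and skips some edge cases (e.g., applying the source index name as the alias requires deleting the source index first):

    from elasticsearch import Elasticsearch

    def reindex(client, src_index, target_index=None, settings=None,
                mappings=None, alias=None, delete_src=False):
        target_index = target_index or f"{src_index}_reindexed"

        # start from the source index's settings/mappings, then apply any overrides
        src_settings = client.indices.get_settings(index=src_index)[src_index]["settings"]["index"]
        for key in ("uuid", "provided_name", "creation_date", "version"):
            src_settings.pop(key, None)  # drop read-only internal settings
        src_mappings = client.indices.get_mapping(index=src_index)[src_index]["mappings"]
        src_settings.update(settings or {})
        src_mappings.update(mappings or {})

        # create the empty target index first, then let ES copy all docs over
        client.indices.create(index=target_index, settings=src_settings, mappings=src_mappings)
        client.reindex(source={"index": src_index}, dest={"index": target_index},
                       wait_for_completion=True)

        # refresh & flush, then double-check the doc counts match
        client.indices.refresh(index=target_index)
        client.indices.flush(index=target_index)
        assert client.count(index=src_index)["count"] == client.count(index=target_index)["count"]

        if alias:
            # simplified: use the given string, or the source index name, as the alias
            alias_name = alias if isinstance(alias, str) else src_index
            actions = [{"add": {"index": target_index, "alias": alias_name}}]
            if client.indices.exists_alias(name=alias_name, index=src_index):
                actions.insert(0, {"remove": {"index": src_index, "alias": alias_name}})
            client.indices.update_aliases(actions=actions)

        if delete_src:
            client.indices.delete(index=src_index)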

    enhancement 
    opened by newgene 0
  • Fix Aggregation Web formatter


    Histogram aggregations do not contain the fields aggregations.<term>.doc_count_error_upper_bound, aggregations.<term>.sum_other_doc_count

    Our ES formatter assumes that these two values exist: https://github.com/biothings/biothings.api/blob/master/biothings/web/query/formatter.py#L426-L427

    When making a custom aggregation in the esquerybuilder, I have to override these two values.

    res[facet]['other'] = res[facet].pop('sum_other_doc_count', 0)
    res[facet]['missing'] = res[facet].pop('doc_count_error_upper_bound', 0)
    

    We should have this as a quick fix so that when other users make custom aggregations, they won't have to override the transform_aggs method.

    opened by jal347 0
  • DockerContainerDumper class


    This can be a new type of Dumper class, which triggers a docker container (typically running on a different server) to run and generate the output file, and then stops the container. The dumper class will then get the processed file(s) and send them to the Uploader as usual.

    Typically, this processed file can be an NDJSON file (one JSON object per line), so the uploader class can be quite simple and generic.

    The typical use case is some complex workflow with heavy dependencies, so we can isolate them in a docker container.
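
    A hypothetical sketch of what such a dumper step could do, using the docker SDK for Python; the image name, command, and paths are placeholders, not an existing BioThings class:

    import docker

    def dump_via_container(image, command, output_dir):
        client = docker.from_env()  # could also be configured to point at a remote docker host
        container = client.containers.run(
            image, command,
            volumes={output_dir: {"bind": "/data", "mode": "rw"}},
            detach=True,
        )
        try:
            result = container.wait()  # block until the containerized workflow finishes
            if result["StatusCode"] != 0:
                raise RuntimeError(container.logs().decode())
        finally:
            container.remove(force=True)  # stop and clean up the container
        # the NDJSON output under output_dir can now be fetched and handed to the uploader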

    enhancement 
    opened by newgene 0
  • create biothings.hub API document


    Our current documentation site at https://docs.biothings.io/ does not contain the API documentation for the biothings.hub module. This was due to some errors in the past; let's re-evaluate to see if we can generate it automatically now.

    enhancement 
    opened by newgene 0
  • Evaluate and upgrade Elasticsearch v8.x client


    For both the elasticsearch-py and elasticsearch-dsl packages, ES8 support is complete. We should test and upgrade.

    All of our hubs are now using ES8, but we should target support for both ES7 and ES8 if possible.

    enhancement 
    opened by newgene 0
Releases (v0.11.1)
  • v0.11.1 (Oct 4, 2022)

    This is a bug-fix release with these CHANGES (see also CHANGES.txt):

    v0.11.1 (2022/10/03)

    • Hub improvements:
      • use pickle protocol 4 as the pickle.dump default
    • Hub bug fixes:
      • Fixed a JSON serialization error during incremental release https://github.com/newgene/biothings.api/pull/65
      • Resolved a hub error when installing a full release https://github.com/biothings/biothings.api/issues/257
      • Fixed a quick_index error when a data source has multiple uploaders https://github.com/newgene/biothings.api/pull/66
  • v0.11.0 (Sep 14, 2022)

Owner: BioThings - High Performance Data APIs in Biology