I³ Tracker for Essential Open Innovation Datasets

Overview

This repository is set up to track, version, and contribute updates to the I³ Essential Open Innovation Dataset Index, a set of lists of datasets and tools relevant to innovation data. The index can be edited collaboratively, either by editing the markdown files contained in this repository or by editing the metadata in the Google Sheet.

The repository checks the Google Sheet for changes every 5 minutes (and updates the site if there are any), and also rebuilds the site automatically whenever somebody makes an edit via git. The site is generated from the markdown files in this repository using the static site generator Jekyll.
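For reference, this kind of scheduled check is implemented with a GitHub Actions cron trigger. The sketch below is illustrative only -- the workflow name, file path, and steps are assumptions rather than the repository's actual configuration:

```yaml
# .github/workflows/sync-sheet.yml -- hypothetical name and contents
name: Sync Google Sheet

on:
  schedule:
    - cron: "*/5 * * * *"   # check the sheet every 5 minutes
  workflow_dispatch:         # also allow manual runs

jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # In the real workflow: pull the sheet to CSV, commit any changes,
      # and let the site rebuild pick them up.
      - run: echo "fetch sheet -> diff -> commit -> rebuild site"
```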

Add/edit a Dataset using Git

Each record in the index has a corresponding auto-generated markdown file in the folder datasets/. These files contain the basic metadata associated with the record in their frontmatter, and also allow longer-form information, such as details of queries, images, and other written material, to be added in the body. Both of these are editable.

When a markdown file is added to the datasets/ folder, a GitHub Action publishes the metadata in its frontmatter to the Google Sheet and to the archive CSV, keeping the records up to date. This script calls various metadata scrapers to automatically pull information such as permalinks, citation information, and versioning. Once the file has been successfully committed, a second action runs to refresh the website so it reflects the edits.
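The publish step is, in the same spirit, a push-triggered workflow scoped to the datasets/ folder. Again, a rough sketch under assumed names and paths, not the actual workflow:

```yaml
# .github/workflows/publish-frontmatter.yml -- hypothetical name and contents
name: Publish dataset metadata

on:
  push:
    paths:
      - "datasets/**.md"   # only run when dataset files change

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # In the real workflow: parse the frontmatter, run the metadata
      # scrapers, then write the result to the Google Sheet and archive CSV.
      - run: echo "extract frontmatter -> scrape metadata -> update sheet/CSV"
```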

Contribution Steps

  1. Fork the repository and create a markdown file in the datasets/ folder, based on the template file.
  2. Add as much metadata as you like, and open a pull request against this repository.
  3. All being well, the pull request should merge automatically. If it doesn't, check the GitHub Actions log or open an issue (but first make sure the file is in the correct folder and has a .md extension).

If the dataset is hosted on a platform with parseable citation metadata (Dataverse, Zenodo, ICPSR, and major university repositories are examples), the tool will automatically pull most of the data associated with the dataset -- fields that auto-fill are indicated by a comment. If the dataset is hosted on, say, a personal site, you might want to include some more information -- but ultimately, only a title and URL are really necessary. However you fill out your dataset, a UUID and timestamp will be generated for it automatically; these aren't fields you need to include (hence their absence from the template).
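For example, a minimal submission for a dataset hosted on a scrapeable platform could be as short as this (the title and record URL below are invented for illustration; the scrapeable fields are simply left out):

```markdown
---
title: Example Patent Citations Dataset   # invented example
url: https://zenodo.org/record/0000000    # hypothetical record URL
---

Anything you already know about coverage, quirks, or useful queries goes here.
```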

The reason we've done this is to save you from copy-pasting a lot of information from existing repositories, and to make it easier for you to curate the more useful and harder-to-scrape metadata -- such as the timeframes of datasets, links to code and documentation, and datasets built on top of the one you're adding that don't use an easy-to-parse citation. So definitely prioritise these fields!

If you're unsure of how to make a pull request, GitHub has some good guides to doing this. You can also just make an edit to the Google Sheet, which will have an equivalent effect.

If there's a piece of metadata you think we should collect but don't, please add it to the frontmatter of the markdown files you contribute. (Nothing will break!) Then open an issue mentioning the new field, so that we can discuss adding it to the repository officially too.

To contribute a new dataset via pull request, please use the template file datasets/_template.md as a reference:

```markdown
---
title: #required
url: #required
doi: #scrapeable
citation: #scrapeable
description: #scrapeable
timeframe:
documentation:
error_metrics:
code:
versioning: #scrapeable
terms_of_use: #scrapeable
tags:
references:
---

body text. info about `queries`, links and images goes here :)
```
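By way of contrast with the minimal example above, a more fully curated file might look like this -- every value below is invented for illustration, with the hand-curated fields (timeframe, documentation, code) filled in and the scrapeable ones left for the action to populate:

```markdown
---
title: Example Patent Citations Dataset        # invented example
url: https://zenodo.org/record/0000000         # hypothetical record URL
timeframe: 1976-2020                           # hand-curated
documentation: https://example.org/docs        # hand-curated
code: https://github.com/example/cleaning-code # hand-curated
tags: [patents, citations]
---

Notes on known quirks, example `queries`, and links to related work go here.
```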

Collections

The site also indexes collections, which are pages containing thematic information about datasets, tools and resources. These are housed in the folder collections/. The collection intro.md is an example -- this particular collection is also rendered on the front page of the site.

As with datasets, collection files can be added or edited via pull request: fork the repository, make your additions or edits to the collections, and open a pull request. Collections are not currently tracked via the Google Sheet, and so may only be edited via git.

To create a new collection, copy the collection template to use as a reference:

```markdown
---
title:
author:
tags:
---
```

Collections are a way to list resources around a theme, relevant to a research agenda or set of papers, or as an introduction to various aspects of the field. They are formatted in markdown:

To list a dataset that's in the index, use a relative link, e.g.

```markdown
[local dataset name](/datasets/dataset_shortname)
```

Dataset shortnames can be found either by looking at the URLs directly, or via the 'shortnames' column of the Google Sheet.
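Putting the pieces together, a collection file might look something like the sketch below; the title, author, and dataset shortnames are all invented for illustration:

```markdown
---
title: Patent data starter kit      # invented example
author: A. Contributor              # invented example
tags: [patents, getting-started]
---

A short introduction to the theme of the collection goes here.

- [USPTO bulk data](/datasets/uspto_bulk)         <!-- hypothetical shortname -->
- [Patent citation links](/datasets/patent_cites) <!-- hypothetical shortname -->
```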

Index

A versioned .csv file containing the index can be accessed in the folder index_archive. If you'd like to browse and query either sheet, you can do so using GitHub's Flat Data tool here. The GitHub Action that pulls the sheet is based on DoltHub's Gsheets-to-csv action.
