Google-drive-to-sqlite - Create a SQLite database containing metadata from Google Drive

Overview

google-drive-to-sqlite



If you use Google Drive, and especially if you have shared drives with other people, there's a good chance you have hundreds or even thousands of files there that you may not be fully aware of.

This tool can download metadata about those files - their names, sizes, folders, content types, permissions, creation dates and more - and store it in a SQLite database.

This lets you use SQL to analyze your Google Drive contents, using Datasette, the SQLite command-line tool, or any other SQLite database browsing software.
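As a sketch of the kind of analysis this enables, here is a query against a tiny in-memory stand-in built with Python's sqlite3 module - the table and column names here are illustrative, so check the schema of your own database before adapting it:

```python
import sqlite3

# In a real session you would connect to the files.db created by this tool;
# here a small in-memory stand-in (with guessed column names) demonstrates the idea.
conn = sqlite3.connect(":memory:")
conn.execute("create table files (id text primary key, name text, size integer)")
conn.executemany(
    "insert into files values (?, ?, ?)",
    [("a", "report.pdf", 120000), ("b", "notes.txt", 512), ("c", "video.mp4", 9000000)],
)

# The ten largest files by size
largest = conn.execute(
    "select name, size from files order by size desc limit 10"
).fetchall()
print(largest)
```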

Installation

Install this tool using pip:

$ pip install google-drive-to-sqlite

Authentication

⚠️ This application has not yet been verified by Google - you may find you are unable to authenticate until that verification is complete. #10

First, authenticate with Google Drive using the auth command:

% google-drive-to-sqlite auth
Visit the following URL to authenticate with Google Drive

https://accounts.google.com/o/oauth2/v2/auth?...

Then return here and paste in the resulting code:
Paste code here: 

Follow the link, sign in with Google Drive and then copy and paste the resulting code back into the tool.

This will save an authentication token to a file called auth.json in the current directory.

To specify a different location for that file, use the --auth option:

google-drive-to-sqlite auth --auth ~/google-drive-auth.json

The auth command also provides options for using a different scope, Google client ID and Google client secret. You can use these to create your own custom authentication tokens that work with other Google APIs; see issue #5 for details.

Full --help:

Usage: google-drive-to-sqlite auth [OPTIONS]

  Authenticate user and save credentials

Options:
  -a, --auth FILE              Path to save token, defaults to auth.json
  --google-client-id TEXT      Custom Google client ID
  --google-client-secret TEXT  Custom Google client secret
  --scope TEXT                 Custom token scope
  --help                       Show this message and exit.

To revoke the token that is stored in auth.json, run the revoke command:

google-drive-to-sqlite revoke

Or if your token is stored in another location:

google-drive-to-sqlite revoke -a ~/google-drive-auth.json

google-drive-to-sqlite files

To retrieve metadata about the files in your Google Drive, or a folder or search within it, use the google-drive-to-sqlite files command.

This will default to writing details about every file in your Google Drive to a SQLite database:

google-drive-to-sqlite files files.db

Files will be written to a files table, which will be created if it does not yet exist.

If a file already exists in that table, based on a matching id, it will be replaced with fresh data.
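That replace-on-matching-id behavior is the classic SQLite upsert pattern; this sketch shows the general idea using INSERT OR REPLACE, not necessarily the tool's exact implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table files (id text primary key, name text)")

# The first fetch inserts the row...
conn.execute("insert or replace into files values (?, ?)", ("abc123", "Old name"))
# ...and a later fetch with the same id replaces it with fresh data
conn.execute("insert or replace into files values (?, ?)", ("abc123", "New name"))

rows = conn.execute("select id, name from files").fetchall()
print(rows)  # a single row remains, carrying the updated name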

Instead of writing to SQLite you can use --json to output as JSON, or --nl to output as newline-delimited JSON:

google-drive-to-sqlite files --nl

Use --folder ID to retrieve everything in a specified folder and its sub-folders:

google-drive-to-sqlite files files.db --folder 1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i

Use -q QUERY to specify a custom search query:

google-drive-to-sqlite files files.db -q 'starred = true'

Use --full-text TEXT to search for files where the full text matches a search term:

google-drive-to-sqlite files files.db --full-text 'datasette'

Use --stop-after X to stop after retrieving X files.

Full --help:

Usage: google-drive-to-sqlite files [OPTIONS] [DATABASE]

  Retrieve metadata for files in Google Drive, and write to a SQLite database or
  output as JSON.

      google-drive-to-sqlite files files.db

  Use --json to output JSON, --nl for newline-delimited JSON:

      google-drive-to-sqlite files files.db --json

  Use a folder ID to recursively fetch every file in that folder and its sub-
  folders:

      google-drive-to-sqlite files files.db --folder
      1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i

Options:
  -a, --auth FILE       Path to auth.json token file
  --folder TEXT         Files in this folder ID and its sub-folders
  -q TEXT               Files matching this query
  --full-text TEXT      Search for files with text match
  --json                Output JSON rather than write to DB
  --nl                  Output newline-delimited JSON rather than write to DB
  --stop-after INTEGER  Stop paginating after X results
  --help                Show this message and exit.

google-drive-to-sqlite download FILE_ID

The download command can be used to download files from Google Drive.

You'll need one or more file IDs, which look something like 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB.

To download the file, run this:

google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB

This will detect the content type of the file and use it as the file extension - so if the file is a JPEG it will be downloaded as:

0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB.jpeg
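The content-type-to-extension step can be pictured with Python's standard mimetypes module; this is a sketch of the general technique, not necessarily this tool's exact logic (note mimetypes may pick a different spelling, e.g. .jpg rather than .jpeg):

```python
import mimetypes

def filename_for(file_id, content_type):
    # Map a Content-Type header such as "application/pdf" to a file extension
    ext = mimetypes.guess_extension(content_type) or ""
    return file_id + ext

name = filename_for("0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB", "application/pdf")
print(name)
```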

You can pass multiple file IDs to the command at once.

To hide the progress bar and filename output, use -s or --silent.

If you are downloading a single file you can use the -o option to specify a filename and location:

google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB \
  -o my-image.jpeg

Use -o - to write the file contents to standard output:

google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB \
  -o - > my-image.jpeg

Full --help:

Usage: google-drive-to-sqlite download [OPTIONS] FILE_IDS...

  Download one or more file IDs to disk

Options:
  -a, --auth FILE    Path to auth.json token file
  -o, --output FILE  File to write to, or - for standard output
  -s, --silent       Hide progress bar and filename
  --help             Show this message and exit.

google-drive-to-sqlite get URL

The get command makes authenticated requests to the specified URL, using credentials derived from the auth.json file.

For example:

% google-drive-to-sqlite get 'https://www.googleapis.com/drive/v3/about?fields=*'
{
    "kind": "drive#about",
    "user": {
        "kind": "drive#user",
        "displayName": "Simon Willison",
# ...

If the resource you are fetching supports pagination you can use --paginate key to paginate through all of the items in the list at that key. For example, the following API response has a nextPageToken key and a files list, suggesting it supports pagination:

% google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files
{
    "kind": "drive#fileList",
    "nextPageToken": "~!!~AI9...wogHHYlc=",
    "incompleteSearch": false,
    "files": [
        {
            "kind": "drive#file",
            "id": "1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg",
            "name": "Title of a spreadsheet",
            "mimeType": "application/vnd.google-apps.spreadsheet"
        },

To paginate through everything in the files list you would use --paginate files like this:

% google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files --paginate files
[
  {
    "kind": "drive#file",
    "id": "1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg",
    "name": "Title of a spreadsheet",
    "mimeType": "application/vnd.google-apps.spreadsheet"
  },
  # ...

Add --nl to stream paginated data as newline-delimited JSON:

% google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files --paginate files --nl
{"kind": "drive#file", "id": "1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg", "name": "Title of a spreadsheet", "mimeType": "application/vnd.google-apps.spreadsheet"}
{"kind": "drive#file", "id": "1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i", "name": "Subfolder", "mimeType": "application/vnd.google-apps.folder"}

Add --stop-after 5 to stop after 5 records - useful for testing.
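The pagination pattern behind --paginate can be sketched as a loop that re-requests with the returned token until nextPageToken disappears - illustrative only, with a fake fetch function standing in for the real HTTP call:

```python
def paginate(fetch, key):
    # fetch(page_token) returns a dict shaped like a Drive API response:
    # {"files": [...], "nextPageToken": "..."} - the token is absent on the last page
    token = None
    while True:
        page = fetch(token)
        yield from page[key]
        token = page.get("nextPageToken")
        if token is None:
            break

# A fake two-page API for demonstration
pages = {
    None: {"files": [{"id": "1"}, {"id": "2"}], "nextPageToken": "t1"},
    "t1": {"files": [{"id": "3"}]},
}
results = list(paginate(lambda token: pages[token], "files"))
print(results)  # -> [{'id': '1'}, {'id': '2'}, {'id': '3'}]
```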

Full --help:

Usage: google-drive-to-sqlite get [OPTIONS] URL

  Make an authenticated HTTP GET to the specified URL

Options:
  -a, --auth FILE       Path to auth.json token file
  --paginate TEXT       Paginate through all results in this key
  --nl                  Output paginated data as newline-delimited JSON
  --stop-after INTEGER  Stop paginating after X results
  --help                Show this message and exit.

Privacy policy

This tool requests access to your Google Drive account in order to retrieve metadata about your files there. It also offers a feature that can download the content of those files.

The credentials used to access your account are stored in the auth.json file on your computer. The metadata and content retrieved from Google Drive is also stored only on your own personal computer.

At no point do the developers of this tool gain access to any of your data.

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd google-drive-to-sqlite
python -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
Comments
  • google-drive-to-sqlite initial release

    google-drive-to-sqlite initial release

    Basic design:

    google-drive-to-sqlite files google.db FOLDER_ID
    

    Looks for auth.json with credentials in the current directory - it wants a refresh_token or an access_token of some sort.

    Features for the initial release:

    • [x] #2
    • [x] #3
    • [x] #7
    • [x] #5
    design 
    opened by simonw 13
  • Retry once (or more?) on any TransportError

    Retry once (or more?) on any TransportError

    Got this exception:

      File "/Users/simon/Dropbox/Development/google-drive-to-sqlite/google_drive_to_sqlite/utils.py", line 79, in get
        response = httpx.get(url, params=params, headers=headers, timeout=self.timeout)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_api.py", line 189, in get
        return request(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_api.py", line 100, in request
        return client.request(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 802, in request
        return self.send(request, auth=auth, follow_redirects=follow_redirects)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 889, in send
        response = self._send_handling_auth(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 917, in _send_handling_auth
        response = self._send_handling_redirects(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 954, in _send_handling_redirects
        response = self._send_single_request(request)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 990, in _send_single_request
        response = transport.handle_request(request)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_transports/default.py", line 217, in handle_request
        with map_httpcore_exceptions():
      File "/Users/simon/.pyenv/versions/3.10.0/lib/python3.10/contextlib.py", line 153, in __exit__
        self.gen.throw(typ, value, traceback)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
        raise mapped_exc(message) from exc
    httpx.RemoteProtocolError: Server disconnected without sending a response.
    

    Would be good to retry once if this happens.
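A minimal retry wrapper along the lines suggested here might look like this - a generic sketch with a stand-in flaky function, not the code the tool eventually shipped (the 0.4 release retries up to twice with a 2 second delay):

```python
import time

def with_retries(func, retries=2, delay=2, exceptions=(Exception,)):
    # Call func(); on one of the given exceptions, wait and retry,
    # up to `retries` extra attempts, then re-raise.
    for attempt in range(retries + 1):
        try:
            return func()
        except exceptions:
            if attempt == retries:
                raise
            time.sleep(delay)

# Demo: a flaky function that fails once, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("Server disconnected without sending a response")
    return "ok"

result = with_retries(flaky, delay=0, exceptions=(ConnectionError,))
print(result)  # -> ok
```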

    bug 
    opened by simonw 12
  • `google-drive-to-sqlite download FILE_ID` command

    `google-drive-to-sqlite download FILE_ID` command

    Here's the recipe that worked for retrieving the binary contents of a file - the trick is the alt=media parameter:

    import httpx

    def get_binary(file_id, access_token):
        # alt=media returns the file's contents rather than its metadata
        return httpx.get(
            "https://www.googleapis.com/drive/v3/files/{}?alt=media".format(file_id),
            headers={"Authorization": "Bearer {}".format(access_token)},
        ).content
    

    Originally posted by @simonw in https://github.com/simonw/google-drive-to-sqlite/issues/1#issuecomment-1041026849

    enhancement 
    opened by simonw 11
  • Add `auth` options for setting client ID and secret and scope

    Add `auth` options for setting client ID and secret and scope

    If auth could optionally take an alternative client ID and secret and scope (and write them to auth.json) then the get command could be used by power users (mainly me) to explore other Google APIs.

    • #3
    enhancement 
    opened by simonw 9
  • `google-drive-to-sqlite auth` command

    `google-drive-to-sqlite auth` command

    Authentication will be tricky. For the moment I'll go with the simplest thing possible, but I may need to build a google-drive-to-sqlite auth command just to get things up and running.

    Originally posted by @simonw in https://github.com/simonw/google-drive-to-sqlite/issues/1#issuecomment-1041023730

    research 
    opened by simonw 9
  • Schema change: a file/drive can only have one owner

    Schema change: a file/drive can only have one owner

    The numbers in this screenshot seem to indicate that each file and folder have a single owner only - even though the JSON would suggest the possibility for multiple owners:


    So I may be able to get rid of those many-to-many tables entirely.

    enhancement 
    opened by simonw 6
  • `files` options for listing just Google Docs documents

    `files` options for listing just Google Docs documents

    Would be neat to have options that can return:

    • All Google Docs documents of any type
    • Just docs
    • Just sheets
    • Just presentations
    • Maybe just drawings?

    Related:

    • #21
    enhancement blocked 
    opened by simonw 5
  • Invalid Credentials error if `access_token` expires

    Invalid Credentials error if `access_token` expires

    I left a files job running which took over an hour, and came back to:

google-drive-to-sqlite files big_folder.db --folder 1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i
    

    google_drive_to_sqlite.utils.FilesError: {'error': {'errors': [{'domain': 'global', 'reason': 'authError', 'message': 'Invalid Credentials', 'locationType': 'header', 'location': 'Authorization'}], 'code': 401, 'message': 'Invalid Credentials'}}

    It looks like it took longer than an hour and the access_token expired!

    Since we have a refresh_token it would be feasible to catch this and try again with a new token.
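The catch-and-refresh idea (which the 0.2a0 release implemented) can be sketched like this, with stand-in request and refresh functions in place of the real HTTP calls:

```python
def get_with_refresh(request, refresh):
    # request(token) returns (status, body); refresh() returns a new access token.
    # On a 401, mint a fresh token from the refresh_token and retry once.
    token = "expired-token"
    status, body = request(token)
    if status == 401:
        token = refresh()
        status, body = request(token)
    return status, body

# Demo with stand-in functions
def fake_request(token):
    return (200, "data") if token == "fresh-token" else (401, "Invalid Credentials")

status, body = get_with_refresh(fake_request, lambda: "fresh-token")
print(status, body)  # -> 200 data
```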

    bug 
    opened by simonw 5
  • httpx TimeoutError

    httpx TimeoutError

    Got this exception while testing code from #7:

      File "/Users/simon/Dropbox/Development/google-drive-to-sqlite/google_drive_to_sqlite/utils.py", line 25, in paginate_files
        data = httpx.get(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_api.py", line 189, in get
        return request(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_api.py", line 100, in request
        return client.request(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 802, in request
        return self.send(request, auth=auth, follow_redirects=follow_redirects)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 889, in send
        response = self._send_handling_auth(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 917, in _send_handling_auth
        response = self._send_handling_redirects(
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 954, in _send_handling_redirects
        response = self._send_single_request(request)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_client.py", line 990, in _send_single_request
        response = transport.handle_request(request)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_transports/default.py", line 217, in handle_request
        with map_httpcore_exceptions():
      File "/Users/simon/.pyenv/versions/3.10.0/lib/python3.10/contextlib.py", line 153, in __exit__
        self.gen.throw(typ, value, traceback)
      File "/Users/simon/.local/share/virtualenvs/google-drive-to-sqlite-Wr1nXkpK/lib/python3.10/site-packages/httpx/_transports/default.py", line 77, in map_httpcore_exceptions
        raise mapped_exc(message) from exc
    httpx.ReadTimeout: The read operation timed out
    
    bug 
    opened by simonw 5
  • `google-drive-to-sqlite files` command

    `google-drive-to-sqlite files` command

    I'm going to have this:

    google-drive-to-sqlite files google.db
    

    Retrieve ALL files in the drive.

    google-drive-to-sqlite files google.db --folder FOLDER_ID
    

    Will do just the files in that folder - using ?q= and "folder_id" in parents - but applied recursively to all of the sub-folders.

    google-drive-to-sqlite files google.db --q search_term
    

    Will allow advanced search terms (passed to ?q=).

    Originally posted by @simonw in https://github.com/simonw/google-drive-to-sqlite/issues/1#issuecomment-1041172083

    enhancement 
    opened by simonw 5
  • `drive_users` table fills up with null rows

    `drive_users` table fills up with null rows

    Something went very wrong here:


    88 rows where permissionId is not null, 14,012 rows where permissionId is null.

    Originally posted by @simonw in https://github.com/simonw/google-drive-to-sqlite/issues/18#issuecomment-1046167012

    bug 
    opened by simonw 4
  • Support authentication using service account keys

    Support authentication using service account keys

Service account keys take the form of a JSON file on disk containing a private key.

    It's possible, albeit non-obvious, to make calls to the Google Drive API using these keys.

    They might represent a better authentication mechanism for many use-cases. See also:

    • #39
    enhancement 
    opened by simonw 4
  • OOB auth flow is scheduled for deprecation

    OOB auth flow is scheduled for deprecation

    Just learned about this: https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html?m=1#disallowed-oo

    OAuth out-of-band (OOB) is a legacy flow developed to support native clients which do not have a redirect URI like web apps to accept the credentials after a user approves an OAuth consent request. The OOB flow poses a remote phishing risk and clients must migrate to an alternative method to protect against this vulnerability. New clients will be unable to use this flow starting on Feb 28, 2022.

    ...

    • Feb 28, 2022 - new OAuth usage will be blocked for the OOB flow
    • Sep 5, 2022 - a user-facing warning message may be displayed to non-compliant OAuth requests
    • Oct 3, 2022 - the OOB flow is deprecated for existing clients

    From a comment on Hacker News: https://news.ycombinator.com/item?id=30417735

    I'm using that flow here: https://github.com/simonw/google-drive-to-sqlite/blob/1215097786c0ecdb12a766c9f2c8e53b2b0cd0f9/google_drive_to_sqlite/cli.py#L56-L65

    research 
    opened by simonw 8
  • `--sql` option for `download` and `export` commands

    `--sql` option for `download` and `export` commands

    twitter-to-sqlite has this: https://datasette.io/tools/twitter-to-sqlite#user-content-providing-input-from-a-sql-query-with---sql-and---attach

    $ twitter-to-sqlite users-lookup my.db \
        --sql="select followed_id from following where followed_id not in (
            select id from users)" --ids
    

    Some kind of equivalent for the download and export commands would be neat.

    enhancement research 
    opened by simonw 3
  • Can I use the Drive Activity API to find files recently added to a nested folder?

    Can I use the Drive Activity API to find files recently added to a nested folder?

https://developers.google.com/drive/activity/v2/reference/rest/v2/activity/query looks like it might be able to solve the problem "show me files that were added to this deeply nested folder structure since the last time I looked."

    research 
    opened by simonw 0
  • Mechanism for creating a table based on csv export from a Google Sheet

    Mechanism for creating a table based on csv export from a Google Sheet

    Related to:

    • #28

    Might not be feasible, depending on how good the CSV export is especially for sheets that don't have any headers. But could be fun to explore!

    research 
    opened by simonw 0
Releases (0.4)
  • 0.4 (Feb 20, 2022)

    • Redesigned the schema to reflect that Google Drive files can only have one, not multiple owners. #34
      • Removed drive_folders_owners and drive_files_owners tables.
      • drive_files and drive_folders now have a _owner column that is a foreign key to drive_users.
    • New google-drive-to-sqlite export format file_id command for exporting Google Docs files as different formats such as pdf. #21
    • New options to files: --apps for all Google Docs files of all types, or --docs, --sheets, --presentations and --drawings for Google Docs files of specific types.
    • google-drive-to-sqlite files and get commands now have a --verbose option showing what the tool is doing in detail.
    • Automatically retries up to twice on HTTP transport errors, with a 2 second delay. #18
    • The auth.json file now defaults to 0600 permissions, meaning only the user can read that file on a shared system. #37
  • 0.3 (Feb 19, 2022)

    • New design for the database schema: #9
      • drive_files contains files, with a _parent foreign key to the parent folder
      • drive_folders contains folders, with a self-referential _parent foreign key
      • drive_users contains users who may own or have modified files
      • drive_folders_owners relates folders to their owners
      • drive_files_owners relates files to their owners
      • The full schema is now included in the documentation.
    • New --import-json and --import-nl options to the files command for creating a database using JSON data retrieved previously using the --nl and --json options. #20
    • --starred, --trashed and --shared-with-me options for files as shortcuts for constructing more advanced search queries. #25
    • Documentation now has a quickstart section. #24
  • 0.2a0 (Feb 18, 2022)

    • New google-drive-to-sqlite revoke command, for revoking stored credentials. #16
    • If an hour long access_token expires while a command is running, the failed request is retried with a new token automatically generated using the refresh_token. #11
  • 0.1a2 (Feb 17, 2022)

  • 0.1a1 (Feb 17, 2022)

  • 0.1a0 (Feb 16, 2022)

    This initial alpha release may not work for anyone other than me, as Google still need to verify my OAuth application.

    • google-drive-to-sqlite auth command for authenticating with Google and storing the resulting credentials. #2
    • google-drive-to-sqlite files command for fetching file metadata from Google Drive and storing it in a SQLite database file, or outputting it as JSON. #7
    • google-drive-to-sqlite get command for executing other authenticated API calls against Google APIs. #3