A production-ready, scalable Indexer for the Jina neural search framework, based on HNSW and PSQL

Overview

🌟 HNSW + PostgreSQL Indexer

HNSWPostgresIndexer is a production-ready, scalable Indexer for the Jina neural search framework.

It combines the reliability of PostgreSQL with the speed and efficiency of the HNSWlib nearest neighbor library.

It thus provides all the CRUD operations expected of a database system, while also offering fast and reliable vector lookup.

Requires a running PostgreSQL database service. For quick testing, you can run a containerized version locally with:

docker run -e POSTGRES_PASSWORD=123456 -p 127.0.0.1:5432:5432/tcp postgres:13.2
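
The executor then needs to be pointed at this database. Below is a minimal sketch; the connection argument names (hostname, port, username, password, database) are assumptions — verify them against the HNSWPostgresIndexer constructor before use:

from jina import Flow

# NOTE: the connection argument names below are assumed, not confirmed by this README;
# check the executor's signature for the exact names it accepts
f = Flow().add(
    uses='jinahub://HNSWPostgresIndexer',
    uses_with={
        'hostname': '127.0.0.1',
        'port': 5432,
        'username': 'postgres',
        'password': '123456',
        'database': 'postgres',
    },
)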

Syncing between PSQL and HNSW

By default, all data is stored in a PSQL database (as defined in the arguments). To add this data to / build a HNSW index, you need to manually call the /sync endpoint. This iterates through the data stored in PSQL and adds it to the HNSW index. By default, the sync is incremental, on top of whatever data the HNSW index already holds. If you want to completely rebuild the index, pass the rebuild parameter, like so:

flow.post(on='/sync', parameters={'rebuild': True})
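
Putting the pieces together, a typical workflow indexes Documents into PSQL first and then calls /sync to make them searchable. A minimal sketch, assuming 128-dimensional embeddings (match whatever dimensionality your indexer is configured for) and default connection settings:

import numpy as np
from jina import Document, DocumentArray, Flow

f = Flow().add(uses='jinahub://HNSWPostgresIndexer')

# toy Documents with random embeddings; 128 dims is an assumption
docs = DocumentArray(
    Document(id=str(i), embedding=np.random.rand(128).astype('float32'))
    for i in range(1000)
)

with f:
    f.post(on='/index', inputs=docs)                  # stores the Documents in PSQL only
    f.post(on='/sync')                                # incrementally adds them to HNSW
    f.post(on='/sync', parameters={'rebuild': True})  # or rebuilds the HNSW index from scratch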

At start-up time, the data from PSQL is synced into HNSW automatically. You can disable this with:

Flow().add(
    uses='jinahub://HNSWPostgresIndexer',
    uses_with={'startup_sync': False}
)

Automatic background syncing

⚠ WARNING: Experimental feature

Optionally, you can enable automatic background syncing of the data into HNSW. This creates a thread that runs alongside the main operations and regularly performs the synchronization. Enable it with the sync_interval constructor argument, like so:

Flow().add(
    uses='jinahub://HNSWPostgresIndexer',
    uses_with={'sync_interval': 5}
)

The sync_interval argument accepts an integer number of seconds to wait between synchronization attempts; adjust it to the amount of data you store. For the duration of a background sync, the HNSW index is locked to avoid an invalid state, so searches are queued. Conversely, while a search is being served with sync_interval enabled, the index is also locked, so syncing is queued.

CRUD operations

You can perform all the usual operations on the respective endpoints:

  • /index: add new data to PostgreSQL
  • /search: query the HNSW index with your Documents
  • /update: update Documents in PostgreSQL
  • /delete: delete Documents in PostgreSQL

Note: /delete only performs soft-deletion by default, so that looking up a Document's id after a search does not break. For a hard delete, add {'soft_delete': False} to parameters. After a full rebuild of the HNSW index, you can also purge soft-deleted entries by calling /cleanup, as shown in the sketch below.
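
For illustration, a minimal sketch of these endpoint calls on a Flow f configured as above (the toy Documents and their 128-dimensional embeddings are assumptions; match your own data and configured dimensionality):

import numpy as np
from jina import Document, DocumentArray

# toy Documents; the 128-dim embeddings are an assumption
docs = DocumentArray(
    Document(id=str(i), embedding=np.random.rand(128).astype('float32'))
    for i in range(10)
)

with f:
    f.post(on='/index', inputs=docs)                   # add new data to PostgreSQL
    f.post(on='/sync')                                 # sync it into the HNSW index
    f.post(on='/search', inputs=docs[:1])              # query the HNSW index
    f.post(on='/update', inputs=docs)                  # update Documents in PostgreSQL
    f.post(on='/delete', inputs=docs[:1])              # soft-delete (default)
    f.post(on='/delete', inputs=docs[1:2],
           parameters={'soft_delete': False})          # hard delete
    f.post(on='/sync', parameters={'rebuild': True})   # full rebuild of the HNSW index
    f.post(on='/cleanup')                              # then purge soft-deleted entries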

Status endpoint

You can also get information about the status of your data via the /status endpoint. This returns a Document whose tags contain the relevant information, under the following keys:

  • 'psql_docs': number of Documents stored in the PSQL database (includes entries that have been "soft-deleted")
  • 'hnsw_docs': the number of Documents indexed in the HNSW index
  • 'last_sync': the time of the last synchronization of PSQL into HNSW
  • 'pea_id': the shard number

In a sharded environment (parallel > 1) you will get one Document from each shard. Each shard reports its own 'hnsw_docs', 'last_sync' and 'pea_id', but they all report the same 'psql_docs' (the PSQL database is shared by all your shards). To get the total, sum 'hnsw_docs' across these Documents, like so:

# request the /status endpoint; each shard returns one status Document
result = f.post('/status', None, return_results=True)
result_docs = result[0].docs
# every shard reports its own 'hnsw_docs'; sum them to get the total
total_hnsw_docs = sum(d.tags['hnsw_docs'] for d in result_docs)
Comments
  • Changing how /status method returns its values to try and merge with …

    …any pre-existing tags from previous executors if any.

    A shot at addressing the issue mentioned in https://github.com/jina-ai/executor-hnsw-postgres/issues/23

    opened by louisconcentricsky 6
  • feat: performance improvements

    Closes https://github.com/jina-ai/executor-hnsw-postgres/issues/6

    Results before this PR:

    indexing 1000 takes 0 seconds (0.22s)
    rolling update 3 replicas x 2 shards takes 0 seconds (0.82s)
    search with 10 takes 0 seconds (0.23s)
    
    indexing 10000 takes 0 seconds (0.75s)
    rolling update 3 replicas x 2 shards takes 9 seconds (9.08s)
    search with 10 takes 0 seconds (0.22s)
    
    indexing 100000 takes 7 seconds (7.59s)
    rolling update 3 replicas x 2 shards takes 7 minutes and 17 seconds (437.44s)
    search with 10 takes 0 seconds (0.22s)
    
    

    RESULTS NOW

    indexing 1000 takes 0 seconds (0.44s)                                                                                   
    rolling update 3 replicas x 2 shards takes 0 seconds (0.81s)
    
    indexing 10000 takes 1 second (1.01s)                                                                                   
    rolling update 3 replicas x 2 shards takes 2 seconds (2.63s)
    
    indexing 100000 takes 8 seconds (8.10s)                                                                                 
    rolling update 3 replicas x 2 shards takes 3 minutes and 27 seconds (207.14s)
    
    

    MORE BENCHMARKING

    indexing 500000 takes 30 seconds (30.07s)    
    rolling update 3 replicas x 2 shards takes 26 minutes and 57 seconds (1617.99s)
    search with 10 takes 0 seconds (0.21s)
    
    opened by cristianmtr 3
  • Status endpoint does not allow for compositing data with other executors

    If another executor also wants to report status information via the same status endpoint, the return value of the HNSWPostgresIndexer will overwrite it.

    Using a dict-style update on the tags, or placing the status under a dedicated key, would be friendlier.

    https://github.com/jina-ai/executor-hnsw-postgres/blob/79754090665e8bb86e85ab5693fa9b8be80977ce/executor/hnswpsql.py#L322

    opened by louisconcentricsky 1
  • feat: background sync (with threads)

    Closes https://github.com/jina-ai/internal-tasks/issues/293

    Issues

    • [x] timestamp timezone difference
    • [x] psql connection pool gets exhausted
    • [x] locking resources in threaded access

    NOTE: Even if we don't merge this, the refactoring of PSQL Handler still needs to be merged, as the previous usage of Conn Pool had issues.

    opened by cristianmtr 1
  • fail to connect to PostgreSQL with docker-compose

    • start a PostgreSQL service with docker:

    docker run -e POSTGRES_PASSWORD=123456 -p 127.0.0.1:5432:5432/tcp postgres:13.2

    • build a flow with one executor: HNSWPostgresIndexer

    • run the flow locally; it works well

    • export the flow to a docker-compose YAML and run it with docker-compose, which produces an error:

    (screenshot of the connection error omitted)

    jina version info:

    
    - jina 3.3.19
    - docarray 0.12.2
    - jina-proto 0.1.8
    - jina-vcs-tag (unset)
    - protobuf 3.20.0
    - proto-backend cpp
    - grpcio 1.43.0
    - pyyaml 6.0
    - python 3.10.2
    - platform Linux
    - platform-release 4.4.0-186-generic
    - platform-version #216-Ubuntu SMP Wed Jul 1 05:34:05 UTC 2020
    - architecture x86_64
    - processor x86_64
    - uid 48710637999860
    - session-id 906abcd2-c797-11ec-b1df-2c4d544656f4
    - uptime 2022-04-29T16:37:11.758133
    - ci-vendor (unset)
    * JINA_DEFAULT_HOST (unset)
    * JINA_DEFAULT_TIMEOUT_CTRL (unset)
    * JINA_DEFAULT_WORKSPACE_BASE /home/chenhao/.jina/executor-workspace
    * JINA_DEPLOYMENT_NAME (unset)
    * JINA_DISABLE_UVLOOP (unset)
    * JINA_FULL_CLI (unset)
    * JINA_GATEWAY_IMAGE (unset)
    * JINA_GRPC_RECV_BYTES (unset)
    * JINA_GRPC_SEND_BYTES (unset)
    * JINA_HUBBLE_REGISTRY (unset)
    * JINA_HUB_CACHE_DIR (unset)
    * JINA_HUB_NO_IMAGE_REBUILD (unset)
    * JINA_HUB_ROOT (unset)
    * JINA_LOG_CONFIG (unset)
    * JINA_LOG_LEVEL (unset)
    * JINA_LOG_NO_COLOR (unset)
    * JINA_MP_START_METHOD (unset)
    * JINA_RANDOM_PORT_MAX (unset)
    * JINA_RANDOM_PORT_MIN (unset)
    * JINA_VCS_VERSION (unset)
    * JINA_CHECK_VERSION True
    
    opened by jerrychen1990 0
  • test: bug rolling update clear

    if you remove the following snippet (at L:180) from tests/integration/test_hnsw_psql.py

            if benchmark:
                f.post('/clear')
    

    the test test_benchmark_basic fails when it runs the second case, even though /clear is called at the beginning of the flow.

    Why? Yes, /clear only hits one replica, but when we restart the flow there should be completely new replicas anyway.

    opened by cristianmtr 0
  • performance(HNSWPSQL): syncing is slow

    Right now sync will be slow

    • [ ] we are iterating and doing individual updates (should batch somehow, per sync operation type - index, update, delete)
    • [x] if rebuild, the operations will always be index. We should optimize for this. Done in #5

    Numbers before any perf refactoring

    Performance

    indexing 1000 ...       indexing 1000 takes 0 seconds (0.22s)
    rolling update 3 replicas x 2 shards ... (executor logs "Using existing table", once per shard replica)
    rolling update 3 replicas x 2 shards takes 0 seconds (0.82s)
    search with 10 ...      search with 10 takes 0 seconds (0.23s)

    indexing 10000 ...      indexing 10000 takes 0 seconds (0.75s)
    rolling update 3 replicas x 2 shards ... (executor logs "Using existing table", once per shard replica)
    rolling update 3 replicas x 2 shards takes 9 seconds (9.08s)
    search with 10 ...      search with 10 takes 0 seconds (0.22s)

    indexing 100000 ...     indexing 100000 takes 7 seconds (7.59s)
    rolling update 3 replicas x 2 shards ... (executor logs "Using existing table", once per shard replica)
    rolling update 3 replicas x 2 shards takes 7 minutes and 17 seconds (437.44s)
    search with 10 ...      search with 10 takes 0 seconds (0.22s)
    
    Labels: priority/important-soon, type/maintenance
    opened by cristianmtr 0
Releases (v0.9)
  • v0.8 (Mar 8, 2022)

  • v0.7 (Feb 11, 2022)

  • v0.6 (Jan 3, 2022)

    What's Changed

    • docs: fix typo in delete endpoint and clarify by @cristianmtr in https://github.com/jina-ai/executor-hnsw-postgres/pull/14

    Full Changelog: https://github.com/jina-ai/executor-hnsw-postgres/compare/v0.5...v0.6

  • v0.5 (Dec 14, 2021)

    What's Changed

    • fix: type of trav paths by @cristianmtr in https://github.com/jina-ai/executor-hnsw-postgres/pull/13

    Full Changelog: https://github.com/jina-ai/executor-hnsw-postgres/compare/v0.4...v0.5

  • v0.4 (Dec 9, 2021)

    What's Changed

    • fix: allow using Executor in local mode by @cristianmtr in https://github.com/jina-ai/executor-hnsw-postgres/pull/12

    Full Changelog: https://github.com/jina-ai/executor-hnsw-postgres/compare/v0.3...v0.4

  • v0.3 (Nov 26, 2021)

    What's Changed

    • feat: background sync (with threads) by @cristianmtr in https://github.com/jina-ai/executor-hnsw-postgres/pull/9
    • docs: add docs on bg sync by @cristianmtr in https://github.com/jina-ai/executor-hnsw-postgres/pull/11

    Full Changelog: https://github.com/jina-ai/executor-hnsw-postgres/compare/v0.2...v0.3

  • v0.2 (Nov 22, 2021)

  • v0.1 (Nov 18, 2021)

Owner
Jina AI
A Neural Search Company. We provide the cloud-native neural search solution powered by state-of-the-art AI technology.