Simple Similarities Service

Overview

simsity

Simsity is a Super Simple Similarities Service[tm].
It's all about building a neighborhood. Literally!

This repository contains simple tools to help in similarity retrieval scenarios by providing a convenient wrapper around encoding strategies as well as nearest neighbor approaches. Typical use cases include early-stage bulk labelling and duplication discovery.

Warning

Alpha software. Expect things to break. Do not use in production.

Quickstart

This is the basic setup for this package.

import pandas as pd

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer

from simsity.service import Service
from simsity.indexer import PyNNDescentIndexer
from simsity.preprocessing import ColumnLister


# The encoder defines how we encode the data going in.
encoder = make_pipeline(
    ColumnLister(column="text"),
    CountVectorizer()
)

# The indexer handles the nearest neighbor lookup.
indexer = PyNNDescentIndexer(metric="euclidean", n_neighbors=2)

# The service combines the two into a single object.
service = Service(
    encoder=encoder,
    indexer=indexer,
)

# We can now train the service on the clinc dataset.
df_clinc = pd.read_csv("tests/data/clinc-data.csv")
service.train_from_dataf(df_clinc, features=["text"])

# Query the datapoints
service.query("give me directions", n_neighbors=20)

# Save the entire system
service.save("/tmp/simple-model")

# You can also load the model now.
reloaded = Service.load("/tmp/simple-model")

# We can also host it as a web service
reloaded.serve(host='0.0.0.0', port=8080)

# You can now POST to http://0.0.0.0:8080/query with payload:
# {"query": {"text": "hello there"}, "n_neighbors": 20}
Comments
  • Add support for pretrained encoders and transformed data

    First of all, this project looks great! I've taken an initial stab at #12 and also tried to add support for querying data that has already been transformed. If you have data that you've already transformed (e.g. a UMAP embedding), you probably don't want to run encoder.transform again. In this case you want to index the transformed data and query it directly.
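
    To illustrate the idea, here is a rough sketch of how precomputed embeddings might be indexed by pairing the Service with the Identity preprocessor so that nothing is re-encoded. The file name and column names are purely illustrative, and this assumes Identity simply passes the selected columns through unchanged.

    import pandas as pd

    from simsity.service import Service
    from simsity.indexer import PyNNDescentIndexer
    from simsity.preprocessing import Identity

    # Hypothetical setup: the embedding columns were computed elsewhere
    # (e.g. by UMAP), so they should be indexed as-is, without re-encoding.
    df_embedded = pd.read_csv("embeddings.csv")  # columns: dim_0, dim_1, ...

    service = Service(
        encoder=Identity(),
        indexer=PyNNDescentIndexer(metric="euclidean"),
    )
    service.train_from_dataf(df_embedded, features=["dim_0", "dim_1"])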

    This is just a first crack, so I'm happy to incorporate any feedback you might have!

    opened by gclen 10
  • embetter: better embeddings

    This is conceptual work in progress. The maintainer is actively researching this, please do not work on it.

    Problem Statement

    When you submit "where is my phoone" and you get similarities, you may get things like:

    • where is my phone
    • where is my credit card

    Depending on your task, either the "where is" part of the sentence or the "phone" part may be more important. The encoder, however, may be very brittle when it comes to spelling errors. So, to put it more generally:

    The similarity in an embedded space in our case is very much "general". I'm using "general" here, as opposed to "specific" to indicate that these similarities have been constructed without having a task in mind.

    Similar Issue

    Suppose that we are deduplicating records and we have a zipcode, a city, a first name, and a last name. How would our encoding understand that sharing a city is not a strong signal while sharing a first name certainly is? Can we really expect a standard encoding to understand this? Without labels ... I think not.

    opened by koaning 3
  • Add `Identity` as default encoder for Service.

    As mentioned in https://github.com/koaning/simsity/pull/13:

    I think the refit parameter should go in the Service() call. I think there should also be a parameter somewhere to avoid calling .transform() if the data has already been transformed. Do you think it is worth adding an additional parameter to Service() and keeping the indexed_from_transformed_data method?

    It's a fair remark. I think preventing a transform() call is fair, but the solution would be to have an Identity() transformer that just keeps the data as-is. This would also make a great default value for the encoder.

    Made this issue to track progress and to discuss the approach.
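
    For reference, here is a minimal sketch of what such an Identity transformer could look like, following the usual scikit-learn transformer conventions (an illustration, not the shipped implementation):

    from sklearn.base import BaseEstimator, TransformerMixin


    class Identity(BaseEstimator, TransformerMixin):
        """A no-op transformer that keeps the data as-is."""

        def fit(self, X, y=None):
            # Nothing to learn; the data passes through unchanged.
            return self

        def transform(self, X):
            return X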

    opened by koaning 2
  • Codecalm tutorial on simsity

    Hi Vincent. Since I discovered you, my barrier towards Python has eroded! Thank you. I'm a Data Scientist who wants to check whether simsity can help with retrieving similar regions based on environmental variables.

    opened by FrancyJGLisboa 2
  • Update indexer

    Hi! Are there any plans to add support for updating the indexer, i.e. adding new documents without retraining the entire pipeline? It would be a very useful feature.

    from simsity.service import Service
    
    service = Service(
        indexer=indexer,
        encoder=encoder
    )
    
    service.train_from_dataf(df, features=["text"])
    
    ....
    
    service.update(new_docs, features=["text"])  # <- this
    
    
    opened by nthomsencph 1
  • New API

    I think the original design was flawed and this project should stick more closely to the scikit-learn API.

    from sklearn.pipeline import make_pipeline, make_union

    from simsity.preprocessing import Grab
    from simsity.service import Service
    from simsity.indexer import (AnnoyIndexer, PyNNDescentIndexer, NMSlibIndexer,
                                 PineconeIndexer, QdrantIndexer, WeaviateIndexer)


    # Grab selects a column from the data; SentenceEncoder stands in for any
    # text embedding model.
    encoder = make_pipeline(
        make_union(
            make_pipeline(Grab("text"), SentenceEncoder()),
            make_pipeline(Grab("title"), SentenceEncoder())
        )
    )

    # Any of the indexers above could be plugged in here.
    indexer = AnnoyIndexer()

    service = Service(encoder, indexer, batch_size=50)
    service.index(X)
    items, dists = service.query(X, n=10)
    
    opened by koaning 0
  • Education Day Goals

    • [x] add typing + type checker
    • [x] add tests for the minhash tools
    • [ ] collect more useful datasets
    • [x] automate the benchmarking
    • [x] write getting started guides
    • [ ] record a quick demo for colleagues
    • [ ] add github actions stash
    opened by koaning 0
  • added-components

    Adding the MinHash components. This is also an amazing opportunity to:

    • [ ] add types and a type checker
    • [ ] add some standard tests for indexers
    • [ ] add a script to run some benchmarks on the clinc dataset
    opened by koaning 0
Releases (0.1.1)
Owner
vincent d warmerdam
Solving problems involving data. Mostly NLP these days. AskMeAnything[tm].