
NeuroMLR: Robust & Reliable Route Recommendation on Road Networks

This repository is the official implementation of NeuroMLR: Robust & Reliable Route Recommendation on Road Networks.

Introduction

Predicting the most likely route from a source location to a destination is a core functionality in mapping services. Although the problem has been studied in the literature, two key limitations remain to be addressed. First, a significant portion of the routes recommended by existing methods fail to reach the destination. Second, existing techniques are transductive in nature; hence, they fail to recommend routes if unseen roads are encountered at inference time. We address these limitations through an inductive algorithm called NeuroMLR. NeuroMLR learns a generative model from historical trajectories by conditioning on three explanatory factors: the current location, the destination, and real-time traffic conditions. The conditional distributions are learned through a novel combination of Lipschitz embeddings with Graph Convolutional Networks (GCN) on historical trajectories.

Requirements

Dependencies

The code has been tested with Python 3.8.10 and CUDA 10.2. We recommend that you use the same versions.

To create and activate a virtual environment using conda:

conda create -n ENV_NAME python=3.8.10
conda activate ENV_NAME

All dependencies can be installed by running the following commands:

pip install -r requirements.txt
pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.6.0+cu102.html
pip install --no-index torch-sparse -f https://pytorch-geometric.com/whl/torch-1.6.0+cu102.html
pip install --no-index torch-cluster -f https://pytorch-geometric.com/whl/torch-1.6.0+cu102.html
pip install --no-index torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.6.0+cu102.html
pip install torch-geometric
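
To quickly verify that the installation succeeded, a minimal sanity check such as the one below can be run (the printed versions depend on your environment; torch 1.6.0 with cu102 is assumed from the wheel URLs above):

# Minimal sanity check for the installed dependencies.
import torch
import torch_geometric

print("torch:", torch.__version__)                 # 1.6.0+cu102 assumed above
print("CUDA available:", torch.cuda.is_available())
print("torch_geometric:", torch_geometric.__version__)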

Data

Download the preprocessed data and unzip the downloaded .zip file.

Set the PREFIX_PATH variable in my_constants.py to the path of the extracted folder.
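
For illustration, the relevant line in my_constants.py would look something like the following (the path shown is a placeholder for wherever you extracted the data):

PREFIX_PATH = "/home/user/neuromlr_data/"  # placeholder; point this to the extracted folder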

For each city (Chengdu, Harbin, Porto, Beijing, CityIndia), there are two types of data:

1. Map-matched pickled trajectories

Stored as a Python pickled list of tuples, where each tuple is of the form (trip_id, trip, time_info). Each trip is a list of edge identifiers (see the loading sketch after the file listing below).

2. OSM map data

In the map folder, there are the following files:

  1. nodes.shp : Contains OSM node information (global node id mapped to (latitude, longitude))
  2. edges.shp : Contains network connectivity information (global edge id mapped to the corresponding node ids)
  3. graph_with_haversine.pkl : Pickled NetworkX graph corresponding to the OSM data
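
As a quick illustration of how this data can be inspected, the sketch below loads a pickled trajectory file and the NetworkX graph. The city folder and trajectory file name are placeholders (only graph_with_haversine.pkl is named above), and networkx must be installed for the graph pickle to deserialize:

import pickle

# Load the map-matched trajectories: a list of (trip_id, trip, time_info) tuples,
# where each trip is a list of edge identifiers.
with open("/path/to/data/chengdu/trajectories.pkl", "rb") as f:  # placeholder path
    trajectories = pickle.load(f)
trip_id, trip, time_info = trajectories[0]
print("first trip has", len(trip), "edges")

# Load the pickled NetworkX graph built from the OSM map data.
with open("/path/to/data/chengdu/map/graph_with_haversine.pkl", "rb") as f:  # placeholder path
    graph = pickle.load(f)
print(graph.number_of_nodes(), "nodes and", graph.number_of_edges(), "edges")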

Training

After setting PREFIX_PATH in the my_constants.py file, the training script can be run directly as follows:

python train.py -dataset beijing -gnn GCN -lipschitz 

Other functionality can be toggled by adding the corresponding arguments, for example:

python train.py -dataset DATASET -gpu_index GPU_ID -eval_frequency EVALUATION_PERIOD_IN_EPOCHS -epochs NUM_EPOCHS 
python train.py -traffic
python train.py -check_script
python train.py -cpu

Brief description of the other arguments/functionality:

Argument            Functionality
-check_script       run on a fixed subset of train_data, as a sanity test
-cpu                force computation on the CPU instead of the available GPU
-gnn                choose between a GCN and a GAT
-gnn_layers         number of layers in the graph neural network
-epochs             number of epochs to train for
-percent_data       percentage of the data used for training
-fixed_embeddings   keep the embeddings static instead of learning them as network parameters
-embedding_size     dimension of the embeddings used
-hidden_size        hidden dimension of the MLP
-traffic            toggle the attention module

For exact details about the expected format and possible inputs, please refer to the args.py and my_constants.py files.

Evaluation

The training code generates logs for evaluation. To evaluate any pretrained model, run

python eval.py -dataset DATASET -model_path MODEL_PATH

There should be two files under MODEL_PATH, namely model.pt and model_support.pkl (refer to the save_model() function defined in train.py to see how these files are produced).
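
As a rough sketch of what these files contain, they can be inspected as follows (the exact contents are determined by save_model() in train.py, so the comments below are assumptions rather than a specification):

import pickle
import torch

MODEL_PATH = "/path/to/model_dir"  # placeholder

# model.pt: serialized PyTorch model state.
state = torch.load(f"{MODEL_PATH}/model.pt", map_location="cpu")
print(type(state))

# model_support.pkl: pickled supporting objects saved alongside the model.
with open(f"{MODEL_PATH}/model_support.pkl", "rb") as f:
    support = pickle.load(f)
print(type(support))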

Pre-trained Models

You can find the pretrained models in the same zip file as the preprocessed data. To evaluate them, set PREFIX_PATH in the my_constants.py file and run

python eval.py -dataset DATASET

Results

We present the performance results of both versions of NeuroMLR across five datasets.

NeuroMLR-Greedy

Dataset     Precision (%)   Recall (%)   Reachability (%)   Reachability distance (km)
Beijing     75.6            74.5         99.1               0.01
Chengdu     86.1            83.8         99.9               0.0002
CityIndia   74.3            70.1         96.1               0.03
Harbin      59.6            48.6         99.1               0.02
Porto       77.3            70.7         99.6               0.001

NeuroMLR-Dijkstra

Since NeuroMLR-Dijkstra guarantees reachability, the reachability metrics are not relevant here.

Dataset     Precision (%)   Recall (%)
Beijing     77.9            76.5
Chengdu     86.7            84.2
CityIndia   77.9            73.1
Harbin      66.1            49.6
Porto       79.2            70.9

Contributing

If you'd like to contribute, open an issue on this GitHub repository. All contributions are welcome!
