MetaTTE: a Meta-Learning Based Travel Time Estimation Model for Multi-city Scenarios

Overview


This is the official TensorFlow implementation of MetaTTE, as described in the manuscript.

Core Requirements

  • tensorflow~=2.3.0
  • numpy~=1.18.4
  • spektral~=0.6.1
  • pandas~=1.0.3
  • tqdm~=4.46.0
  • opencv-python~=4.3.0.36
  • matplotlib~=3.2.1
  • Pillow~=7.1.2
  • scipy~=1.4.1

All dependencies can be installed using the following command:

pip install -r requirements.txt

Data Preparation

The datasets adopted in this paper are provided via Google Drive. After downloading the zip file, please extract all files from its data directory into the data folder of this project.

Download Link: Download
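
After extraction, a quick sanity check is to load one of the .npy files with NumPy. The snippet below is only a sketch for verifying the layout; the paths are taken from the sample configuration in the next section and may differ in your setup.

import numpy as np

# Paths taken from the sample config below; adjust them to your own layout.
for path in ["./data/chengdu/train.npy", "./data/porto/train.npy"]:
    arr = np.load(path, allow_pickle=True)  # allow_pickle in case trajectories are stored as object arrays
    print(path, arr.shape, arr.dtype)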

Configuration

A sample config file is listed below, with comments added for explanation. (Please do NOT include the comments in actual config files.)

[General]
mode = train
# Specify the absolute paths of the training, validation and testing files
train_files = ./data/chengdu/train.npy,./data/porto/train.npy
val_files = ./data/chengdu/val.npy,./data/porto/val.npy
test_files = ./data/chengdu/test.npy,./data/porto/test.npy
# Specify the batch size
batch_size = 32
# Specify the GPU device number
gpu = 7
# Specify the unique label for each experiment
prefix = tte_exp_64_gru

[Model]
# Specify the inner learning rate
learning_rate = 1e-2
# Specify the reduce rate of the inner learning rate
lr_reduce = 0.5
# Specify the maximum number of training iterations
epoch = 500000
# Specify k for the inner-loop k-shot updates
inner_k = 10
# Specify the outer step size
outer_step_size = 0.1
# Specify the model according to the class name
model = MSMTTEGRUAttModel
# Specify the dataset according to the class name
dataset = MyDifferDatasetWithEmbedding
# Specify the dataloader according to the class name
dataloader = MyDataLoaderWithEmbedding


# Means and standard deviations for latitudes, longitudes and travel time (the value before each comma is for Chengdu, the value after is for Porto)
[Statistics]
lat_means = 30.651168872309235,41.16060653954797
lng_means = 104.06000501543934,-8.61946359614912
lat_stds = 0.039222931811691585,0.02315827641949562
lng_stds = 0.045337940910596744,0.029208656457667292
labels_means = 1088.0075248390972,691.2889878452086
labels_stds = 1315.707363003298,347.4765869900725
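
Since the config follows standard INI syntax, the per-city statistics can be read with Python's configparser and applied as z-score normalization. The snippet below is only an illustration of how such means and standard deviations are typically used (it is not the repository's own preprocessing code); the file path matches the training command in the next section.

import configparser

# Illustrative sketch, not the repository's preprocessing code.
cfg = configparser.ConfigParser()
cfg.read("./experiments/finetuning/64/gru.conf")

stats = cfg["Statistics"]
lat_means = [float(x) for x in stats["lat_means"].split(",")]      # [Chengdu, Porto]
lat_stds = [float(x) for x in stats["lat_stds"].split(",")]
label_means = [float(x) for x in stats["labels_means"].split(",")]
label_stds = [float(x) for x in stats["labels_stds"].split(",")]

city = 0  # 0 = Chengdu, 1 = Porto
lat_norm = (30.66 - lat_means[city]) / lat_stds[city]              # normalize a raw latitude
pred_time = 0.5 * label_stds[city] + label_means[city]             # map a normalized prediction back to travel time
print(lat_norm, pred_time)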

Model Training

Here is the command for training the model on both the Chengdu and Porto tasks.

python main.py --config=./experiments/finetuning/64/gru.conf
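
To make the roles of learning_rate, inner_k and outer_step_size concrete, below is a heavily simplified, Reptile-style sketch of one meta-update on a single task (city). It is only an illustration of how these hyperparameters usually interact in first-order meta-learning; the actual MetaTTE training loop lives in main.py and may differ.

import tensorflow as tf

def reptile_style_meta_update(model, task_batches, loss_fn,
                              learning_rate=1e-2, inner_k=10, outer_step_size=0.1):
    # Illustration only -- NOT the repository's actual training loop.
    initial_weights = [w.numpy() for w in model.trainable_variables]
    inner_opt = tf.keras.optimizers.SGD(learning_rate)

    # Inner loop: adapt to one task (one city) for inner_k gradient steps.
    for step, (x, y) in enumerate(task_batches):
        if step >= inner_k:
            break
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, model.trainable_variables)
        inner_opt.apply_gradients(zip(grads, model.trainable_variables))

    # Outer step: move the pre-adaptation weights toward the adapted weights.
    for w, w0 in zip(model.trainable_variables, initial_weights):
        w.assign(w0 + outer_step_size * (w.numpy() - w0))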

Model Evaluation

Here is the command for evaluating the model on both the Chengdu and Porto tasks.

python main.py --config=./experiments/finetuning/64/gru.conf
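
Note that the same config file is used for training and evaluation. Judging from the mode field in the [General] section of the sample above, switching to evaluation presumably amounts to changing that field, e.g.:

[General]
mode = test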

Citation

We currently do not provide a citation.

Owner

morningstarwang: Research assistant at ICT, Ph.D. candidate at BUPT, Consultant at HBY, and Advisor at Path Academics.