
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions and power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data are uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data are then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates data mart tables. An Airflow instance can be deployed on a Google Compute Engine instance or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; scalable, durable object storage for all raw files.
  • Google BigQuery: acts as the database engine for the data warehouse, the data marts and the ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing load commands and SQL queries to BigQuery for the ETL process. Airflow does not process any data itself, which allows the architecture to scale (a minimal sketch follows this list).
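
A minimal sketch of how these stages chain together in an Airflow DAG is shown below; the task names are illustrative placeholders, not the project's actual code.

# Sketch only: GCS -> BigQuery staging -> data mart, orchestrated by Airflow.
# Task names are placeholders; the real tasks are described in the Data pipeline section.
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator

with DAG(
    dag_id='architecture_sketch',
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    start_pipeline = DummyOperator(task_id='start_pipeline')
    load_staging_tables = DummyOperator(task_id='load_gcs_to_bigquery_staging')
    create_data_marts = DummyOperator(task_id='run_bigquery_etl_to_data_marts')
    finish_pipeline = DummyOperator(task_id='finish_pipeline')

    start_pipeline >> load_staging_tables >> create_data_marts >> finish_pipeline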

Data Model

The database is designed following the star-schema principle, with one fact table and seven dimension tables; an example query against this model follows the table list below.

[image: star-schema data model diagram]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains date dimension attributes for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
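
As an example of how this model answers business questions, the sketch below counts arrivals by state and visa type by joining the fact table to D_STATE with the BigQuery Python client. The column names (visa_type, state_id, arrival_date) and the date range are assumptions for illustration.

# Sketch: a star-schema query joining F_IMMIGRATION_DATA to D_STATE.
# Column names and the date range are assumed for illustration only.
from google.cloud import bigquery

client = bigquery.Client(project='cloud-data-lake')

query = """
    SELECT s.state_name,
           f.visa_type,
           COUNT(*) AS arrivals
    FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS f
    JOIN `cloud-data-lake.IMMIGRATION_DWH.D_STATE` AS s
      ON f.state_id = s.state_id
    WHERE f.arrival_date BETWEEN '2016-04-01' AND '2016-04-30'
    GROUP BY s.state_name, f.visa_type
    ORDER BY arrivals DESC
"""

for row in client.query(query).result():
    print(row.state_name, row.visa_type, row.arrivals)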

Data pipeline

This project uses Airflow for orchestration.

[image: Airflow DAG graph of the pipeline]

A DummyOperator named start_pipeline kicks off the pipeline, followed by 4 load operations. These operations load data from the GCS bucket into BigQuery staging tables. The immigration data is loaded as parquet files while the others are CSV formatted. Row-count checks run after each load to BigQuery.
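
As a hedged sketch, one such load plus row-count check could look like the following; the bucket, dataset, and table names mirror the setup described in this project, but the exact task definitions are assumptions, not the project's real DAG code.

# Sketch: load the parquet immigration files from GCS into a BigQuery staging table,
# then fail the pipeline if the table ends up empty.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

with DAG('load_sketch', start_date=datetime(2021, 1, 1), schedule_interval=None, catchup=False) as dag:
    load_immigration_data = GCSToBigQueryOperator(
        task_id='load_immigration_data',
        bucket='cloud-data-lake-gcp',
        source_objects=['immigration_data/part-*.parquet'],
        destination_project_dataset_table='cloud-data-lake.IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA',
        source_format='PARQUET',
        write_disposition='WRITE_TRUNCATE',
        autodetect=True,
    )

    check_immigration_data = BigQueryCheckOperator(
        task_id='check_immigration_data',
        sql='SELECT COUNT(*) FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA`',
        use_legacy_sql=False,
    )

    load_immigration_data >> check_immigration_data

BigQueryCheckOperator fails the task when the first value returned by its query is falsy, so a zero row count stops the pipeline.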

Next, the pipeline loads 3 master data objects from the I94 data dictionary. The F_IMMIGRATION_DATA table is then created and checked to make sure there are no duplicates. The other dimension tables are also created, and the pipeline finishes.
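
A hedged sketch of the fact-table creation and duplicate check follows; the CREATE TABLE statement and the cicid id column are assumptions used for illustration, not the project's actual SQL.

# Sketch: build F_IMMIGRATION_DATA from staging, then verify no id appears twice.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryCheckOperator,
    BigQueryInsertJobOperator,
)

with DAG('etl_sketch', start_date=datetime(2021, 1, 1), schedule_interval=None, catchup=False) as dag:
    create_f_immigration_data = BigQueryInsertJobOperator(
        task_id='create_f_immigration_data',
        configuration={
            'query': {
                'query': """
                    CREATE OR REPLACE TABLE `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS
                    SELECT DISTINCT *
                    FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA`
                """,
                'useLegacySql': False,
            }
        },
    )

    check_no_duplicates = BigQueryCheckOperator(
        task_id='check_no_duplicates',
        # Passes only when no cicid (assumed id column) occurs more than once.
        sql="""
            SELECT COUNT(*) = 0
            FROM (
                SELECT cicid
                FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`
                GROUP BY cicid
                HAVING COUNT(*) > 1
            ) AS dups
        """,
        use_legacy_sql=False,
    )

    create_f_immigration_data >> check_no_duplicates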

Scenarios

Data increase by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.

Pipelines would be run at 7 AM daily. How would the dashboard be updated? Would it still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failures.
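
A hedged sketch of these settings, assuming placeholder retry values and an example email address; a Slack alert could be wired in the same way through on_failure_callback.

# Sketch: run daily at 7 AM with retries and email alerts on failure.
from datetime import datetime, timedelta

from airflow import DAG

default_args = {
    'owner': 'airflow',
    'retries': 3,                            # placeholder retry count
    'retry_delay': timedelta(minutes=5),
    'email': ['data-team@example.com'],      # placeholder address
    'email_on_failure': True,
}

dag = DAG(
    dag_id='cloud-data-lake-pipeline',
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 7 * * *',           # every day at 07:00
    catchup=False,
)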

Make it available to 100+ people

BigQuery is auto-scaling, so if 100+ people need access, it can handle that easily. If more people or services need access to the data, we can add steps to write to a NoSQL database like Datastore, Cassandra, or Bigtable, or to a horizontally scalable SQL database like Cloud Spanner.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project, but not recommended for a production system. Create a JSON key for the service account and keep it somewhere safe.

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the local environment by following the steps below:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following command to bring up the Airflow instance:

astro d start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by typing the following address into a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[image: Airflow UI showing the cloud-data-lake pipeline DAG]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[image: Airflow connection configuration screen]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and update the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

[image: BigQuery datasets and tables of the final data warehouse]

Owner
Shweta_kumawat — AI software developer and computer programmer; deep learning and computer vision researcher implementing AI products and machine learning solutions.