HW3 ― GAN, ACGAN and UDA

Overview

In this assignment, you are given datasets of human face and digit images. You will need to implement GAN and ACGAN models to generate human face images, and a DANN model to classify digit images across different domains.
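
The DANN part hinges on a gradient reversal layer placed between the shared feature extractor and the domain classifier. As a reminder of that mechanism only, here is a minimal PyTorch sketch (torch is on the allowed package list); the class and function names and the lambd coefficient are illustrative and not part of the starter code.

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is flipped and scaled; lambd itself needs no gradient.
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    # Apply this to the features fed into the domain classifier.
    return GradReverse.apply(x, lambd)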

For more details, please click this link to view the slides of HW3.

Usage

To start working on this assignment, you should clone this repository into your local machine by using the following command.

git clone https://github.com/dlcv-spring-2019/hw3-<username>.git

Note that you should replace <username> with your own GitHub username.

Dataset

In the starter code of this repository, we have provided a shell script for downloading and extracting the dataset for this assignment. For Linux users, simply use the following command.

bash ./get_dataset.sh

The shell script will automatically download the dataset and store the data in a folder called hw3_data. Note that this command by default only works on Linux. If you are using other operating systems, you should download the dataset from this link and unzip the compressed file manually.

⚠️ IMPORTANT NOTE ⚠️
You should keep a copy of the dataset only on your local machine. DO NOT upload the dataset to this remote repository. If you extract the dataset manually, be sure to place it in a folder called hw3_data under the root directory of your local repository so that it is covered by the default .gitignore file.

Evaluation

To evaluate your UDA models in Problems 3 and 4, you can run the evaluation script provided in the starter code by using the following command.

python3 hw3_eval.py $1 $2
  • $1 is the path to your predicted results (e.g. hw3_data/digits/mnistm/test_pred.csv)
  • $2 is the path to the ground truth (e.g. hw3_data/digits/mnistm/test.csv)

Note that for hw3_eval.py to work, your predicted .csv files must follow the same format as the ground-truth files provided in the dataset, as shown below.

image_name label
00000.png 4
00001.png 3
00002.png 5
... ...
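
For reference, here is a minimal sketch of writing a prediction file in that format with the standard csv module, assuming the ground-truth files are comma-separated with an image_name and a label column as shown above; the predictions mapping and the output path are illustrative placeholders.

import csv

# Hypothetical mapping from test image name to predicted digit label.
predictions = {"00000.png": 4, "00001.png": 3, "00002.png": 5}

with open("hw3_data/digits/mnistm/test_pred.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image_name", "label"])          # same header as the ground-truth files
    for name in sorted(predictions):
        writer.writerow([name, predictions[name]])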

Submission Rules

Deadline

108/05/08 (Wed.) 01:00 AM (the date is given in the ROC calendar, i.e. May 8, 2019)

Late Submission Policy

You have a five-day delay quota for the whole semester. Once you have exceeded your quota, the credit for any late submission will be deducted by 30% for each day it is late.

Note that while it is possible to continue your work in this repository after the deadline, we will by default grade your last commit before the deadline specified above. If you wish to use your quota or submit an earlier version of your repository, please contact the TAs and let them know which commit to grade. For more information, please check out this post.

Academic Honesty

  • Taking any unfair advantage over other class members (or letting anyone do so) is strictly prohibited. Violating university policy will result in an F grade for this course (NOT negotiable).
  • If you refer to any parts of public code, you are required to cite the references in your report (e.g. the URL of the GitHub repository).
  • You are encouraged to discuss homework assignments with your fellow class members, but you must complete the assignment by yourself. TAs will compare the similarity of everyone's submissions. Any form of cheating or plagiarism will not be tolerated and will also result in an F grade for students with such misconduct.

Submission Format

Aside from your own Python scripts and model files, you should make sure that your submission includes at least the following files in the root directory of this repository:

  1. hw3_<studentID>.pdf
    The report of your homework assignment. Refer to the "Grading" section in the slides for what you should include in the report. Note that you should replace <studentID> with your student ID, NOT your GitHub username.
  2. hw3_p1p2.sh
    The shell script file for running your GAN and ACGAN models. This script takes as input a folder and should output two images named fig1_2.jpg and fig2_2.jpg in the given folder.
  3. hw3_p3.sh
    The shell script file for running your DANN model. This script takes as input a folder containing testing images and a string indicating the target domain, and should output the predicted results in a .csv file.
  4. hw3_p4.sh
    The shell script file for running your improved UDA model. This script takes as input a folder containing testing images and a string indicating the target domain, and should output the predicted results in a .csv file.

We will run your code in the following manner:

bash ./hw3_p1p2.sh $1
bash ./hw3_p3.sh $2 $3 $4
bash ./hw3_p4.sh $2 $3 $4
  • $1 is the folder to which you should output your fig1_2.jpg and fig2_2.jpg.
  • $2 is the directory of testing images in the target domain (e.g. hw3_data/digits/mnistm/test).
  • $3 is a string that indicates the name of the target domain, which will be either mnistm, usps or svhn.
    • Note that you should run the model whose target domain corresponds with $3. For example, when $3 is mnistm, you should make your prediction using your "USPS→MNIST-M" model, NOT your "MNIST-M→SVHN" model.
  • $4 is the path to your output prediction file (e.g. hw3_data/digits/mnistm/test_pred.csv).
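
As an illustration of how those arguments reach your Python code, below is a rough skeleton of a prediction script that hw3_p3.sh (or hw3_p4.sh) might forward its arguments to. The script name and the model-loading and prediction placeholders are hypothetical and depend entirely on your own implementation.

# Hypothetical script invoked from hw3_p3.sh as: python3 predict_dann.py "$1" "$2" "$3"
import csv
import os
import sys

def main():
    img_dir, target_domain, out_csv = sys.argv[1], sys.argv[2], sys.argv[3]

    # Pick the checkpoint whose *target* domain matches the given string
    # (e.g. the USPS→MNIST-M model when target_domain == "mnistm").
    # model = load_model(target_domain)            # placeholder, implementation-specific

    names = sorted(n for n in os.listdir(img_dir) if n.endswith(".png"))
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image_name", "label"])
        for name in names:
            # label = predict(model, os.path.join(img_dir, name))   # placeholder
            label = 0
            writer.writerow([name, label])

if __name__ == "__main__":
    main()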

🆕 NOTE
For the sake of consistency, please use the python3 command to call your .py files in all your shell scripts. Do not use python or other aliases; otherwise your commands may fail in our autograding scripts.

Packages

Below is a list of packages you are allowed to import in this assignment:

python: 3.5+
tensorflow: 1.13
keras: 2.2+
torch: 1.0
h5py: 2.9.0
numpy: 1.16.2
pandas: 0.24.0
torchvision: 0.2.2
cv2, matplotlib, skimage, Pillow, scipy
The Python Standard Library

Note that using packages with different versions will very likely lead to compatibility issues, so make sure that you install the correct version if one is specified above. E-mail or ask the TAs first if you want to import other packages.

Remarks

  • If your model is larger than GitHub’s maximum file size (100 MB), you can upload it to another cloud service (e.g. Dropbox). However, your shell scripts must be able to download the model automatically (see the sketch after this list). For a tutorial on how to do this with Dropbox, please click this link.
  • DO NOT hard code any paths in your files or scripts, and the execution time of your testing code must not exceed the allowed maximum of 10 minutes.
  • If we fail to run your code due to not following the submission rules, you will receive 0 credit for this assignment.
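
If you prefer to keep the download logic in Python rather than in the shell script itself, a stdlib-only sketch could look like the following; the URL and file name are placeholders (a Dropbox share link ending in ?dl=1 usually serves the raw file), and urllib is part of the Python Standard Library, which is allowed.

import os
from urllib.request import urlretrieve

MODEL_URL = "https://www.dropbox.com/s/xxxxxxxx/model.pth?dl=1"   # placeholder link
MODEL_PATH = "model.pth"

def ensure_model():
    # Download the checkpoint on first run so no manual setup is needed on the grader's machine.
    if not os.path.exists(MODEL_PATH):
        urlretrieve(MODEL_URL, MODEL_PATH)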

Q&A

If you have any problems related to HW3, you may contact the TAs by e-mail.
