[Open Source]. The improved version of AnimeGAN. Landscape photos/videos to anime

Overview

AnimeGANv2

「Open Source」. The improved version of AnimeGAN.
「Project Page」 | Landscape photos/videos to anime

News
(2020.12.25) AnimeGANv3 will be released along with its paper in the spring of 2021.
(2021.02.21) The PyTorch version of AnimeGANv2 has been released; thanks to @bryandlee for his contribution.

Focus:

Anime style      | Film                            | Picture Number | Quality | Download Style Dataset
Miyazaki Hayao   | The Wind Rises                  | 1752           | 1080p   | Link
Makoto Shinkai   | Your Name & Weathering with You | 1445           | BD      |
Kon Satoshi      | Paprika                         | 1284           | BDRip   |

     Note: different styles are trained with different loss weights!

Improvements:

The improvement directions of AnimeGANv2 mainly include the following 4 points:
  1. Solve the problem of high-frequency artifacts in the generated image.

  2. Easy to train, and directly reproduces the effects shown in the paper.

  3. Further reduce the number of parameters of the generator network (generator size: 8.17 MB); the lite version has an even smaller generator model.

  4. Use new, high-quality style data, taken from BD movies as much as possible.

          AnimeGAN can be accessed from here.


Requirements

  • python 3.6
  • tensorflow-gpu
    • tensorflow-gpu 1.8.0 (ubuntu, GPU 1080Ti or Titan xp, cuda 9.0, cudnn 7.1.3)
    • tensorflow-gpu 1.15.0 (ubuntu, GPU 2080Ti, cuda 10.0.130, cudnn 7.6.0)
  • opencv
  • tqdm
  • numpy
  • glob
  • argparse

Usage

1. Download vgg19

vgg19.npy
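
If you want to verify the download, a minimal sanity check like the sketch below should work. It assumes the file is saved as vgg19_weight/vgg19.npy (the path printed by the training script); the file is a Python-2 pickled dict, hence the latin1 encoding.

import numpy as np

# Load the pre-trained VGG19 weights: a dict of layer name -> [kernel, bias].
# The path is an assumption taken from the training log (vgg19_weight/vgg19.npy).
vgg = np.load('vgg19_weight/vgg19.npy', encoding='latin1', allow_pickle=True).item()
print(len(vgg), 'layers loaded')
print(sorted(vgg.keys())[:5])   # expect names like 'conv1_1', 'conv1_2', ...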

2. Download Train/Val Photo dataset

Link

3. Do edge_smooth

python edge_smooth.py --dataset Hayao --img_size 256
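
This step produces the blurred-edge copy of the style images used by the edge-promoting adversarial loss: edges are detected, dilated, and only those regions are blurred. A minimal sketch of the idea, not the exact script; the path, thresholds, and kernel size are illustrative.

import cv2
import numpy as np

def edge_smooth(img_bgr, kernel_size=5):
    # Find edges on the grayscale image, then thicken them slightly.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    dilated = cv2.dilate(edges, np.ones((kernel_size, kernel_size), np.uint8))
    # Blur the whole image once, then copy the blurred pixels back only
    # where edges were found, leaving flat regions untouched.
    blurred = cv2.GaussianBlur(img_bgr, (kernel_size, kernel_size), 0)
    out = img_bgr.copy()
    out[dilated != 0] = blurred[dilated != 0]
    return out

# Illustrative paths only.
smoothed = edge_smooth(cv2.imread('dataset/Hayao/style/0.jpg'))
cv2.imwrite('smooth_example.jpg', smoothed)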

4. Calculate the three-channel (BGR) color difference

python data_mean.py --dataset Hayao
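
The three --data_mean values used in the next step sum to roughly zero, which is consistent with each BGR channel's mean minus the mean over all three channels, computed over the style images. A sketch under that assumption (not the exact data_mean.py):

import glob
import cv2
import numpy as np

def bgr_mean_difference(image_dir):
    per_channel = np.zeros(3, dtype=np.float64)
    paths = glob.glob(f'{image_dir}/*.jpg')
    for p in paths:
        img = cv2.imread(p).astype(np.float64)        # OpenCV loads images as BGR
        per_channel += img.reshape(-1, 3).mean(axis=0)
    per_channel /= len(paths)
    # Deviation of each channel mean from the overall mean (sums to ~0).
    return per_channel - per_channel.mean()

print(bgr_mean_difference('dataset/Hayao/style'))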

5. Train

python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --epoch 101 --init_epoch 10
For the light version: python main.py --phase train --dataset Hayao --data_mean [13.1360,-8.6698,-4.4661] --light --epoch 101 --init_epoch 10

6. Extract the weights of the generator

python get_generator_ckpt.py --checkpoint_dir ../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_2_10_1 --style_name Hayao

7. Inference

python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/HR_photo --style_name Hayao/HR_photo

8. Convert video to anime

python video2anime.py --video video/input/お花見.mp4 --checkpoint_dir checkpoint/generator_Paprika_weight
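
Under the hood this is just a per-frame loop: decode a frame, run it through the generator, re-encode. A minimal OpenCV sketch of that plumbing, where stylize(frame) is a hypothetical stand-in for a call to the trained generator:

import cv2

def convert_video(in_path, out_path, stylize):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    writer = None
    while True:
        ok, frame = cap.read()                  # BGR, uint8
        if not ok:
            break
        out = stylize(frame)                    # hypothetical generator call, returns BGR uint8
        if writer is None:
            h, w = out.shape[:2]
            fourcc = cv2.VideoWriter_fourcc(*'mp4v')
            writer = cv2.VideoWriter(out_path, fourcc, fps, (w, h))
        writer.write(out)
    cap.release()
    if writer is not None:
        writer.release()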


Results


😍 Photo to Paprika Style













😍 Photo to Hayao Style













😍 Photo to Shinkai Style











License

This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications. Permission is granted to use AnimeGANv2 provided that you agree to my license terms. For commercial use, please contact us via email so that we can help you obtain an authorization letter.

Author

Xin Chen

Comments
  • Training code coming soon?

    Training code coming soon?

    Hello Tachibana san, great work and congratulations on AnimeGANv2. I had good success converting the models and running them on Android. However, latency is still an issue: it takes about 500 ms to run a 128x128 image patch with TensorFlow Android (I tried tflite, but it strangely increases inference time). I want to modify the network architecture and optimize its performance further to make it a real-time application (under 100 ms). So, to cut a long story short, are you planning to release the training code in the near future? :)

    Thank you.

    opened by maderix 10
  • Strange G_vgg loss curve

    Strange G_vgg loss curve

    Hello, thank you for posting this great work!

    I have retrained the model with a customized dataset, the results look great but the loss curves seem strange to me.

    [image: training loss curves]

    The adversarial loss seems OK; I set the weights for D and G to 200 and 300, respectively, and the losses are approaching equilibrium.

    However, the G_vgg loss, which consists of c_loss, s_loss, color_loss, and tv_loss, reaches its minimum at around epoch 30 and then starts increasing. Looking at each individual component of G_vgg_loss, only the s_loss keeps decreasing over time; all the others start increasing after epoch 30. [image: per-component loss curves]

    Interestingly, the validation samples from epoch 100 are apparently better than the ones from epoch 30. Does anyone else experience the same?

    opened by HaozhouPang 4
  • Run on CPU instead of GPU

    Run on CPU instead of GPU

    Hi, I'm trying to run this project but have a little problem: when I start the train phase, the code only runs on the CPU, even though I installed cudatoolkit and tensorflow-gpu. Can you help me?

    2020-12-06 18:00:45.278352: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
    2020-12-06 18:00:45.302721: E tensorflow/stream_executor/cuda/cuda_driver.cc:397] failed call to cuInit: CUDA_ERROR_NO_DEVICE                                                              
    2020-12-06 18:00:45.302766: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:158] retrieving CUDA diagnostic information for host: host-name
    2020-12-06 18:00:45.302775: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:165] hostname: host-name
    2020-12-06 18:00:45.302816: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:189] libcuda reported version is: 450.66.0
    2020-12-06 18:00:45.302853: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:193] kernel reported version is: 450.66.0
    2020-12-06 18:00:45.302863: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:300] kernel version seems to match DSO: 450.66.0
    
    npy file loaded -------  vgg19_weight/vgg19.npy
    ##### Information #####
    # gan type :  lsgan
    # light :  False
    # dataset :  Hayao
    # max dataset number :  6656
    # batch_size :  12
    # epoch :  101
    # init_epoch :  10
    # training image size [H, W] :  [256, 256]
    # g_adv_weight,d_adv_weight,con_weight,sty_weight,color_weight,tv_weight :  300.0 300.0 1.5 2.5 10.0 1.0
    # init_lr,g_lr,d_lr :  0.0002 2e-05 4e-05
    # training_rate G -- D: 1 : 1
    build model finished: 0.138872s
    build model finished: 0.130571s
    build model finished: 0.120662s
    build model finished: 0.127711s
    build model finished: 0.123440s
    G:
    ---------
    Variables: name (type shape) [size]
    ---------
    generator/G_MODEL/A/Conv/weights:0 (float32_ref 7x7x3x32) [4704, bytes: 18816]
    generator/G_MODEL/A/LayerNorm/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/LayerNorm/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/A/Conv_1/weights:0 (float32_ref 3x3x32x64) [18432, bytes: 73728]
    generator/G_MODEL/A/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/Conv_2/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/A/LayerNorm_2/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/A/LayerNorm_2/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/B/Conv/weights:0 (float32_ref 3x3x64x128) [73728, bytes: 294912]
    generator/G_MODEL/B/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/B/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/B/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/C/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/r1/Conv/weights:0 (float32_ref 1x1x128x256) [32768, bytes: 131072]
    generator/G_MODEL/C/r1/LayerNorm/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/LayerNorm/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/r1/w:0 (float32_ref 3x3x256x1) [2304, bytes: 9216]
    generator/G_MODEL/C/r1/r1/bias:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/1/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/Conv_1/weights:0 (float32_ref 1x1x256x256) [65536, bytes: 262144]
    generator/G_MODEL/C/r1/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r1/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/r2/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r2/r2/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r2/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r2/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r2/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/r3/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r3/r3/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r3/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r3/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r3/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/Conv/weights:0 (float32_ref 1x1x256x512) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/LayerNorm/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/LayerNorm/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/r4/w:0 (float32_ref 3x3x512x1) [4608, bytes: 18432]
    generator/G_MODEL/C/r4/r4/bias:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/beta:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/1/gamma:0 (float32_ref 512) [512, bytes: 2048]
    generator/G_MODEL/C/r4/Conv_1/weights:0 (float32_ref 1x1x512x256) [131072, bytes: 524288]
    generator/G_MODEL/C/r4/2/beta:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/r4/2/gamma:0 (float32_ref 256) [256, bytes: 1024]
    generator/G_MODEL/C/Conv_1/weights:0 (float32_ref 3x3x256x128) [294912, bytes: 1179648]
    generator/G_MODEL/C/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/C/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/Conv_1/weights:0 (float32_ref 3x3x128x128) [147456, bytes: 589824]
    generator/G_MODEL/D/LayerNorm_1/beta:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/D/LayerNorm_1/gamma:0 (float32_ref 128) [128, bytes: 512]
    generator/G_MODEL/E/Conv/weights:0 (float32_ref 3x3x128x64) [73728, bytes: 294912]
    generator/G_MODEL/E/LayerNorm/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_1/weights:0 (float32_ref 3x3x64x64) [36864, bytes: 147456]
    generator/G_MODEL/E/LayerNorm_1/beta:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/LayerNorm_1/gamma:0 (float32_ref 64) [64, bytes: 256]
    generator/G_MODEL/E/Conv_2/weights:0 (float32_ref 7x7x64x32) [100352, bytes: 401408]
    generator/G_MODEL/E/LayerNorm_2/beta:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/E/LayerNorm_2/gamma:0 (float32_ref 32) [32, bytes: 128]
    generator/G_MODEL/out_layer/Conv/weights:0 (float32_ref 1x1x32x3) [96, bytes: 384]
    Total size of variables: 2143552
    Total bytes of variables: 8574208
     [*] Reading checkpoints...
     [*] Failed to find a checkpoint
     [!] Load failed...
    Epoch:   0 Step:     0 /   554  time: 80.342747 s init_v_loss: 592.22143555  mean_v_loss: 592.22143555
    
    opened by amirzenoozi 3
  • Add ⭐️Weights and Biases⭐️ logging

    Add ⭐️Weights and Biases⭐️ logging

    Hey @TachibanaYoshino 👋, this PR aims to add basic Weights & Biases metric logging by extending the existing codebase with minimal changes.

    The changes can be summarized as follows:

    Add three extra arguments, namely --use_wandb, --wandb_project, and --wandb_entity, which specify whether to use wandb, the name of the project to be used ("AnimeGANv2" by default), and the name of the entity to be used.
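
    A rough sketch of how such flags might look in an argparse setup (the flag names and default project come from this description; everything else is an assumption, not the PR's actual code):

    import argparse

    parser = argparse.ArgumentParser()
    # Flags proposed in this PR; defaults below are assumptions.
    parser.add_argument('--use_wandb', action='store_true', help='enable Weights & Biases logging')
    parser.add_argument('--wandb_project', type=str, default='AnimeGANv2', help='wandb project name')
    parser.add_argument('--wandb_entity', type=str, default=None, help='wandb entity (user or team)')
    args = parser.parse_args()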

    opened by SauravMaheshkar 2
  • Can I train a model by using multiple GPUs?

    Can I train a model by using multiple GPUs?

    Thank you for your awesome project. I think training with multiple GPUs would make things more efficient. Hope to get some advice from you. Thanks.

    opened by MorningStarJ 2
  • typo on the folders

    typo on the folders

    Hello Author! Your folder naming has a typo. It's kinda annoying to rename it again and again because I'm using Google Colab and tried to use the Shinkai model.

    opened by IchimakiKasura 2
  • issue saving checkpoints of model

    issue saving checkpoints of model

    Hello! When I try to train the model, I get the following error when the code tries to save the checkpoint:

    Traceback (most recent call last):
      File "main.py", line 115, in <module>
        main()
      File "main.py", line 107, in main
        gan.train()
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 302, in train
        self.save(self.checkpoint_dir, epoch)
      File "/content/drive/.shortcut-targets-by-id/1X8hfrOWE2KxmaJG4LFKH9ydVQ4BA7oyZ/cs7643-final-project/AnimeGANv2.py", line 341, in save
        self.saver.save(self.sess, os.path.join(checkpoint_dir, self.model_name + '.model'), global_step=step)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/saver.py", line 1186, in save
        save_relative_paths=self._save_relative_paths)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 231, in update_checkpoint_state_internal
        last_preserved_timestamp=last_preserved_timestamp)
      File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/training/checkpoint_management.py", line 110, in generate_checkpoint_state_proto
        model_checkpoint_path = os.path.relpath(model_checkpoint_path, save_dir)
      File "/usr/lib/python3.7/posixpath.py", line 475, in relpath
        start_list = [x for x in abspath(start).split(sep) if x]
      File "/usr/lib/python3.7/posixpath.py", line 383, in abspath
        cwd = os.getcwd()
    FileNotFoundError: [Errno 2] No such file or directory
    

    I mounted my Google Drive into Colab and am using Colab to train the model. When I check my checkpoint folder, I have two files there, but it appears that I am missing the checkpoint binary file and the .meta file. Any idea why this could be happening?

    opened by wooae 1
  • Cannot understand rgb2yuv function code

    Cannot understand rgb2yuv function code

    def rgb2yuv(rgb):
        """
        Convert RGB image into YUV https://en.wikipedia.org/wiki/YUV
        """
        rgb = (rgb + 1.0)/2.0
        return tf.image.rgb_to_yuv(rgb)
    

    tf.image.rgb_to_yuv(rgb) already does the rgb_to_yuv op, so I can't understand what this line of code means: "rgb = (rgb + 1.0)/2.0"

    opened by wan-h 1
  • What attributed to the better performance of the model compared to your earlier model?

    What attributed to the better performance of the model compared to your earlier model?

    Hi, thanks for sharing your work. What in your opinion was the key to achieving better performance compared to your earlier model (v1) and/or other models?

    I've skimmed the code of this repo but I can't figure it out.

    opened by xiankgx 1
  • How to use 512 x 512 or higher-definition pictures for training

    How to use 512 x 512 or higher-definition pictures for training

    I want to use 512 x 512 or higher-resolution images; my plan is as follows:

    1. ffmpeg extracts 1080 * 1080 pictures, then scale them to 512 x 512
    2. python edge_smooth.py --dataset xxxx --img_size 512
    3. python train.py --dataset xxxx --epoch 101 --init_epoch 10

    But I see that the pictures in train_photo under the dataset will also be used for training, so do the pictures in train_photo need to be updated to 512 x 512 as well?
    opened by mjgaga 0
  • Could you share how you get the improvements that you mentioned in the readme?

    Could you share how you get the improvements that you mentioned in the readme?

    Hi, could you share how you achieved these three improvements that you mentioned in the README?


    1. Solve the problem of high-frequency artifacts in the generated image.

    2. It is easy to train and directly achieve the effects in the paper.

    3. Further reduce the number of parameters of the generator network. (generator size: 8.17 Mb), The lite version has a smaller generator model.


    opened by kasim0226 0
  • How to train face model?

    How to train face model?

    Is the training method the same as for training on landscape photos, with only the dataset being different? As long as the human face data is aligned, plus the anime face data is aligned, is that right?

    opened by baixinping618 0