ArcaneGAN by Alex Spirin

Overview

Colab visitors

Changelog

ArcaneGAN v0.3

Videos processed by the Hugging Face video inference colab.

obama2.mp4
ryan2.mp4

Image samples

Faces were enhanced via GPEN before applying the ArcaneGAN v0.3 filter.

ArcaneGAN v0.2

The release is here.

Implementation Details

It does something, but not much at the moment.

The model is a PyTorch *.jit of a fastai v1-flavored u-net, trained on a paired dataset generated via a blended StyleGAN2. You can see the blending colab I've used here.
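
For reference, here is a minimal sketch of loading one of the released *.jit files and running it on a single image. The local file names, the ImageNet-style normalization, and the assumption that the output lands roughly in [0, 1] are my assumptions, not the author's documented pipeline; the exact preprocessing lives in the inference colab.

```python
import torch
from PIL import Image
from torchvision import transforms

# Load the released TorchScript model (assumes a CUDA GPU is available).
model = torch.jit.load('ArcaneGANv0.4.jit').eval().cuda().half()

# ImageNet-style normalization is an assumption; check the inference colab
# for the exact preprocessing used there.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open('input.jpg').convert('RGB')
x = preprocess(img).unsqueeze(0).cuda().half()

with torch.no_grad():
    out = model(x)[0]                   # assumed to be RGB in roughly [0, 1]
    out = out.float().clamp(0, 1).cpu()

transforms.ToPILImage()(out).save('arcane_out.jpg')
```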

Comments
  • How to convert the FastAI model to Pytorch JIT

    Hi,

    I trained a model with unet_learner but I can't convert it to jit.

    I run the following code: torch.jit.save(torch.jit.script(learn.model), 'jit.pt')

    Here is the error:

    UnsupportedNodeError: GeneratorExp aren't supported:
      File "/usr/local/lib/python3.7/dist-packages/fastai/callbacks/hooks.py", line 21
        "Applies `hook_func` to `module`, `input`, `output`."
        if self.detach:
            input  = (o.detach() for o in input ) if is_listy(input ) else input.detach()
                     ~ <--- HERE
            output = (o.detach() for o in output) if is_listy(output) else output.detach()
        self.stored = self.hook_func(module, input, output)

    May I know how you convert it to a jit model? Thanks

    opened by ramtiin 2
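
    A possible workaround (not from the author): fastai v1's hook code uses generator expressions that torch.jit.script can't compile, but torch.jit.trace only records the tensor operations run on an example input, so it usually side-steps this error. A minimal sketch, assuming learn is the unet_learner from the question:

    ```python
    import torch

    # Eval mode so batch norm / dropout behave deterministically while tracing.
    model = learn.model.eval()

    # Example input: match the channel count and a resolution the u-net was trained on.
    example = torch.rand(1, 3, 512, 512).to(next(model.parameters()).device)

    with torch.no_grad():
        traced = torch.jit.trace(model, example)

    traced.save('jit.pt')
    ```

    Tracing bakes in whatever control flow it sees for that example, so use an input representative of what you'll feed the exported model.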
  • Error

    Good evening. The ArcaneGAN colab for videos throws an error:

    RuntimeError: CUDA out of memory. Tried to allocate 2.80 GiB (GPU 0; 11.17 GiB total capacity; 5.74 GiB already allocated; 2.21 GiB free; 8.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

    Please help!

    opened by Zzip7 2
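
    Not an official fix, but two things usually help with this error: process the video at a lower resolution (or restart the runtime so the GPU starts empty), and set the allocator option the error message itself suggests. A minimal sketch; the exact max_split_size_mb value is an assumption:

    ```python
    import os
    import torch

    # Must be set before the first CUDA allocation, ideally in the very first cell.
    os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'

    # If a previous run already filled the GPU, releasing cached blocks can help too.
    torch.cuda.empty_cache()
    ```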
  • How do you change the style of the whole image

    Nice work! My only confusion is how you change the style of the whole image instead of just the face. Usually, StyleGAN generates aligned face images by fine-tuning the FFHQ checkpoint. How does the pix2pix model trained with these face image pairs work with the full image or frame?

    opened by zhanglonghao1992 2
  • Architecture for video

    Hi, what does the architecture look like? Is it similar to Pix2Pix? And for processing of the video, are you doing anything extra to make sure the frames are consistent?

    opened by unography 2
  • How to prevent eyes from appearing in the nose?

    Hello, I tried your model and it's amazing, but I find that in some pictures, if the nose is too big, eyes appear in the nose. Lowering 'target_face' helps, but details like the light in the eyes and the background are also lost when I lower 'target_face'. Is there a way to prevent eyes from appearing in the nose while keeping the details?

    opened by Folkfive 1
  • support arbitrary image size?

    Great work!

    The u-net prediction is cropped to the same size as the training input, e.g. 256 or 512. For an arbitrary image size (e.g. 1280*720), how do you configure the model to output the same size as the input image, as your colab does? Thank you.

    opened by foobarhe 1
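
    I can't speak for how the colab handles it, but a common way to run a u-net on arbitrary sizes is to pad height and width up to a multiple the encoder can downsample cleanly, then crop the prediction back. A sketch, assuming a downsampling factor of 32 (typical for a resnet-backed u-net):

    ```python
    import torch
    import torch.nn.functional as F

    def run_full_frame(model, x, multiple=32):
        """Pad H and W to a multiple of `multiple`, run the model, crop back."""
        _, _, h, w = x.shape
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        x_pad = F.pad(x, (0, pad_w, 0, pad_h), mode='reflect')  # (left, right, top, bottom)
        with torch.no_grad():
            y = model(x_pad)
        return y[..., :h, :w]  # crop the prediction back to the input size
    ```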
  • RuntimeError: CUDA out of memory

    Good evening. Sorry, it's me again. This error keeps appearing. Can I fix it myself, or can only you fix it? Please explain in detail.

    opened by Zzip7 1
  • about the paired datasets generated by stylegan

    How do you ensure background and expression similarity between the generated input (face) and the target (styled face)? I find that the style is too weak with less fine-tuning and the similarity is too weak with more fine-tuning; how do you solve that? Would you be willing to share the paired dataset generation code? Thanks a lot ~

    opened by Leocien 1
  • Any news for training code?

    Interesting topic... I wonder how you trained the model, especially the augmentation part. The fixed-crop limitation is a well-known problem, and I'd like to know how you handle it. :)

    opened by dongyun-kim-arch 0
  • Tuple issue

    I was trying the ArcaneGAN video colab, but I'm running into a tuple issue. I'm really excited to try the Arcane video; can you please help out?

    opened by mau021 0
  • What GPU is used for training?

    Hi,

    I want to train the Fastai u-net model. However, when I try to train the critic (learn_critic.fit_one_cycle(6, 1e-3)), I get the following error:

    CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 14.76 GiB total capacity; 9.78 GiB already allocated; 891.75 MiB free; 12.57 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

    The GPU is a Tesla T4 with 16 GB of VRAM. My batch size is 4 and the training image size is 512*512. I also tried lower values, but I'm still getting the same error.

    opened by ramtiin 2
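
    For what it's worth, the usual ways to fit this on a 16 GB T4 are a smaller batch or crop size, or mixed precision; fastai v1 learners expose the latter via to_fp16(). A minimal sketch, assuming learn_critic is the learner from the question:

    ```python
    # Mixed precision roughly halves activation memory; combine it with a
    # smaller batch size if training still does not fit.
    learn_critic = learn_critic.to_fp16()
    learn_critic.fit_one_cycle(6, 1e-3)
    ```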
  • How to make the style stronger?

    The following are the input image, my training output from paired-label supervision, and the output from your test model. I trained my model (a super-resolution model) on images from your model's outputs, but I find it difficult to change the facial features. In your results details like the eyes and face texture are changed; how do you do that? I use L1Loss (weight 1) + PerceptualLoss (weight 1) + GANLoss (weight 0.1).

    opened by xuanandsix 1
Releases(v0.4)
  • v0.4(Dec 25, 2021)

    ArcaneGAN v0.4

    Colab visitors

    The main differences are:

    • lighter styling (closer to original input)
    • sharper result
    • happier faces
    • reduced childish eyes effect
    • reduced stubble on feminine faces
    • increased temporal stability on videos
    • reduced mouth/teeth artifacts

    Image samples

    v0.3 vs v0.4

    Video samples

    https://user-images.githubusercontent.com/11751592/146966428-f4e27929-19dd-423f-a772-8aee709d2116.mp4

    https://user-images.githubusercontent.com/11751592/146966462-6511998e-77f5-4fd2-8ad9-5709bf0cd172.mp4

    Source code(tar.gz)
    Source code(zip)
    ArcaneGANv0.4.jit(59.75 MB)
  • v0.3(Dec 12, 2021)

    ArcaneGAN v0.3

    Colab

    Video samples

    This is a stronger-styled version. It performs okay on videos, though visible flickering is present. Here are some video examples.

    https://user-images.githubusercontent.com/11751592/145702737-c02b8b00-ad30-4358-98bf-97c8ad7fefdf.mp4

    https://user-images.githubusercontent.com/11751592/145702740-afd3377d-d117-467d-96ca-045e25d85ac6.mp4

    Image samples

    Faces were enhanced via GPEN before applying the ArcaneGAN v0.3 filter.

    The model is a PyTorch *.jit of a fastai v1-flavored u-net, trained on a paired dataset generated via a blended StyleGAN2. You can see the blending colab I've used here.

    Source code(tar.gz)
    Source code(zip)
    ArcaneGANv0.3.jit(79.40 MB)
  • v0.2(Dec 7, 2021)

    ArcaneGAN v0.2

    This version is a bit better at doing something other than making images darker :D

    Here are some image pairs. I've specifically picked various images to see how the model performs in the wild, not on aligned and cropped faces.

    The model is a PyTorch *.jit of a fastai v1-flavored u-net, trained on a paired dataset generated via a blended StyleGAN2. You can see the blending colab I've used here.

    Inference notebook is here

    Source code(tar.gz)
    Source code(zip)
    ArcaneGANv0.2.jit(79.52 MB)
  • v0.1(Dec 6, 2021)

    ArcaneGAN v0.1

    This is a proof of concept release. The model is in beta (which means it's beta than nothin')

    Here are some image pairs. I've specifically picked various images to see how the model performs in the wild, not on aligned and cropped faces.

    It does something, but not much at the moment.

    The model is a PyTorch *.jit of a fastai v1-flavored u-net, trained on a paired dataset generated via a blended StyleGAN2. You can see the blending colab I've used here.

    Inference notebook is here

    Source code(tar.gz)
    Source code(zip)
    ArcaneGANv0.1.jit(79.53 MB)
Owner
Alex