Lucid library adapted for PyTorch

Overview

Lucent

PyTorch + Lucid = Lucent

The wonderful Lucid library adapted for the wonderful PyTorch!

Lucent is not affiliated with Lucid or OpenAI's Clarity team, although we would love to be! Credit is due to the original Lucid authors; we merely adapted the code for PyTorch, and we take the blame for all issues and bugs found here.

Usage

Lucent is still in a pre-alpha phase and can be installed with the following command:

pip install torch-lucent

In the spirit of Lucid, get up and running with Lucent immediately, thanks to Google's Colab!

You can also clone this repository and run the notebooks locally with Jupyter.
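To work from a local clone instead, a typical setup is (a sketch, assuming an editable install from the repository root):

git clone https://github.com/greentfrapp/lucent.git
cd lucent
pip install -e .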

Quickstart

import torch

from lucent.optvis import render
from lucent.modelzoo import inceptionv1

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = inceptionv1(pretrained=True)
model.to(device).eval()

render.render_vis(model, "mixed4a:476")
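
The objective string "mixed4a:476" names a layer and a channel index. To see which layer names are available for a given model, you can list them with lucent.modelzoo.util.get_model_layers (a minimal sketch, run after the Quickstart above):

from lucent.modelzoo.util import get_model_layers

# Layer names usable in objective strings such as "mixed4a:476"
print(get_model_layers(model))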

Tutorials

Other Notebooks

Here, we have tried to recreate some of the Lucid notebooks! You can also check out the lucent-notebooks repo to clone all the notebooks.

Recommended Readings

Related Talks

Slack

Check out #proj-lucid and #circuits on the Distill slack!

Additional Information

License and Disclaimer

You may use this software under the Apache 2.0 License. See LICENSE.

Comments
  • use custom model?

    use custom model?

    Hi, I see it's possible to use models from the modelzoo; is it possible to use a custom trained model? Any documentation or direction would be appreciated.

    opened by dvschultz 5
  • Add activation grids notebook

    Add activation grids notebook

    Issue #3, reproducing activation grids (https://github.com/tensorflow/lucid/blob/master/notebooks/building-blocks/ActivationGrid.ipynb)

    It's possible to try it here: https://colab.research.google.com/drive/1pEe-KmXeDJcWQYLOHwcMubS69wVVCHLe#scrollTo=xidm-QrXvL2X

    Here are the results so far with inceptionv1 and layer mixed4d:

    • reproduced: https://imgur.com/twmizR4
    • original: https://imgur.com/eaYwEWR

    Some remarks and a question:

    • I added channel_reducer as is from the original repo
    • default transforms in transform.py produce a different size (due to random scaling) each time they are called, and resampling to 224 is then done to get a fixed size. Is that the same in Lucid? I need to debug the original repo a little to be sure, but please let me know if you already know the answer. The reason I ask is that in this specific notebook, the cells in the grid are much smaller than 224. I added an argument "fixed_image_size" to handle this specific case where we want a fixed image size (after resampling) that is not 224.
    • Since all layers are computed and with this commit we can accept smaller images, it is possible to hit an exception on higher layers because the image size is not big enough. It should be fine as long as the layer we are interested in is computed; I handled this exception.
    opened by mehdidc 5
  • ValueError in Render

    ValueError in Render

    Hi there,

    I am trying to run the tutorial and am running into the following error:

    >>> import torch
    >>> from lucent.optvis import render, param, transform, objectives
    >>> from lucent.modelzoo import inceptionv1
    >>> device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    >>> model = inceptionv1(pretrained=True)
    >>> _ = model.to(device).eval()
    >>> _ = render.render_vis(model, "mixed4a:476", show_inline=True)
    
      0%|                                                                                       | 0/512 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 113, in render_vis
        optimizer.step(closure)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
        return func(*args, **kwargs)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/optim/adam.py", line 66, in step
        loss = closure()
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 97, in closure
        model(transform_f(image_f()))
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 85, in inner
        x = transform(x)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 75, in inner
        M = kornia.get_rotation_matrix2d(center, angle, scale).to(device)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/kornia/geometry/transform/imgwarp.py", line 347, in get_rotation_matrix2d
        raise ValueError("Input scale must be a B tensor. Got {}"
    ValueError: Input scale must be a B tensor. Got torch.Size([1, 2])
    

    I am using a conda environment with Python 3.8.5 and pytorch=1.7.0.

    Any help regarding this error would be much appreciated!

    opened by tatkeller 4
  • Utils to show modulename with its repr(); Add Linear weighted activations as objective; Add pretrained GAN as parametrization

    Utils to show modulename with its repr(); Add Linear weighted activations as objective; Add pretrained GAN as parametrization

    Dear author,

    Thanks so much for implementing Lucid in PyTorch! I really enjoyed using it in my projects on leveraging deep neural networks as a way to understand real neurons in visual cortices. In my usage, I want to activate multiple channels together to match the selectivity of a biological neuron or of units in other networks. We can achieve this by adding up the original channel or neuron objectives, but that becomes very inefficient in backprop.

    So here are my 2 cents, in this commit I

    • Add linearly weighted activations of a channel, neuron, or neuron group as an objective, implemented with tensor operations.
    • Add a function in util.py to output the module names, to ease usage with custom networks.
    opened by Animadversio 4
  • Generating a batch of optimal stimuli, one for each unit in a layer

    Generating a batch of optimal stimuli, one for each unit in a layer

    Hi, I was trying to use Lucent to generate optimal stimuli for several units/neurons of a layer in parallel, so I figured I would leverage the batch processing. As illustrated in the neuron interaction tutorial notebook, I was passing a sum of objectives to the render.render_vis() function. Here is a toy example of what I want and my approach, for units [10, 20, 30] of the layer 'readout_fc':

    tot_objective = objectives.channel("readout_fc", 10, batch=0) + objectives.channel("readout_fc", 20, batch=1) + objectives.channel("readout_fc", 30, batch=2)
    param_f = lambda: param.image(135, batch=3)
    imgs = render.render_vis(model, tot_objective, param_f=param_f, preprocess=False, fixed_input_image_size=135)

    This parameter setting works beautifully when I try one unit. 😄 However, I wasn't sure if this is the correct way to approach multiple units in parallel (it does give me separate images for each unit). Also, when the number of units is larger, I was hoping to avoid writing the sum out individually or running an explicit for loop to compute the objective. I tried using reduce as below:

    neurons = [10, 20, 30]
    tot_objective = reduce(lambda x, y: x + objectives.channel("readout_fc", y[0], batch=y[1]), list(zip(neurons, np.arange(len(neurons)))), 0)

    Doing so gives me the same image 3 times, so I was wondering if there is something wrong in how I am using the objective function to generate optimal stimuli for multiple units in parallel. Thanks in advance.

    opened by arnaghosh 3
  • Q: Do you use the same architecture and weights as Clarity does?

    Q: Do you use the same architecture and weights as Clarity does?

    Hi,

    I am looking for a trainable InceptionV1 model that shares the same weights as the ones the Clarity team uses. Reading your code, I've found these lines:

    model_urls = {
        # InceptionV1 model used in Lucid examples, converted by ProGamerGov
        'inceptionv1': 'https://github.com/ProGamerGov/pytorch-old-tensorflow-models/raw/master/inception5h.pth',
    }
    

    Does it mean you're using exactly the same architecture and weights, so your render_vis function can reproduce the same pictures that Clarity has published?

    Thanks!

    opened by gergopool 3
  • Add direction and direction_neuron objectives

    Add direction and direction_neuron objectives

    Hi @greentfrapp,

    Thanks so much for making this repo! It has been a great help for me. For my use-case I needed the direction and direction_neuron objectives so I added them into lucent. I also included two demo files but let me know if they should be rolled into a docstring instead. This PR also lays the groundwork to reproduce the activation atlas notebooks from lucid. Would love to hear your thoughts :)

    opened by ndey96 3
  • Activation Grid Notebook

    Activation Grid Notebook

    Reproduce Lucid's Activation Grid Notebook with PyTorch and Lucent.

    The only new function required seems to be ChannelReducer, which doesn't rely on TensorFlow, so it should be relatively simple to port over.

    Help wanted for this!

    help wanted good first issue 
    opened by greentfrapp 3
  • get raw_activations

    get raw_activations

    Hi, thanks for this great library! I'm trying to reproduce the Activation Atlas notebook using lucent, creating grid cells in the end.

    In the notebook, the "raw activation" is available as a numpy.ndarray via "model.layers[7].activations" and is used in the subsequent dimensionality reduction section. How can I get this raw activation using lucent?

    I did create visualised images with lucent's render_vis first and then flattened them to use UMAP's fit method, but I'm not sure this is correct. Any suggestion would be appreciated.

    opened by 2nayk 2
  • Temporarily freeze kornia at 0.4.0 to prevent breaking change

    Temporarily freeze kornia at 0.4.0 to prevent breaking change

    kornia 0.4.1 was released recently and it includes a breaking change to get_rotation_matrix2d.

    I've opened an issue to hopefully address this soon: https://github.com/kornia/kornia/issues/742, but in the meantime I suggest freezing kornia at 0.4.0 so that the random_rotate transform continues working as before.
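
    Until the upstream fix lands, users can also pin the dependency manually, e.g.:

    pip install kornia==0.4.0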

    opened by ivanzvonkov 2
  • Suggestion for `lucent.optvis.render.hook_model`

    Suggestion for `lucent.optvis.render.hook_model`

    First, thanks for making this. Lifesaver. Two thoughts (Fwiw, the nested functions, higher-order functions and decorators make things a biiiiit hard to follow when debugging):

    1. I initially dun goofed and didn't eval the model (even though the very example notebook I'm using from lucent does lol). Maybe the hook_model function could check for None and tell the user to call eval(), if no saved feature maps are found?
    2. PyTorch module names usually use dot notation. Maybe use dots instead of underscores? Or just tell the user which feature map names are available and the user'll figure it out quickly enough

    Suggested replacement for this function: https://github.com/greentfrapp/lucent/blob/a2b015ce95f29460a329f750428077bcde5e4e94/lucent/optvis/render.py#L194

    def hook(layer):
        if layer == "input":
            out = image_f()
        elif layer == "labels":
            out = list(features.values())[-1].features
        else:
            assert layer in features, f"Invalid layer {layer}. Pick from one of {features.keys()}"  # suggestion 2 ish
            out = features[layer].features
        assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See Lucent notebook for example."  # suggestion 1, tell user to eval
        return out
    

    *I ran it on resnet18. Gorgeous and worked out of the box btw.

    opened by alvinwan 2
  • Code Breaks as GPU Index > 0

    Code Breaks as GPU Index > 0

    When using a GPU, this codebase only works with torch.device('cuda:0') -- the GPU index has to be 0.

    For example, if you choose torch.device('cuda:1'), then when you run the code demo

    import torch
    
    from lucent.optvis import render
    from lucent.modelzoo import inceptionv1
    
    # Let's use cuda:1
    device = torch.device("cuda:1")
    model = inceptionv1(pretrained=True)
    model.to(device).eval()
    
    render.render_vis(model, "mixed4a:476")
    

    you will see an error like

    ..........
    File .....lucent/optvis/render.py:206, in hook_model.<locals>.hook(layer)
        204     assert layer in features, f"Invalid layer {layer}. Retrieve the list of layers with `lucent.modelzoo.util.get_model_layers(model)`."
        205     out = features[layer].features
    --> 206 assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example."
        207 return out
    
    AssertionError: There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example.
    
    opened by Haoxiang-Wang 0
  • Lucent handles greyscale images in function view incorrectly

    Lucent handles greyscale images in function view incorrectly

    When rendering and visualizing greyscale images not inline, i.e., with show_inline=False, PIL throws the following error: TypeError: Cannot handle this data type: (1, 1, 1), |u1. The problem is that Lucent passes a tensor of shape [H, W, C] with C=1 and values in the range 0-255 to PIL, but for greyscale PIL can only handle two-dimensional arrays with integer values. This Stackoverflow answer provides more information.

    Solution: Lucent should check whether the shape is [H, W, C=1] and reduce it to [H, W]. Alternatively, introduce a param, e.g., greyscale=True, in the view function.
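
    A minimal sketch of the suggested check, assuming the image reaches the PIL call as a uint8 NumPy array of shape [H, W, C]:

    import numpy as np
    from PIL import Image

    def to_pil(image: np.ndarray) -> Image.Image:
        # PIL expects a 2-D array for greyscale, so drop a singleton channel axis
        if image.ndim == 3 and image.shape[-1] == 1:
            image = image[..., 0]
        return Image.fromarray(image)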

    opened by neoglez 0
  • activation grid for hierarchical custom model

    activation grid for hierarchical custom model

    Hi, is there a way to visualize the activation grid for a custom model with nested modules that are not explicitly named as attributes of the model? E.g., when I call get_model_layers(), I see the following output for this custom model: [image]

    I followed your notebook on the activation grid (https://colab.research.google.com/github/greentfrapp/lucent-notebooks/blob/master/notebooks/activation_grids.ipynb#scrollTo=BDH9cXnSuu5Q). For example, I chose layer = "net_down1_maxpool_conv" (is there some kind of syntax for specifying the layers?). I also rewrote the get_layer() helper function to parse the network's layer from the string, because that layer is not a direct attribute of the network class. But when I then try to use the rendering function, there is an error in the first line of the objective function: in lines 203-206 of render.py one of the two assertions is thrown, depending on how I choose the layer string. Can you help me with this problem? Many thanks!

    opened by An-nay-marks 1
  • Support batches for CPPN image representation

    Support batches for CPPN image representation

    When representing the optimized image with a CPPN network, the current implementation allows optimizing only a single image per run. This limitation prevents using, e.g., "diversity" objectives during optimization. This PR adds batching support for the CPPN image representation by creating a batch of networks.

    Here's an example of generating a diverse batch of 2 images for the objective "mixed4d_3x3_bottleneck_pre_relu_conv:139" of the Inception network.

    opened by shaibagon 0
  • Low GPU utilization

    Low GPU utilization

    I am trying to use Lucent to visualize deep neurons, but whatever I do the GPU seems under-utilized: examining utilization via nvidia-smi, I see low utilization (~10%) with occasional peaks at ~50%, but never above that. This happens both for the CPPN prior and for the Fourier image representation.

    Any suggestions?

    opened by shaibagon 0
Releases(v0.1.8)
Owner
Lim Swee Kiat