LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation (NeurIPS 2021 Datasets and Benchmarks Track)

Overview

LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation

by Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu, and Yanfei Zhong


This is the official implementation of LoveDA from our NeurIPS 2021 paper "LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation".

Citation

If you use LoveDA in your research, please cite our NeurIPS 2021 paper.

    @inproceedings{wang2021loveda,
        title={Love{DA}: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation},
        author={Junjue Wang and Zhuo Zheng and Ailong Ma and Xiaoyan Lu and Yanfei Zhong},
        booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
        year={2021},
        url={https://openreview.net/forum?id=bLBIbVaGDu}
    }

Dataset

Coming Soon!

Comments
  • bad cbst result

    bad cbst result

    Hello, we re-ran cbst_train with the default settings you provide, but got bad results, as shown in the figure, even worse than the source-only method. I wonder about the stability of CBST training, and I would appreciate it if you could provide the CBST training log. Thank you very much!

    bug 
    opened by Luffy03 14
  • About the accuracy of the CodaLab website

    About the accuracy of the CodaLab website

    Why is the domain adaptation mIoU on the CodaLab site so high? Shouldn't the "Oracle" mIoU reported in the paper be the upper bound for this domain adaptation task?

    question 
    opened by Hcshenziyang 6
  • Results submitted to Codalab

    Results submitted to Codalab

    The results submitted to CodaLab get a zero score and zero ExecutionTime. I wonder whether something is wrong with CodaLab or it is just my own mistake. The output class indices are 0~6 with 1024*1024 pixels.

    question 
    opened by Luffy03 6
  • Invitation to incorporate the LoveDA dataset into MMSegmentation.

    Invitation to incorporate the LoveDA dataset into MMSegmentation.

    Hi, I am a member of OpenMMLab, which develops MMSegmentation. Our vision is to provide up-to-date methods and datasets (i.e., benchmarks) for researchers and the community around the world.

    First, congratulations on the acceptance at NeurIPS'21. I think this dataset and benchmark will definitely help the remote sensing image field, where semantic segmentation plays an important role.

    Frankly speaking, right now we do not have much spare capacity. Would you like to help us incorporate your dataset into MMSegmentation? We appreciate all contributors and users; here are our contributing details.

    I think if LoveDA were provided through MMSegmentation, it could let more people use and cite this excellent work, especially those who want to establish a standard segmentation benchmark.

    Looking forward to your reply. Wish you all the best.

    Best,

    good first issue 
    opened by MengzhangLI 6
  • Potential shift in class labels

    Potential shift in class labels

    Following up on the discussion in #23, I was wondering whether, in the context of the semantic segmentation task, there could be a shift in class labels between the data on which the pretrained model hrnetw32.pth was trained and the data provided in this repo.

    Here I have visualised the true and predicted segmentations on training image 1338 for two different COLOR_MAPs from the repo (render.py and data.loveda.py).

    [Screenshots: true vs. predicted segmentations on image 1338 under the two color maps]

    Based on the input image, we can see that the colours are correct for the top-left and bottom-right visualisations. The black colour in the top-right image corresponds to the label IGNORE with RGB values (0,0,0), while in the bottom-left the black colour has RGB values (7,7,7). This seems to be because the COLOR_MAP in data.loveda.py only has 7 classes, indexed 0-6, so agriculture, which has label 7 in the mask images, is not colour-mapped.

    This seems to be related to the difference between labels in the current repo:

    Category labels: background – 1, building – 2, road – 3, water – 4, barren – 5, forest – 6, agriculture – 7. No-data regions were assigned the value 0, which should be ignored. The provided data loader will help you construct your pipeline.

    and the ones described on CodaLab:

    Classes indexes: Background - 0, Building - 1, Road - 2, Water - 3, Barren - 4, Forest - 5, Agriculture - 6

    Could this class label offset be the cause, or is there perhaps an alternative explanation that I have not thought of? For reference, see the remapping sketch below.
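
    A minimal sketch of how the two labeling conventions could be reconciled; the file path and the ignore value 255 are illustrative assumptions, not part of the repo:

    import numpy as np
    from PIL import Image

    # Repo masks (per the README quote above): 0 = no-data, 1..7 = classes.
    mask = np.array(Image.open("Train/Urban/masks_png/1338.png"))  # hypothetical path

    # Shift to the CodaLab convention: classes 0..6; map no-data to an
    # explicit ignore index (255 is an arbitrary choice for this sketch).
    shifted = mask.astype(np.int16) - 1
    shifted[mask == 0] = 255
    shifted = shifted.astype(np.uint8)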

    question 
    opened by keliive 3
  • Dataset links for Google drive return a 404 error

    Dataset links for Google drive return a 404 error

    The Google Drive links to the dataset in this repository's README.md, as well as on the competition page, are broken as of 30-01-2022 and return a 404 error. Please update them with working links.

    opened by AnkushMalaker 3
  • The different resolutions in training and testing

    The different resolutions in training and testing

    I found that during training the input resolution is 512x512, while in the test phase the input resolution is 1024x1024. Could you please tell me why?
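
    A common explanation: random 512x512 crops at train time act as augmentation and fit GPU memory, while a fully convolutional model can consume the complete 1024x1024 tile at test time. A minimal sketch of that pattern (an assumption, not necessarily this repo's exact pipeline):

    import torchvision.transforms as T

    # Training: random crops for variety and memory; testing: full tiles,
    # since a fully convolutional network accepts any input size.
    train_transform = T.Compose([
        T.RandomCrop(512),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
    ])
    test_transform = T.ToTensor()  # operates on the full 1024x1024 tile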

    question 
    opened by Luffy03 3
  • Meaning of line 228 in the Unsupervised_Domian_Adaptation/utils/tools.py

    Meaning of line 228 in the Unsupervised_Domian_Adaptation/utils/tools.py

    Hello,

    Thank you very much for making your excellent work open to the public.

    May I ask the meaning of line 228 in tools.py for Unsupervised Domain Adaptation? I found that running bash ./scripts/predict_cbst.sh raises AttributeError: 'NoneType' object has no attribute 'info'. The bug comes from line 228 together with the default setting _default_logger=None, so I wonder what this line is for. I would also like to let you know that after commenting out line 228, the command runs successfully.

    Many thanks for your help.
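
    For context, a hedged sketch of one way such a logging call can be made None-safe; `_default_logger = None` is quoted from the issue, while the fallback itself is an assumption rather than the repo's actual fix:

    import logging

    _default_logger = None  # default reported in the issue

    def log_info(msg):
        # Fall back to a standard logger instead of raising
        # AttributeError: 'NoneType' object has no attribute 'info'.
        (_default_logger or logging.getLogger(__name__)).info(msg)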

    opened by simonep1052 2
  • [Request] Release codalab evaluation script

    [Request] Release codalab evaluation script

    Would it be possible to release the evaluation script from CodaLab? The file format details are a bit confusing. For example, if I set empty regions as transparent or embed a color palette within the image, the evaluation script shows a warning:

    /opt/conda/lib/python2.7/site-packages/PIL/Image.py:870: UserWarning: Palette images with Transparency   expressed in bytes should be converted to RGBA images
      'to RGBA images')
    

    Even if I remove the color palette, I get the following error:

    Traceback (most recent call last):
      File "/tmp/codalab/tmpS_IrwU/run/program/evaluate.py", line 157, in <module>
        metric.forward(gt[valid_inds], mask[valid_inds])
      File "/tmp/codalab/tmpS_IrwU/run/program/evaluate.py", line 22, in forward
        cm = sparse.coo_matrix((v, (y_true, y_pred)), shape=(self.num_classes, self.num_classes), dtype=np.float32)
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 182, in __init__
        self._check()
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 219, in _check
        nnz = self.nnz
      File "/opt/conda/lib/python2.7/site-packages/scipy/sparse/coo.py", line 196, in getnnz
        raise ValueError('row, column, and data array must all be the '
    ValueError: row, column, and data array must all be the same length
    

    I made sure all my images are 1024 × 1024 with a single uint8 channel. The class ids have been assigned as per the specification, with empty regions assigned the value 15.

    Classes indexes

    Background - 0
    Building - 1
    Road - 2
    Water - 3
    Barren - 4
    Forest - 5
    Agriculture - 6
    

    So, it would be helpful to see the evaluation script in order to generate compatible prediction images.
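
    Since the evaluation script is not public, the safest guess based on the errors above is a plain single-channel PNG with no palette and no alpha. A minimal sketch (the file name and zero prediction are placeholders):

    import numpy as np
    from PIL import Image

    # pred: (1024, 1024) class indices in 0..6 from your model.
    pred = np.zeros((1024, 1024), dtype=np.uint8)  # placeholder prediction

    # Mode "L" writes one uint8 value per pixel, with no palette or
    # transparency chunks for the server-side PIL/scipy code to trip on.
    Image.fromarray(pred, mode="L").save("1338.png")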

    opened by digital-idiot 2
  • Can you provide the pre-trained weights for the adversarial learning methods?

    Can you provide the pre-trained weights for the adversarial learning methods?

    Hi, I would like to use the visualised results of AdaptSeg and CLAN for comparison. Could you provide the pre-trained weights (Rural to Urban) of these two networks?

    opened by csliujw 2
  • Running pretrained model without CUDA

    Running pretrained model without CUDA

    Hi,

    Is there a way to run ./scripts/predict_test.sh without CUDA?

    I am using the LoveDA dataset and the pretrained model weights hrnetw32.pth as described in the README.

    Initially I got the error urllib.error.HTTPError: HTTP Error 403: Forbidden, which I fixed by setting pretrained=False as recommended here: https://github.com/Junjue-Wang/LoveDA/issues/9.

    Then when rerunning the predict_test.sh, I got the error:

    Traceback (most recent call last):
      File "predict.py", line 52, in <module>
        predict_test(args.ckpt_path, args.config_path, args.out_dir)
      File "predict.py", line 38, in predict_test
        model.cuda()
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 680, in cuda
        return self._apply(lambda t: t.cuda(device))
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 570, in _apply
        module._apply(fn)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 593, in _apply
        param_applied = fn(param)
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/nn/modules/module.py", line 680, in <lambda>
        return self._apply(lambda t: t.cuda(device))
      File "/Users/kristjan/miniconda3/envs/mip/lib/python3.7/site-packages/torch/cuda/__init__.py", line 208, in _lazy_init
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    

    I then commented out line 38: https://github.com/Junjue-Wang/LoveDA/blob/4d574ce08f84cbc8d27becf2bd9dce8fbb7f50f8/Semantic_Segmentation/predict.py#L38 and, after rerunning predict_test.sh, got the output:

    Load model!
    INFO:data.loveda:./LoveDA/Val/Urban/images_png -- Dataset images: 0
    INFO:data.loveda:./LoveDA/Val/Rural/images_png -- Dataset images: 0
    INFO:ever.core.logger:HRNetEncoder: pretrained = False
    0it [00:00, ?it/s]
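
    A minimal device-agnostic replacement for the unconditional model.cuda() call (a sketch, assuming the rest of predict.py moves input batches to the same device):

    import torch
    import torch.nn as nn

    def to_best_device(model: nn.Module) -> nn.Module:
        # Use the GPU when available, otherwise fall back to CPU instead
        # of raising "Torch not compiled with CUDA enabled".
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        return model.to(device)

    Note that the log above also reports "Dataset images: 0", so the Val image paths would still need to point at the extracted dataset.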
    
    question 
    opened by keliive 2
  • bash eval_hrnetw32.sh Error!

    bash eval_hrnetw32.sh Error!

    Traceback (most recent call last):
      File "/home/libowen/LoveDA-master/Semantic_Segmentation/predict.py", line 52, in <module>
        predict_test(args.ckpt_path, args.config_path, args.out_dir)
      File "/home/libowen/LoveDA-master/Semantic_Segmentation/predict.py", line 37, in predict_test
        model.load_state_dict(model_state_dict)
      File "/home/libowen/.conda/envs/bw/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
        raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    RuntimeError: Error(s) in loading state_dict for HRNetFusion:
      Missing key(s) in state_dict: "backbone.hrnet.conv1.weight", "backbone.hrnet.bn1.weight", "backbone.hrnet.bn1.bias", "backbone.hrnet.bn1.running_mean"
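
    The missing keys all sit under backbone.hrnet.*, which often indicates a key-name mismatch between the checkpoint and the model, for example a "module." prefix left over from DistributedDataParallel. A hedged sketch of that common remedy (an assumption, not a confirmed diagnosis here):

    import torch

    # Strip a possible DDP "module." prefix from every checkpoint key.
    state = torch.load("hrnetw32.pth", map_location="cpu")
    state = {k[len("module."):] if k.startswith("module.") else k: v
             for k, v in state.items()}
    # model.load_state_dict(state)  # model construction omitted here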

    question 
    opened by kukujoyyo 1
  • Predict.py Problem

    Predict.py Problem

    I downloaded the pretrained weights and used predict.py to test some images, but met this bug. What is the problem with the fuse_layers?

    File "test4/Road/LoveDA-master/Semantic_Segmentation/module/baseline/base_hrnet/_hrnet.py", line 394, in forward y = y + self.fuse_layers[i][j](x[j]) RuntimeError: The size of tensor a (500) must match the size of tensor b (504) at non-singleton dimension 3

    question 
    opened by Acid-knight 3
  • Can this work be run with one GPU?

    Can this work be run with one GPU?

    **Can we run this work with one GPU? If so, how should the parameters be set?**

    I've got the issue below:

    PS F:\Models\LoveDA-master\Semantic_Segmentation> bash ./scripts/train_hrnetw32.sh
    NOTE: Redirects are currently not supported in Windows or MacOs.
    Init Trainer
    Set Seed Torch
    Traceback (most recent call last):
      File "train.py", line 79, in <module>
        trainer = er.trainer.get_trainer('th_amp_ddp')()
      File "D:\ProgramData\Anaconda3\lib\site-packages\ever\api\trainer\th_amp_ddp_trainer.py", line 77, in __init__
        torch.cuda.set_device(self.args.local_rank)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\cuda\__init__.py", line 311, in set_device
        device = _get_device_index(device)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\cuda\_utils.py", line 34, in _get_device_index
        return _torch_get_device_index(device, optional, allow_cpu)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\_utils.py", line 537, in _get_device_index
        'or an integer, but got:{}'.format(device))
    ValueError: Expected a torch.device with a specified index or an integer, but got:None
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 108252) of binary: D:\ProgramData\Anaconda3\python.exe
    Traceback (most recent call last):
      File "D:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "D:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "D:\ProgramData\Anaconda3\Scripts\torchrun.exe\__main__.py", line 7, in <module>
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py", line 345, in wrapper
        return f(*args, **kwargs)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\run.py", line 724, in main
        run(args)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\run.py", line 718, in run
        )(*cmd_args)
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\launcher\api.py", line 131, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "D:\ProgramData\Anaconda3\lib\site-packages\torch\distributed\launcher\api.py", line 247, in launch_agent
        failures=result.failures,
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    train.py FAILED

    Failures: <NO_OTHER_FAILURES>

    Root Cause (first observed failure):
    [0]:
      time       : 2022-11-13_13:10:33
      host       : KWPAACQRFTY8V05
      rank       : 0 (local_rank: 0)
      exitcode   : 1 (pid: 108252)
      error_file : <N/A>
      traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
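
    The root error is torch.cuda.set_device(self.args.local_rank) receiving None: newer torchrun versions pass the rank through the LOCAL_RANK environment variable rather than a --local_rank argument. A hedged sketch of a single-GPU-friendly default (an assumption about the trainer's argument parsing, not a confirmed fix):

    import argparse
    import os

    parser = argparse.ArgumentParser()
    # Read LOCAL_RANK from the environment, defaulting to GPU 0, so
    # torch.cuda.set_device() never sees None on a single-GPU launch.
    parser.add_argument("--local_rank", type=int,
                        default=int(os.environ.get("LOCAL_RANK", 0)))
    args = parser.parse_args()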

    question 
    opened by kukujoyyo 1
  • no such file problem when training ST 2urban scripts

    no such file problem when training ST 2urban scripts

    When training the self-training 2urban scripts, such as CBST_train.py and IAST_train.py, there is a problem: 'FileNotFoundError: No such file: /home/xxx/ssuda/UDA/log/cbst/2urban/pseudo_label/3814.png'. I guess this is because the batch size is set to 2, and as expected the problem is solved when the batch size is changed to 1.

    So, I wonder: is this a bug, or expected behaviour?

    Thanks for your excellent work!

    question 
    opened by lyhnsn 2
Releases (v0.2.0-alpha)
Owner: Kingdrone (Deep learning in RS)