Multiple Object Extraction from Aerial Imagery with Convolutional Neural Networks

Related tags

Deep Learning, ssai-cnn
Overview

This is an implementation of the methods from Volodymyr Mnih's dissertation, applied to his Massachusetts roads & buildings dataset, together with my original methods published in this paper.

Requirements

  • Python 3.5 (anaconda with python 3.5.1 is recommended)
    • Chainer 1.5.0.2
    • Cython 0.23.4
    • NumPy 1.10.1
    • tqdm
  • OpenCV 3.0.0
  • lmdb 0.87
  • Boost 1.59.0
  • Boost.NumPy (26aaa5b)

Build Libraries

OpenCV 3.0.0

$ wget https://github.com/Itseez/opencv/archive/3.0.0.zip
$ unzip 3.0.0.zip && rm -rf 3.0.0.zip
$ cd opencv-3.0.0 && mkdir build && cd build
$ bash $SSAI_HOME/shells/build_opencv.sh
$ make -j32 install

If some libraries are missing, run the following before compiling OpenCV 3.0.0:

$ sudo apt-get install -y libopencv-dev libtbb-dev

Boost 1.59.0

$ wget http://downloads.sourceforge.net/project/boost/boost/1.59.0/boost_1_59_0.tar.bz2
$ tar xvf boost_1_59_0.tar.bz2 && rm -rf boost_1_59_0.tar.bz2
$ cd boost_1_59_0
$ ./bootstrap.sh
$ ./b2 -j32 install cxxflags="-I/home/ubuntu/anaconda3/include/python3.5m"

Boost.NumPy

$ git clone https://github.com/ndarray/Boost.NumPy.git
$ cd Boost.NumPy && mkdir build && cd build
$ cmake -DPYTHON_LIBRARY=$HOME/anaconda3/lib/libpython3.5m.so ../
$ make install

Build utils

$ cd $SSAI_HOME/scripts/utils
$ bash build.sh
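
As a quick sanity check, the built Boost.Python extension can be imported directly. This assumes the built utils module is importable from the directory the dataset scripts are run from; the import path and function name are the ones scripts/create_dataset.py uses (see the "import Error" comment below):

# Sanity check: if the extension built correctly, this import succeeds;
# otherwise you get the ModuleNotFoundError reported in the comments below.
from utils.patches import divide_to_patches

print(divide_to_patches)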

Create Dataset

$ bash shells/download.sh
$ bash shells/create_dataset.sh

Dataset | Training | Validation | Test
:-------------: | :------: | :--------: | :---:
mass_roads | 8580352 | 108416 | 379456
mass_roads_mini | 1060928 | 30976 | 77440
mass_buildings | 1060928 | 30976 | 77440
mass_merged | 1060928 | 30976 | 77440
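
create_dataset.sh writes the ortho/label patch pairs into LMDB databases under data/<dataset>/lmdb/. Below is a minimal sketch for inspecting one of them with the lmdb Python package; the assumption that values are raw uint8 buffers (92x92x3 for the stored ortho patches) comes from tests/test_dataset.py referenced in the comments below, so adjust if the actual key/value layout differs:

import lmdb
import numpy as np

# Open the training ortho (aerial image) database read-only.
env = lmdb.open('data/mass_merged/lmdb/train_sat', readonly=True, lock=False)

with env.begin() as txn:
    print('stored patches:', txn.stat()['entries'])

    # Peek at the first record; the value is assumed to be a raw uint8 buffer.
    key, value = next(iter(txn.cursor()))
    patch = np.frombuffer(value, dtype=np.uint8)
    print(key, patch.size)  # 92 * 92 * 3 = 25392 bytes expected for an ortho patch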

Start Training

$ CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 \
nohup python scripts/train.py \
--seed 0 \
--gpu 0 \
--model models/MnihCNN_multi.py \
--train_ortho_db data/mass_merged/lmdb/train_sat \
--train_label_db data/mass_merged/lmdb/train_map \
--valid_ortho_db data/mass_merged/lmdb/valid_sat \
--valid_label_db data/mass_merged/lmdb/valid_map \
--dataset_size 1.0 \
> mnih_multi.log 2>&1 < /dev/null &
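
train.py loads the model class from the file passed via --model and calls optimizer.update(model, x, t) on mini-batches of (3, 64, 64) ortho patches and (16, 16) label patches (the shapes printed by create_dataset.sh). For illustration, a rough Chainer 1.x sketch of such a patch classifier follows; the layer sizes are assumptions, not the exact contents of models/MnihCNN_multi.py, but the call signature matches the optimizer.update(model, x, t) pattern used by train.py:

import chainer
import chainer.functions as F
import chainer.links as L


class MnihStyleCNNSketch(chainer.Chain):
    """Illustrative Mnih-style patch CNN: 64x64 RGB in, 16x16 three-class map out."""

    def __init__(self):
        super(MnihStyleCNNSketch, self).__init__(
            conv1=L.Convolution2D(3, 64, 16, stride=4),   # 64x64 -> 13x13
            conv2=L.Convolution2D(64, 112, 4),            # 13x13 -> 10x10
            conv3=L.Convolution2D(112, 80, 3),            # 10x10 -> 8x8
            fc4=L.Linear(80 * 8 * 8, 4096),
            fc5=L.Linear(4096, 3 * 16 * 16),              # 3 classes x 16x16 label patch
        )

    def __call__(self, x, t):
        h = F.relu(self.conv1(x))
        h = F.relu(self.conv2(h))
        h = F.relu(self.conv3(h))
        h = F.relu(self.fc4(h))
        h = self.fc5(h)
        h = F.reshape(h, (h.data.shape[0], 3, 16, 16))
        self.loss = F.softmax_cross_entropy(h, t, normalize=False)
        return self.loss

The returned loss is what optimizer.update backpropagates; the per-pixel softmax over three channels (background / building / road) is what makes this the "multi-channel" variant.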

Prediction

python scripts/predict.py \
--model results/MnihCNN_multi_2016-02-03_03-34-58/MnihCNN_multi.py \
--param results/MnihCNN_multi_2016-02-03_03-34-58/epoch-400.model \
--test_sat_dir data/mass_merged/test/sat \
--channels 3 \
--offset 8 \
--gpu 0 &
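
predict.py applies the trained patch classifier over every tile in --test_sat_dir. The sliding-window idea behind this step is sketched below for illustration only; it assumes a forward function mapping a 64x64 RGB window to class scores for a central 16x16 block, and it does not reproduce the repository's exact handling of --offset and --channels:

import numpy as np

def predict_tile(forward, ortho, in_size=64, out_size=16, n_classes=3):
    """Hedged sketch of sliding-window prediction over a whole tile.

    forward: callable mapping a (1, 3, in_size, in_size) float32 array to
             (1, n_classes, out_size, out_size) class scores.
    ortho:   (H, W, 3) uint8 aerial image.
    Returns an (H, W, n_classes) score map; the outer margin where no central
    block falls stays zero.
    """
    h, w, _ = ortho.shape
    margin = (in_size - out_size) // 2               # 24 for 64 -> 16
    scores = np.zeros((h, w, n_classes), dtype=np.float32)
    for y in range(0, h - in_size + 1, out_size):
        for x in range(0, w - in_size + 1, out_size):
            patch = ortho[y:y + in_size, x:x + in_size].astype(np.float32) / 255.0
            patch = patch.transpose(2, 0, 1)[np.newaxis]          # NCHW layout
            out = forward(patch)[0].transpose(1, 2, 0)            # (16, 16, n_classes)
            scores[y + margin:y + margin + out_size,
                   x + margin:x + margin + out_size] = out
    return scores

A dummy forward such as lambda p: np.zeros((1, 3, 16, 16), np.float32) is enough to exercise the loop shape-wise.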

Evaluation

$ PYTHONPATH=".":$PYTHONPATH python scripts/evaluate.py \
--map_dir data/mass_merged/test/map \
--result_dir results/MnihCNN_multi_2016-02-03_03-34-58/ma_prediction_400 \
--channel 3 \
--offset 8 \
--relax 3 \
--steps 1024
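
The --relax 3 option matches the relaxed precision/recall protocol used in Mnih's thesis: a predicted foreground pixel counts as correct if any ground-truth foreground pixel lies within a slack of ρ pixels, and symmetrically for recall; --steps presumably sweeps score thresholds to trace the precision/recall curve behind the breakeven numbers reported below. A minimal sketch of the relaxed metric itself (not the repository's exact implementation), using OpenCV dilation:

import cv2
import numpy as np

def relaxed_precision_recall(pred, gt, relax=3):
    """Relaxed precision/recall for binary maps of the same shape.

    A predicted pixel is a true positive for precision if ground truth is
    present within `relax` pixels; recall is symmetric with the roles swapped.
    """
    pred = (np.asarray(pred) > 0).astype(np.uint8)
    gt = (np.asarray(gt) > 0).astype(np.uint8)
    kernel = np.ones((2 * relax + 1, 2 * relax + 1), np.uint8)

    gt_dil = cv2.dilate(gt, kernel)      # tolerate small localization errors
    pred_dil = cv2.dilate(pred, kernel)

    precision = (pred & gt_dil).sum() / max(pred.sum(), 1)
    recall = (gt & pred_dil).sum() / max(gt.sum(), 1)
    return precision, recall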

Results

Conventional methods

Model | Mass. Buildings | Mass. Roads | Mass. Roads-Mini
:---------------------------- | :-------------- | :--------------------- | :--------------
MnihCNN | 0.9150 | 0.8873 | N/A
MnihCNN + CRF | 0.9211 | 0.8904 | N/A
MnihCNN + Post-processing net | 0.9203 | 0.9006 | N/A
Single-channel | 0.9503062 | 0.91730195 (epoch 120) | 0.89989258
Single-channel with MA | 0.953766 | 0.91903522 (epoch 120) | 0.902895

Multi-channel models (epoch = 400, step = 1024)

Model | Building-channel | Road-channel | Road-channel (fixed)
:-------------------------- | :--------------- | :----------- | :-------------------
Multi-channel | 0.94346856 | 0.89379946 | 0.9033020025
Multi-channel with MA | 0.95231262 | 0.89971473 | 0.90982972
Multi-channel with CIS | 0.94417078 | 0.89415726 | 0.9039476538
Multi-channel with CIS + MA | 0.95280431 | 0.90071099 | 0.91108087

Test on urban areas (epoch = 400, step = 1024)

Model | Building-channel | Road-channel
:-------------------------- | :--------------- | :-----------
Single-channel with MA | 0.962133 | 0.944748
Multi-channel with MA | 0.962797 | 0.947224
Multi-channel with CIS + MA | 0.964499 | 0.950465

x0_sigma for inverting feature maps

159.348674296

After prediction for single MA

$ bash shells/predict.sh
$ python scripts/integrate.py --result_dir results --epoch 200 --size 7,60
$ PYTHONPATH=".":$PYTHONPATH python scripts/evaluate.py --map_dir data/mass_merged/test/map --result_dir results/integrated_200 --channel 3 --offset 8 --relax 3 --steps 256
$ PYTHONPATH="." python scripts/eval_urban.py --result_dir results/integrated_200 --test_map_dir data/mass_merged/test/map --steps 256

Pre-trained models and Predicted results

Reference

If you use this code for your project, please cite this journal paper:

Shunta Saito, Takayoshi Yamashita, Yoshimitsu Aoki, "Multiple Object Extraction from Aerial Imagery with Convolutional Neural Networks", Journal of Imaging Science and Technology, Vol. 60, No. 1, pp. 10402-1-10402-9, 2016

Comments
  • cannot reshape array of size 3 into shape (92,92,3)

    Hi there! I'm having trouble running bash shells/create_dataset.sh

    The error is this (the same block repeats for each dataset):

    patch size: 92 24 16
    n_all_files: 0
    patches: 0

    and then, repeated several times:

    Traceback (most recent call last):
      File "tests/test_dataset.py", line 36, in <module>
        o_val, dtype=np.uint8).reshape((92, 92, 3))
    ValueError: cannot reshape array of size 0 into shape (92,92,3)
    Traceback (most recent call last):
      File "tests/test_transform.py", line 55, in <module>
        (args.ortho_original_side, args.ortho_original_side, 3))
    ValueError: cannot reshape array of size 0 into shape (92,92,3)

    Any ideas? Thank you very much for your time!

    opened by flor88 8
  • Adam optimisation error

    Hi,

    I am trying to set the Adam optimization algorithm but am receiving the following error. Can you please help me with how to set the parameters?

    python3.5/site-packages/chainer-1.22.0-py3.5-linux-x86_64.egg/chainer/optimizers/adam.py", line 51, in lr
        return self.alpha * math.sqrt(fix2) / fix1
    ZeroDivisionError: float division by zero

    Please see the parameters I have set below.

    parser.add_argument('--opt', type=str, default='Adam', choices=['MomentumSGD', 'Adam', 'AdaGrad'])
    parser.add_argument('--weight_decay', type=float, default=0.0001)
    parser.add_argument('--alpha', type=float, default=0.001)
    parser.add_argument('--lr', type=float, default=0.0001)
    parser.add_argument('--lr_decay_freq', type=int, default=100)
    parser.add_argument('--lr_decay_ratio', type=float, default=0.1)
    parser.add_argument('--seed', type=int, default=1701)
    args = parser.parse_args()
    

    Thank you

    opened by Tejuwi 3
  • Warning: don't setup your environment using anaconda.

    This project uses Chainer as its deep learning framework. However, Chainer depends on CuPy, a CUDA array library that is (seemingly) not available as an Anaconda package. Therefore, setting up the environment with Anaconda will always fail to find the cupy module.

    I tried using the system Python cupy library together with the conda chainer library, with no luck. If anyone has a solution, please follow this issue.

    opened by dragon9001 3
  • import Error

    I am using Python 3.6 on Windows 10. When I run the create_dataset script, I get this error: ModuleNotFoundError: No module named 'utils.patches'. The error relates to this line: from utils.patches import divide_to_patches

    Kindly help..

    opened by mshakaib 2
  • Tesla P100 is slower

    Hi,

    I have tested the algorithm on a Tesla P100 (Ubuntu Server 16.04 LTS x86_64), and one epoch takes 2 hours. The same algorithm on a Quadro M4000 (Ubuntu Desktop 16.04 LTS x86_64) takes 2 hours 40 minutes.

    We expected the training time to come down to about half of the Quadro M4000's, but there is not much difference between the Tesla P100 and the Quadro M4000.

    Please give me guidance on how the training time can be reduced. I appreciate your kind help.

    opened by Tejuwi 1
  • cupy.cuda.curand.CURANDError.__init__ (cupy/cuda/curand.cpp:1108)

    I just enter:

    CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 nohup python scripts/train.py --seed 0 --gpu 0 --model models/MnihCNN_multi.py --train_ortho_db data/mass_merged/lmdb/train_sat --train_label_db data/mass_merged/lmdb/train_map --valid_ortho_db data/mass_merged/lmdb/valid_sat --valid_label_db data/mass_merged/lmdb/valid_map --dataset_size 1.0 > mnih_multi.log 2>&1 < /dev/null &

    and this is the log:

    You are running using the stub version of curand.
    Traceback (most recent call last):
      File "scripts/train.py", line 300, in <module>
        xp.random.seed(args.seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 318, in seed
        get_random_state().seed(seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 350, in get_random_state
        rs = RandomState(seed)
      File "/home/qs/anaconda3/lib/python3.5/site-packages/cupy/random/generator.py", line 45, in __init__
        self._generator = curand.createGenerator(method)
      File "cupy/cuda/curand.pyx", line 92, in cupy.cuda.curand.createGenerator (cupy/cuda/curand.cpp:1443)
      File "cupy/cuda/curand.pyx", line 96, in cupy.cuda.curand.createGenerator (cupy/cuda/curand.cpp:1381)
      File "cupy/cuda/curand.pyx", line 85, in cupy.cuda.curand.check_status (cupy/cuda/curand.cpp:1216)
      File "cupy/cuda/curand.pyx", line 79, in cupy.cuda.curand.CURANDError.__init__ (cupy/cuda/curand.cpp:1108)
    KeyError: 51

    I have no idea; any help would be appreciated.

    opened by FogXcG 0
  • Updated environment versions and instructions

    Hi, I'm trying to train on my own dataset but have failed to get the environment working after trying many different CUDA, cuDNN, CuPy, and Python versions on Ubuntu 16.04 with an NVIDIA GTX-950M.

    Are there any updated environment versions for CUDA, cuDNN, CuPy, Python, Chainer, and NumPy?

    Cheers,

    opened by Vol-i 3
  • IndexError: index 76 is out of bounds for axis 1 with size 3

    Hello,

    I am currently trying to automate parts of this project and I am running into difficulties during the training phase using CPU mode, which throws an IndexError and appears to hang the entire training. I am using a very small dataset from the mass_buildings set, i.e. I am using 8 training images and 2 validation images. The purpose is only to test and not to have accurate results at the moment. Below is the state of the installation and steps I am using:

    System:

    uname -a
    Linux user-VirtualBox 4.10.0-28-generic #32~16.04.2-Ubuntu SMP Thu Jul 20 10:19:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
    

    Python (w/o Anaconda):

    $ python -V
    Python 3.5.2
    

    Python modules:

    [email protected]:~/Source/ssai-cnn$ pip3 freeze
    ...
    chainer==1.5.0.2
    ...
    Cython==0.23.4
    ...
    h5py==2.7.1
    ...
    lmdb==0.87
    ...
    matplotlib==2.1.1
    ...
    numpy==1.10.1
    onboard==1.2.0
    opencv-python==3.1.0.3
    ...
    six==1.10.0
    tqdm==4.19.5
    ...
    

    Additionally, Boost 1.59.0 and OpenCV 3.0.0 have been built and installed from source, and both installs appear successful. The utils module is also built successfully.

    I have downloaded only a small subset of the mass_buildings dataset:

    # ls -R ./data/mass_buildings/train/
    ./data/mass_buildings/train/:
    map  sat
    
    ./data/mass_buildings/train/map:
    22678915_15.tif  22678930_15.tif  22678945_15.tif  22678960_15.tif
    
    ./data/mass_buildings/train/sat:
    22678915_15.tiff  22678930_15.tiff  22678945_15.tiff  22678960_15.tiff
    

    Below is the output obtained by running the shells/create_dataset.sh script, modified only to build the mass_buildings data:

    patch size: 92 24 16
    n_all_files: 1
    divide:0.6727173328399658
    0 / 1 n_patches: 7744
    patches:	 7744
    patch size: 92 24 16
    n_all_files: 1
    divide:0.6314394474029541
    0 / 1 n_patches: 7744
    patches:	 7744
    patch size: 92 24 16
    n_all_files: 4
    divide:0.6260504722595215
    0 / 4 n_patches: 7744
    divide:0.667414665222168
    1 / 4 n_patches: 15488
    divide:0.628319263458252
    2 / 4 n_patches: 23232
    divide:0.6634025573730469
    3 / 4 n_patches: 30976
    patches:	 30976
    0.03437542915344238 sec (128, 3, 64, 64) (128, 16, 16)
    

    Then the training script is initiated using the following command:

    [email protected]:~/Source/ssai-cnn$ CHAINER_TYPE_CHECK=0 CHAINER_SEED=$1 \
    > nohup python ./scripts/train.py \
    > --seed 0 \
    > --gpu -1 \
    > --model ./models/MnihCNN_multi.py \
    > --train_ortho_db data/mass_buildings/lmdb/train_sat \
    > --train_label_db data/mass_buildings/lmdb/train_map \
    > --valid_ortho_db data/mass_buildings/lmdb/valid_sat \
    > --valid_label_db data/mass_buildings/lmdb/valid_map \
    > --dataset_size 1.0 \
    > --epoch 1
    

    As you can see above, I've been using only 8 images and a single epoch. I left the process running an entire night and it never completed, hence my belief that it simply hangs. Using nohup also does not complete. When the process is forcefully stopped with Ctrl-C, I obtain the following message:

    # cat nohup.out 
    Traceback (most recent call last):
      File "./scripts/train.py", line 313, in <module>
        model, optimizer = one_epoch(args, model, optimizer, epoch, True)
      File "./scripts/train.py", line 265, in one_epoch
        optimizer.update(model, x, t)
      File "/usr/local/lib/python3.5/dist-packages/chainer/optimizer.py", line 377, in update
        loss = lossfun(*args, **kwds)
      File "./models/MnihCNN_multi.py", line 31, in __call__
        self.loss = F.softmax_cross_entropy(h, t, normalize=False)
      File "/usr/local/lib/python3.5/dist-packages/chainer/functions/loss/softmax_cross_entropy.py", line 152, in softmax_cross_entropy
        return SoftmaxCrossEntropy(use_cudnn, normalize)(x, t)
      File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 105, in __call__
        outputs = self.forward(in_data)
      File "/usr/local/lib/python3.5/dist-packages/chainer/function.py", line 183, in forward
        return self.forward_cpu(inputs)
      File "/usr/local/lib/python3.5/dist-packages/chainer/functions/loss/softmax_cross_entropy.py", line 39, in forward_cpu
        p = yd[six.moves.range(t.size), numpy.maximum(t.flat, 0)]
    IndexError: index 76 is out of bounds for axis 1 with size 3
    

    This is the only component that fails at the moment. I've tested the prediction and evaluation phases using the pre-trained data, and both seem to complete successfully. Any assistance on how to use the training script with custom datasets would be appreciated.

    Thank you

    opened by InfectedPacket 1
  • Unable to use cudnn (GPU)

    I am using the following code for training: https://github.com/mitmul/ssai-cnn. My environment specifications are given below: Xeon CPU, 128 GB RAM, NVIDIA TITAN Xp graphics card, driver version 384.90, CUDA 9.0, cuDNN 7.0.

    Ubuntu 16.04, Anaconda3 environment on Python 3.5.1

    chainer 1.5.0.2
    cupy 2.0.0
    curl 7.55.1
    boost 1.65.1
    Cython 0.27.3
    h5py 2.7.1
    hdf5 1.10.1
    lmdb 0.87
    matplotlib 2.1.0
    numpy 1.13.3
    opencv 3.3.0
    pillow 4.3.0
    pycuda 2017.1.1
    python 3.5.1
    six 1.11.0
    tqdm 4.19.4

    I am running the training process on a dataset of 430 images (resolution 1500x1500 pixels). cuDNN is configured, and cuda.cudnn_enabled gives True. Training does not run well in the CPU environment. When using the GPU environment with the flag CHAINER_CUDNN=0 (i.e. cuDNN disabled), it gives a warning message about cuDNN, and GPU usage shows only 25% at most. I feel it is not utilizing the GPU fully during training, as one epoch takes 2-3 hours on average; at that rate, 400 epochs would take 800-1200 hours (33-50 days) to complete.

    Warning message showing cuDNN not enabled for training:

    /home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/cuda.py:85: UserWarning: cuDNN is not enabled.
    Please reinstall chainer after you install cudnn (see https://github.com/pfnet/chainer#installation).
      'cuDNN is not enabled.\n'

    When I try to run the training process with cuDNN enabled (i.e. CHAINER_CUDNN=1), it gives the following error:

    2017-12-08 11:36:34 INFO start training...
    2017-12-08 11:36:34 INFO learning rate:0.0005
    2017-12-08 11:36:34 INFO random skip:2
    Traceback (most recent call last):
      File "/common/workspace/ssai-cnn/src/scripts/train.py", line 373, in <module>
        model, optimizer = one_epoch(args, model, optimizer, epoch, True)
      File "/common/workspace/ssai-cnn/src/scripts/train.py", line 285, in one_epoch
        optimizer.update(model, x, t)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/optimizer.py", line 377, in update
        loss = lossfun(*args, **kwds)
      File "/common/workspace/ssai-cnn/src/models/MnihCNN_multi.py", line 22, in __call__
        h = F.relu(self.conv1(x))
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/links/connection/convolution_2d.py", line 74, in __call__
        return convolution_2d.convolution_2d(x, self.W, self.b, self.stride, self.pad, self.use_cudnn)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/functions/connection/convolution_2d.py", line 267, in convolution_2d
        return func(x, W, b)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/function.py", line 105, in __call__
        outputs = self.forward(in_data)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/function.py", line 181, in forward
        return self.forward_gpu(inputs)
      File "/home/user/anaconda3/envs/ssai-cnn/lib/python3.5/site-packages/chainer/functions/connection/convolution_2d.py", line 80, in forward_gpu
        (self.ph, self.pw), (self.sy, self.sx))
    TypeError: create_convolution_descriptor() missing 1 required positional argument: 'dtype'

    I just want to confirm whether your process ran with cuDNN enabled; if yes, which Chainer, CuPy, and other supporting package versions were used? What challenges would the existing code face if I upgraded to Chainer 3.0.0 or 3.1.0, CuPy 2.1.0, and other packages? Which Chainer/CuPy version combination would be preferable?

    Thanks

    opened by sundeeptewari 4
  • Is it possible to build this for windows?

    Hi, I use Windows 10, Anaconda3, and Python 3.6. I was wondering if I would be able to build the utils if I changed the paths in build.sh, e.g. $PYTHON_DIR/lib/libpython3.5m.so, to the Windows Anaconda equivalent (if there is one), and then ran bash using Cygwin.

    Or should I give up and use a Linux machine? Thanks, Véro

    opened by VeroL 2
  • fatal error: pyconfig.h: No such file or directory  # include <pyconfig.h>

    -- Found PythonLibs: /home/yanghuan/.pyenv/versions/anaconda3-2.4.0/lib/libpython3.5m.so
    -- Configuring done
    -- Generating done
    CMake Warning:
      Manually-specified variables were not used by the project:

        PYTHON_INCLUDE_DIR2

    -- Build files have been written to: /home/yanghuan/下载/ssai-cnn-master/utils
    Scanning dependencies of target patches
    [ 16%] Building CXX object CMakeFiles/patches.dir/src/devide_to_patches.cpp.o
    In file included from /usr/local/include/boost/python/detail/prefix.hpp:13:0,
                     from /usr/local/include/boost/python/args.hpp:8,
                     from /usr/local/include/boost/python.hpp:11,
                     from /home/yanghuan/下载/ssai-cnn-master/utils/src/devide_to_patches.cpp:4:
    /usr/local/include/boost/python/detail/wrap_python.hpp:50:23: fatal error: pyconfig.h: No such file or directory
     # include <pyconfig.h>
                           ^
    compilation terminated.
    CMakeFiles/patches.dir/build.make:62: recipe for target 'CMakeFiles/patches.dir/src/devide_to_patches.cpp.o' failed
    make[2]: *** [CMakeFiles/patches.dir/src/devide_to_patches.cpp.o] Error 1
    CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/patches.dir/all' failed
    make[1]: *** [CMakeFiles/patches.dir/all] Error 2
    Makefile:83: recipe for target 'all' failed
    make: *** [all] Error 2

    opened by yizuifangxiuyh 3
Releases (v1.0.0)
  • v1.0.0 (Oct 22, 2018)

Owner
Shunta Saito
Ph.D. in Engineering, Researcher at Preferred Networks, Inc.