PSANet: Point-wise Spatial Attention Network for Scene Parsing (under construction)

by Hengshuang Zhao*, Yi Zhang*, Shu Liu, Jianping Shi, Chen Change Loy, Dahua Lin, Jiaya Jia; details are in the project page.

Introduction

This repository is built for PSANet and contains the source code of the PSA module together with the related evaluation code. For installation, please merge the related layers into the PSPNet repository and follow the description there (tested with CUDA 7.0/7.5 + cuDNN v4).

PyTorch Version

A highly optimized PyTorch codebase for semantic segmentation is available in the semseg repository, including full training and testing code for PSPNet and PSANet.
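
To make the module's idea concrete, below is a minimal PyTorch sketch of point-wise spatial attention in its COLLECT form: every target position predicts a dense attention map over all source positions and aggregates features accordingly. This is an illustration only, not the released implementation (the released PSA module predicts over-complete (2H-1)x(2W-1) maps and adds a DISTRIBUTE branch); the class name, channel widths, and feature sizes are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PSACollect(nn.Module):
        """Simplified COLLECT branch: dense per-position attention (sketch)."""
        def __init__(self, in_channels, mid_channels, feat_h, feat_w):
            super().__init__()
            self.feat_h, self.feat_w = feat_h, feat_w
            # 1x1 conv to reduce channels before predicting attention
            self.reduce = nn.Conv2d(in_channels, mid_channels, 1)
            # one attention logit per source position, for every target position
            self.attention = nn.Conv2d(mid_channels, feat_h * feat_w, 1)

        def forward(self, x):
            n, c, h, w = x.shape
            assert (h, w) == (self.feat_h, self.feat_w)
            # (n, h*w, h, w): channel = source index, spatial = target position
            logits = self.attention(self.reduce(x))
            # softmax over source positions for each target position
            attn = F.softmax(logits.view(n, h * w, h * w), dim=1)
            feats = x.view(n, c, h * w)
            # collect: every target position aggregates features from all sources
            return torch.bmm(feats, attn).view(n, c, h, w)

    # e.g. out = PSACollect(512, 64, 60, 60)(torch.randn(1, 512, 60, 60))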

Usage

  1. Clone the repository recursively:

    git clone --recursive https://github.com/hszhao/PSANet.git
  2. Merge the Caffe layers into the PSPNet repository:

    Point-wise spatial attention: pointwise_spatial_attention_layer.hpp/cpp/cu and caffe.proto (a hedged copy sketch of this step follows below).
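
    A sketch of this copy step in Python, under stated assumptions: all paths here are placeholders to be adjusted against your checkout, and caffe.proto is deliberately left out because both trees define it, so its PSA entries must be merged by hand.

        import shutil
        from pathlib import Path

        psanet = Path('.')             # $PSANET_ROOT
        pspnet = psanet / 'PSPNet'     # submodule from the recursive clone
        layer_dir = psanet / 'src'     # placeholder: wherever the PSA files live

        # .cpp/.cu go to the Caffe source tree, the .hpp to the headers
        shutil.copy(layer_dir / 'pointwise_spatial_attention_layer.cpp',
                    pspnet / 'src/caffe/layers/')
        shutil.copy(layer_dir / 'pointwise_spatial_attention_layer.cu',
                    pspnet / 'src/caffe/layers/')
        shutil.copy(layer_dir / 'pointwise_spatial_attention_layer.hpp',
                    pspnet / 'include/caffe/layers/')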

  3. Build Caffe and matcaffe:

    cd $PSANET_ROOT/PSPNet
    cp Makefile.config.example Makefile.config
    vim Makefile.config
    make -j8 && make matcaffe
    cd ..
  4. Evaluation:

    • Evaluation code is in folder 'evaluation'.

    • Download the trained models and put them in the related dataset folder under 'evaluation/model'; refer to 'README.md'.

    • Modify the related paths in 'eval_all.m':

      Mainly the variables 'data_root' and 'eval_list'; your image list for evaluation should be similar in format to those in folder 'evaluation/samplelist' if you use this evaluation code structure (a list-generation sketch follows this step).

    cd evaluation
    vim eval_all.m
    • Run the evaluation script:
    ./run.sh
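
    A hedged Python sketch for producing such a list, assuming the common 'image_path label_path' line format of the shipped sample lists and ADE20K-style directory names; verify both against 'evaluation/samplelist' before use:

        # build an evaluation list: one "relative_image_path relative_label_path"
        # line per sample, relative to the data_root configured in eval_all.m
        from pathlib import Path

        data_root = Path('/path/to/ADEChallengeData2016')  # matches 'data_root'
        images = sorted((data_root / 'images/validation').glob('*.jpg'))

        with open('evaluation/samplelist/my_eval_list.txt', 'w') as f:
            for img in images:
                rel_img = img.relative_to(data_root)
                rel_lab = Path('annotations/validation') / (img.stem + '.png')
                f.write(f'{rel_img} {rel_lab}\n')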
    
  5. Results:

    Predictions will be saved in folder 'evaluation/mc_result'; the expected scores are listed below.

    (mIoU/pAcc. stand for mean IoU and pixel accuracy; 'ss' and 'ms' denote single-scale and multi-scale testing; 'md5sum' identifies the corresponding model file, see the checksum sketch after the tables.)

    ADE20K:

    | network   | training data | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum |
    |-----------|---------------|--------------|-----------------|-----------------|--------|
    | PSANet50  | train         | val          | 41.92/80.17     | 42.97/80.92     | a8e884 |
    | PSANet101 | train         | val          | 42.75/80.71     | 43.77/81.51     | ab5e56 |

    VOC2012:

    | network   | training data          | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum |
    |-----------|------------------------|--------------|-----------------|-----------------|--------|
    | PSANet50  | train_aug              | val          | 77.24/94.88     | 78.14/95.12     | d5fc37 |
    | PSANet101 | train_aug              | val          | 78.51/95.18     | 79.77/95.43     | 5d8c0f |
    | PSANet101 | COCO + train_aug + val | test         | -/-             | 85.7/-          | 3c6a69 |

    Cityscapes:

    | network   | training data         | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum |
    |-----------|-----------------------|--------------|-----------------|-----------------|--------|
    | PSANet50  | fine_train            | fine_val     | 76.65/95.99     | 77.79/96.24     | 25c06a |
    | PSANet101 | fine_train            | fine_val     | 77.94/96.10     | 79.05/96.30     | 3ac1bf |
    | PSANet101 | fine_train            | fine_test    | -/-             | 78.6/-          | 3ac1bf |
    | PSANet101 | fine_train + fine_val | fine_test    | -/-             | 80.1/-          | 1dfc91 |
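
    The md5sum column can be used to sanity-check a downloaded model. A minimal sketch, assuming the column lists the first six hex digits of each model file's MD5 (the file name below is a placeholder):

        import hashlib

        def md5_prefix(path, n=6):
            h = hashlib.md5()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(1 << 20), b''):
                    h.update(chunk)
            return h.hexdigest()[:n]

        # e.g. expect 'a8e884' for the ADE20K PSANet50 model
        print(md5_prefix('evaluation/model/ade20k/psanet50.caffemodel'))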
  6. Demo video:

    • Video processed by PSANet (with PSPNet) on the BDD dataset for drivable area segmentation: Video.

Citation

If PSANet is useful for your research, please consider citing:

@inproceedings{zhao2018psanet,
  title={{PSANet}: Point-wise Spatial Attention Network for Scene Parsing},
  author={Zhao, Hengshuang and Zhang, Yi and Liu, Shu and Shi, Jianping and Loy, Chen Change and Lin, Dahua and Jia, Jiaya},
  booktitle={ECCV},
  year={2018}
}

Questions

Please contact '[email protected]' or '[email protected]'.
