This repository provides a PyTorch re-implementation of the paper "Disentangle then Parse: Night-time Semantic Segmentation with Illumination Disentanglement" (ICCV 2023).
- NightCity-fine: access the dataset via Google Drive.
- Cityscapes: access the dataset via cityscapes-dataset.com.
Set up your environment with these steps:
```bash
conda create -n dtp python=3.10
conda activate dtp
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
# Alternatively: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu116
pip install tensorboard
pip install -U openmim
mim install mmcv-full
pip install -v -e .
# Alternatively: python setup.py develop
```
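As an optional sanity check (not part of the original instructions), you can verify that PyTorch, mmcv, and the editable mmseg install are importable and that CUDA is visible:

```bash
# Optional: quick environment check; assumes the editable install above succeeded.
python -c "import torch, mmcv, mmseg; \
print('torch', torch.__version__, '| cuda available:', torch.cuda.is_available()); \
print('mmcv', mmcv.__version__, '| mmseg', mmseg.__version__)"
```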
- Decompress `nightcity-fine.zip` and move it to `./data/nightcity-fine`.
- Download Cityscapes.
- Clone and install the Cityscapes scripts:

  ```bash
  git clone https://github.com/mcordts/cityscapesScripts.git
  pip install cityscapesscripts
  ```

- Extract all of the above zip files and the cloned repo into the `./data` folder.
- Edit `cityscapesScripts/cityscapesscripts/preparation/createTrainIdLabelImgs.py`, add the following line after `import os`, then run the script and arrange the folders (a quick check of the result is sketched after this list):

  ```python
  os.environ['CITYSCAPES_DATASET'] = "../../../"
  ```

  ```bash
  python cityscapesScripts/cityscapesscripts/preparation/createTrainIdLabelImgs.py
  mkdir cityscapes
  mv gtFine cityscapes && mv leftImg8bit cityscapes
  ```

- Download the checkpoint from Google Drive and place it in `./checkpoints`.
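A rough way to confirm the Cityscapes conversion worked is sketched below; it assumes the commands above were run inside `./data` and that you are now at the repository root:

```bash
# Count the trainId label images produced by createTrainIdLabelImgs.py (should be > 0).
find data/cityscapes/gtFine -name "*_labelTrainIds.png" | wc -l
```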
Your directory structure should resemble:
```
.
├── checkpoints
│   ├── night
│   ├── night+day
│   └── simmim_pretrain__swin_base__img192_window6__800ep.pth
├── custom
├── custom-tools
│   ├── dist_test.sh
│   ├── dist_train.sh
│   ├── test.py
│   └── train.py
├── data
│   ├── cityscapes
│   │   ├── gtFine
│   │   └── leftImg8bit
│   └── nightcity-fine
│       ├── train
│       └── val
├── mmseg
├── readme.md
├── requirements.txt
├── setup.cfg
└── setup.py
```
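The short check below is an optional sketch (not from the original instructions) that verifies the key paths from this layout exist before running anything:

```bash
# Optional: confirm the expected checkpoint and data folders are in place.
for p in checkpoints/night checkpoints/night+day \
         data/cityscapes/gtFine data/cityscapes/leftImg8bit \
         data/nightcity-fine/train data/nightcity-fine/val; do
  [ -e "$p" ] && echo "ok      $p" || echo "missing $p"
done
```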
Execute tests using:
```bash
python custom-tools/test.py checkpoints/night/cfg.py checkpoints/night/night.pth --eval mIoU --aug-test
```
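For multi-GPU evaluation, the repository also ships `custom-tools/dist_test.sh`. Assuming it follows the standard MMSegmentation `dist_test.sh` interface (config, checkpoint, number of GPUs, then extra arguments), a call could look like:

```bash
# Hypothetical usage, assuming the standard MMSegmentation dist_test.sh argument order.
bash custom-tools/dist_test.sh checkpoints/night/cfg.py checkpoints/night/night.pth 4 --eval mIoU --aug-test
```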
- Download the pre-training weights from Google Drive.
- Convert them to the MMSeg format:

  ```bash
  python custom-tools/swin2mmseg.py </path/to/pretrain> checkpoints/simmim_pretrain__swin_base__img192_window6__800ep.pth
  ```

- Start training (a multi-GPU variant is sketched after this list):

  ```bash
  python custom-tools/train.py </path/to/your/config>
  # our configs: checkpoints/night/cfg.py or checkpoints/night+day/cfg.py
  ```
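If multiple GPUs are available, `custom-tools/dist_train.sh` can be used instead. Assuming it keeps the standard MMSegmentation `dist_train.sh` interface (config, then number of GPUs), a run might look like:

```bash
# Hypothetical usage, assuming the standard MMSegmentation dist_train.sh argument order.
bash custom-tools/dist_train.sh checkpoints/night/cfg.py 4
```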
The table below summarizes our findings:
| logs | train dataset | validation dataset | mIoU |
| --- | --- | --- | --- |
| checkpoints/night/eval_multi_scale_20230801_162237.json | nightcity-fine | nightcity-fine | 64.2 |
| checkpoints/night+day/eval_multi_scale_20230809_170141.json | nightcity-fine + cityscapes | nightcity-fine | 64.9 |
The NightCity-fine dataset is refined from the NightCity dataset by Xin Tan et al. and NightLab by Xueqing Deng et al.
This project is based on MMSegmentation.
The pre-training checkpoint comes from SimMIM.
The annotation process was completed using LabelMe.
If you find this code or data useful, please cite our paper:
```bibtex
@InProceedings{Wei_2023_ICCV,
    author    = {Wei, Zhixiang and Chen, Lin and Tu, Tao and Ling, Pengyang and Chen, Huaian and Jin, Yi},
    title     = {Disentangle then Parse: Night-time Semantic Segmentation with Illumination Disentanglement},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {21593-21603}
}
```