This repository contains the implementation of our RAL 2022 paper: An Algorithm for the SE(3)-Transformation on Neural Implicit Maps for Remapping Functions.
A brief introduction to this project is available on the project website.
Update 7 Aug 2022: it seems you can also download the Replica dataset here. I have not verified it, but it appears to be the same data.
```bash
conda create -n recons python=3.6
conda activate recons
conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.1 -c pytorch
pip install torch-scatter==1.4 open3d numba opencv-python
```
Make sure you have build-essential installed before building torch-scatter (`sudo apt install build-essential`). If you still have problems with torch-scatter, you can copy the required torch-scatter function directly into the script.
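If torch-scatter simply refuses to build, a plain-PyTorch fallback for the mean reduction it provides can stand in. The sketch below is only an assumption about which operation is needed (DI-Fusion-style pooling typically uses `scatter_mean`); the function name and signature mirror torch-scatter, but the code is a minimal substitute, not the library itself.

```python
import torch

def scatter_mean(src, index, dim=0, dim_size=None):
    """Pure-PyTorch stand-in for torch_scatter.scatter_mean along dim=0.

    Averages the rows of `src` that share the same value in `index`.
    Only the dim=0 case is handled here, which is the common usage.
    """
    assert dim == 0, "this fallback only implements dim=0"
    if dim_size is None:
        dim_size = int(index.max()) + 1
    out = torch.zeros((dim_size,) + src.shape[1:], dtype=src.dtype, device=src.device)
    count = torch.zeros(dim_size, dtype=src.dtype, device=src.device)
    out.index_add_(0, index, src)                                   # per-group sums
    count.index_add_(0, index, torch.ones_like(index, dtype=src.dtype))
    count = count.clamp(min=1)                                      # avoid division by zero
    return out / count.view(-1, *([1] * (src.dim() - 1)))
```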
Pretrained encoder-decoder parameters are provided in ./treasure for easy use.
Prepare data
- Download ICL-NUIM data (TUM RGB-D compatible PNGs with noise).
  `mkdir ./data` and unzip inside; for example, you will have `./data/ICL_NUIM/lr_kt0n`.
  (The Replica dataset requires contacting the iMAP authors; put it into ./data as described in ./config/ifr-fusion-replica.yaml.)
- For a quick start, please download our pose stream computed from ORB-SLAM2.
  `mkdir ./treasure/orbslam2_record` and unzip inside; for example, you will have `./treasure/orbslam2_record/lrkt0n` (a minimal loading sketch for this pose stream follows below).
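For reference, an ORB-SLAM2 trajectory is usually stored in the TUM format (`timestamp tx ty tz qx qy qz qw` per line). The sketch below turns such a file into 4x4 camera-to-world matrices; `pose_fmt.py` does the actual conversion in this repo, and the file name in the usage comment is hypothetical.

```python
import numpy as np

def load_tum_trajectory(path):
    """Read a TUM-format trajectory (t tx ty tz qx qy qz qw) into 4x4 poses."""
    poses = []
    for line in open(path):
        if line.startswith("#") or not line.strip():
            continue
        _, tx, ty, tz, qx, qy, qz, qw = [float(v) for v in line.split()][:8]
        # Quaternion (x, y, z, w) to rotation matrix.
        R = np.array([
            [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw), 2 * (qx * qz + qy * qw)],
            [2 * (qx * qy + qz * qw), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
            [2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw), 1 - 2 * (qx * qx + qy * qy)],
        ])
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = [tx, ty, tz]
        poses.append(T)
    return poses

# Example (hypothetical file name):
# poses = load_tum_trajectory("./treasure/orbslam2_record/lrkt0n/trajectory.txt")
```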
- Reconstruction Demo (opens a window showing the incremental reconstruction and writes the intermediate meshes to args.outdir).
```bash
python pose_fmt.py ./configs/ifr-fusion-lr-kt0.yaml
```
- Transformation Demo (saves (a) the transform-then-encode result [tgt.ply] and (b) the encode-then-transform result [tsrc.ply] to args.outdir + 'transform'; a conceptual sketch follows below).
```bash
python exp_transform.py ./configs/ifr-fusion-lr-kt0.yaml
```
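Conceptually, the transformation demo compares two routes to the same map: (a) transform the source points with an SE(3) pose and then encode them, versus (b) encode first and then apply the transformation to the neural implicit map. The snippet below is only an illustrative toy of that comparison with a placeholder "encoder"; it is not the algorithm from the paper, and `encode`/`apply_se3` are hypothetical names.

```python
import numpy as np

def apply_se3(T, points):
    """Apply a 4x4 SE(3) transform to an (N, 3) point array."""
    return points @ T[:3, :3].T + T[:3, 3]

def encode(points):
    # Placeholder "encoder": the point centroid stands in for the latent
    # code a real encoder-decoder would produce.
    return points.mean(axis=0)

T = np.eye(4)
T[:3, 3] = [0.5, 0.0, 0.2]                      # example transform (translation only)
src = np.random.rand(1000, 3)

code_a = encode(apply_se3(T, src))              # (a) transform-then-encode  -> tgt
code_b = apply_se3(T, encode(src)[None])[0]     # (b) encode-then-transform  -> tsrc
                                                # (trivially valid only for this toy encoder)
print(np.allclose(code_a, code_b))              # the paper's algorithm aims to make (b) match (a)
```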
Please find more details in configs/[config_file.yaml].
NOTE: the first run may take some time to compile the functions in system/ext from DI-Fusion.
The experiments were run on a single RTX 2080 Ti, but I find it also runs well (using around 3 GB of GPU memory) on my laptop (Legion R9000X) with a GTX 1650 Ti.
Reconstruction is run with `python pose_fmt.py [config_file.yaml]`, where we provide several example [config_file] entries in ./config.
In addition, we provide a guideline on what each option means in the config readme.
As we have already provided the pretrained model, this step is not necessary; the model transfers from synthetic data to real scenarios. But for those who are interested, we describe the procedure as follows:
- Follow DI-Fusion to prepare the ShapeNet data for training.
- Run `python network_trainer.py configs/train.yaml` to train.
NOTE: put the ShapeNet data on an SSD, because reading from an HDD during training is extremely slow.
This training step is essentially the same as in DI-Fusion; we only replace the encoder with the SO(3)-equivariant layers from vnn.
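For intuition, a Vector Neuron (vnn) linear layer keeps per-point features as lists of 3D vectors and mixes only the channel dimension, so rotating the input rotates the output by the same matrix. The sketch below is a minimal illustration of that equivariance property, not the actual layer used in this repo; see the vnn repository for the real implementation.

```python
import math
import torch
import torch.nn as nn

class VNLinear(nn.Module):
    """Minimal Vector-Neuron-style linear layer on features of shape (B, C, 3, N).

    The learned weight mixes channels only, never the 3D coordinate axis,
    which is what makes the layer SO(3)-equivariant.
    """
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.map = nn.Linear(in_channels, out_channels, bias=False)

    def forward(self, x):                                   # x: (B, C_in, 3, N)
        return self.map(x.transpose(1, -1)).transpose(1, -1)  # -> (B, C_out, 3, N)

# Equivariance check: rotating the input rotates the output identically.
layer = VNLinear(8, 16)
x = torch.randn(2, 8, 3, 100)
c, s = math.cos(0.7), math.sin(0.7)
R = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # rotation about z
rotated_then_mapped = layer(torch.einsum("ij,bcjn->bcin", R, x))
mapped_then_rotated = torch.einsum("ij,bcjn->bcin", R, layer(x))
print(torch.allclose(rotated_then_mapped, mapped_then_rotated, atol=1e-5))  # should print True
```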
Following a reviewer's suggestion, we further extend the test scope to the campus-scale KITTI odometry dataset. LiDAR SLAM on KITTI odometry in the community focuses more on trajectory accuracy, but here we use the dataset to provide a large-scale demo.
Prepare data
- Download our pose stream computed from PyICP-SLAM.
  `mkdir ./treasure/pyicp_slam_record` and unzip inside; for example, you will have `./treasure/pyicp_slam_record/kitti00`.
  Because PyICP-SLAM stores the pose stream in .csv files rather than the .txt format we used for the indoor data, the zip might be large (a minimal parsing sketch appears after this list).
- Download the KITTI odometry data so that you have a folder like `./data/kitti_odometry/dataset/sequences`.
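As a reading aid, the sketch below parses a CSV pose stream into 4x4 matrices, assuming each row flattens a 3x4 [R|t] matrix in the usual KITTI row-major order; the actual column layout of the PyICP-SLAM output and the file name used in the usage comment are assumptions (pose_fmt_kitti.py handles the real format).

```python
import csv
import numpy as np

def load_csv_poses(path):
    """Parse a CSV pose stream where each row is a flattened 3x4 [R|t] matrix."""
    poses = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            vals = [float(v) for v in row if v.strip()]
            T = np.eye(4)
            T[:3, :4] = np.asarray(vals[:12]).reshape(3, 4)   # row-major [R|t]
            poses.append(T)
    return poses

# Example (hypothetical file name):
# poses = load_csv_poses("./treasure/pyicp_slam_record/kitti00/pose.csv")
```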
Demo
```bash
python pose_fmt_kitti.py ./configs/test_kitti.yaml
```
- A full reconstruction release.
If you find this work interesting, please cite us:
```bibtex
@article{yuan2022transform,
  title={An Algorithm for the SE(3)-Transformation on Neural Implicit Maps for Remapping Functions},
  author={Yuan, Yijun and N{\"u}chter, Andreas},
  journal={IEEE Robotics and Automation Letters},
  year={2022},
  publisher={IEEE}
}
```