MVE Users Guide
Download and Building: In order to build the libraries, type make in the base path. Note that the ogl library requires OpenGL (it is only used by UMVE), but most applications do not require that library. Building this library will fail on systems without OpenGL, and this is fine as long as ogl is not required.
$ git clone https://github.com/simonfuhrmann/mve.git
$ cd mve
$ make -j8
User Interface UMVE: MVE can be operated without UMVE using the command line tools. However, UMVE is useful for inspecting the results of the reconstruction steps. UMVE is a Qt-based application, and qmake is used as the build system. To build and execute it, run:
$ cd apps/umve/
$ qmake && make -j8
$ ./umve
API Documentation: Optional API-level documentation can be generated using Doxygen:
$ make doc
$ open-browser docs/doxygen.html
System requirements to compile and run MVE or UMVE are:
- libjpeg (for MVE, http://www.ijg.org/)
- libpng (for MVE, http://www.libpng.org/pub/png/libpng.html)
- libtiff (for MVE, http://www.libtiff.org/)
- OpenGL (for libogl in MVE and UMVE)
- Qt 5 (for UMVE, http://qt.nokia.com)
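On Debian- or Ubuntu-based systems, these dependencies can usually be installed through the package manager. The package names below are a sketch; they vary between distributions and releases:
$ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev   # image libraries for MVE
$ sudo apt-get install qtbase5-dev libqt5svg5-dev           # Qt 5 for UMVE
$ sudo apt-get install libgl1-mesa-dev                      # OpenGL for libogl and UMVE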
See also Build Instructions for OS X and Build Instructions for Windows.
Currently, there is no install procedure. MVE apps do not depend on any external files. UMVE only requires access to the shaders and expects these files in the shader/ directory located next to the binary. If the shaders cannot be loaded from the file system, built-in fallback shaders are used.
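For example, when launching UMVE from its build directory, a layout along these lines is expected (illustrative only; other build artifacts omitted):
$ ls apps/umve/
shader/  umve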
- If UMVE does not show icons, SVG support for Qt is missing. Search for packages like libqt5svg5, qt5-qtsvg, etc.
The MVE image-based reconstruction pipeline is composed of the following components:
- Creating a dataset, by converting the input photos into The MVE File Format.
- Structure from Motion, which reconstructs the camera parameters of the input photos.
- Multi-View Stereo, which reconstructs dense depth maps for each image.
- Surface Reconstruction, which reconstructs a surface mesh from the depth maps.
All steps of the pipeline are available as console applications and can be executed on systems without a graphical user interface. Only Multi-View Stereo is currently accessible directly from within UMVE.
The following commands are a typical invocation of the pipeline. Read on for more information.
$ makescene -i <image-dir> <scene-dir>
$ sfmrecon <scene-dir>
$ dmrecon -s2 <scene-dir>
$ scene2pset -F2 <scene-dir> <scene-dir>/pset-L2.ply
$ fssrecon <scene-dir>/pset-L2.ply <scene-dir>/surface-L2.ply
$ meshclean -t10 <scene-dir>/surface-L2.ply <scene-dir>/surface-L2-clean.ply
The MVE libraries as well as UMVE are designed to work on MVE datasets. An MVE dataset is simply a directory that contains a views/ directory. A bundle file synth_0.out as well as other files may be placed in the dataset directory during the process.
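For illustration, a scene directory might look like this after running the pipeline (the view naming is a hypothetical example; see The MVE File Format for the exact layout):
$ ls <scene-dir>
synth_0.out  views/
$ ls <scene-dir>/views
view_0000.mve  view_0001.mve  ...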
The makescene command line application is used to convert input photos into an MVE scene. Don't worry, your original photos remain untouched. makescene can also import reconstructions from a few third-party Structure from Motion applications (see Third Party Bundles for details). Another way to create a new scene and import photos is to use UMVE.
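As a sketch of both modes, based on the pipeline summary above (the flag semantics here are assumptions; call makescene without arguments for the authoritative documentation):
$ makescene -i <image-dir> <scene-dir>    # import plain photos without camera information
$ makescene <bundle-dir> <scene-dir>      # import an existing third-party SfM bundle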
There are more advanced ways to create MVE datasets using the MVE API. This involves creating the dataset directory and the views/ directory, and implementing a program that creates the views with the help of the mve::View class. You may want to look at the makescene application code and the API-level documentation.
If makescene has been used to import from an existing Structure from Motion (SfM) reconstruction, this step can be omitted. The sfmrecon command line application runs the SfM reconstruction on the input images. See Structure from Motion for more details.
Note: Call any application without arguments to see the documentation.
The dmrecon application runs Multi-View Stereo (MVS) to reconstruct a depth map for every input image. MVS automatically chooses a resolution for the depth maps. It is rarely useful to reconstruct at full resolution: it produces less complete, noisier depth maps at a much higher processing cost. This behavior can be changed using options. See Multi-View Stereo for more details.
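As a sketch of the scale option used in the pipeline summary above (the exact semantics are an assumption; call dmrecon without arguments for the authoritative documentation):
$ dmrecon -s2 <scene-dir>    # reconstruct at pyramid level 2 (downscaled images)
$ dmrecon -s0 <scene-dir>    # full resolution: much slower, often noisier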
First, the scene2pset application is used to create an extremely dense point cloud as the union of all samples from all depth maps. Use the -F option to generate output that can be used by Floating Scale Surface Reconstruction and Poisson Surface Reconstruction. See the project websites for more information.
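Note that the level passed to -F is meant to match the scale used with dmrecon, as in the pipeline summary above (this pairing reflects how the example commands are set up; treat it as an assumption and check scene2pset's built-in documentation):
$ dmrecon -s2 <scene-dir>
$ scene2pset -F2 <scene-dir> <scene-dir>/pset-L2.ply    # -F level matches the -s scale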