Commit 093a638

fVDB: Added 'projects' directory to include more fully-featured example 'batteries included' projects

Added panoptic segmentation project with MaskPLS implementation
Fixed install issue where `fvdb.optim` was not included in the package
Re-introduced support for Volta & Turing architectures, but added errors for the use of certain conv backends
Added interpolation functions to VDBTensor

Signed-off-by: Jonathan Swartz <[email protected]>
swahtz committed Dec 11, 2024
1 parent 11836ad commit 093a638
Showing 58 changed files with 2,199 additions and 63 deletions.
2 changes: 1 addition & 1 deletion fvdb/Dockerfile
@@ -12,6 +12,6 @@ RUN pip install --no-cache-dir -r env/build_requirements.txt

RUN if [ "$MODE" = "production" ]; then \
MAX_JOBS=$(free -g | awk '/^Mem:/{jobs=int($4/2.5); if(jobs<1) jobs=1; print jobs}') \
TORCH_CUDA_ARCH_LIST="8.0;8.6;8.9+PTX" \
TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0;8.6+PTX" \
python setup.py install; \
fi
49 changes: 17 additions & 32 deletions fvdb/README.md
@@ -29,7 +29,7 @@ During the project's initial stage of release, it is necessary to [run the build

**Notes:**
* Linux is the only platform currently supported (Ubuntu >= 20.04 recommended).
* A CUDA-capable GPU with Ampere architecture or newer (i.e. compute capability >=8.0) is required to run the CUDA-accelerated operations in ƒVDB.
* A CUDA-capable GPU with Ampere architecture or newer (i.e. compute capability >=8.0) is recommended to run the CUDA-accelerated operations in ƒVDB. A GPU with compute capability >=7.0 (Volta architecture) is the minimum requirement, but some operations and data types are not supported on these older architectures (see the quick check sketched below).
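To verify what your GPU supports before building, the following minimal sketch uses only standard PyTorch calls (it is illustrative, not part of the install steps):

```python
import torch

# Report the compute capability of the default CUDA device (assumes CUDA is available).
major, minor = torch.cuda.get_device_capability()
print(f"Compute capability: {major}.{minor}")

if (major, minor) < (7, 0):
    print("Below the minimum required by fVDB (Volta, 7.0).")
elif major < 8:
    print("Volta/Turing: supported, but some operations and data types are unavailable.")
else:
    print("Ampere or newer: recommended configuration.")
```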


## Building *f*VDB from Source
@@ -60,65 +60,50 @@ docker build --build-arg MODE=dev -t fvdb/dev .
Running the docker container is done with the following command:
```shell
# Run an interactive bash shell (or replace with your command)
docker run -it --gpus all --rm \
docker run -it --ipc=host --gpus all --rm \
fvdb/dev:latest \
/bin/bash
```

When running the docker container in `dev` mode, once you are ready to build ƒVDB, run the following command to build it for the recommended set of CUDA architectures:
```shell
MAX_JOBS=$(free -g | awk '/^Mem:/{jobs=int($4/2.5); if(jobs<1) jobs=1; print jobs}') \
TORCH_CUDA_ARCH_LIST="8.0;8.6;8.9+PTX" \
TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0;8.6+PTX" \
python setup.py install
```

#### Setting up a Conda Environment

In order to get resolved package versions in your conda environment consistent with our testing, it is necessary to configure your `.condarc` since not all package resolving behaviour can be controlled with an `environment.yml` file. We recommend using `strict` channel priority in your conda configuration. This can be done by running the following command:
*f*VDB can be used with any Conda distribution. Below is an installation guide using
[miniforge](https://github.com/conda-forge/miniforge). You can skip steps 1-3 if you already have a Conda installation.

```shell
conda config --set channel_priority strict
```

Further, it is recommended not to mix the `defaults` and `conda-forge` package channels when resolving environments. We have generally used `conda-forge` as the primary channel for our dependencies. You can remove the `defaults` channel and add `conda-forge` with the following command:
1. Download and Run Install Script. Copy the command below to download and run the [miniforge install script](https://github.com/conda-forge/miniforge?tab=readme-ov-file#unix-like-platforms-macos--linux):

```shell
conda config --remove channels defaults
conda config --add channels conda-forge
```

With these changes, it is recommended that your `.condarc` file looks like the following:

```yaml
channel_priority: strict
channels:
  - conda-forge
```

```shell
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh
```

2. Follow the prompts to customize Conda and run the install. Note: we recommend answering yes to enable `conda-init`.

**(Optional) Install libMamba for a huge quality of life improvement when using Conda**
```shell
conda update -n base conda
conda install -n base conda-libmamba-solver
conda config --set solver libmamba
```
3. Start Conda. Open a new terminal window, which should now show Conda initialized to the `(base)` environment.

4. Create the `fvdb` conda environment. Run the following command from the root of this repository:

Next, create the `fvdb` conda environment by running the following command from the root of this repository, and then grabbing a ☕:
```shell
conda env create -f env/dev_environment.yml
```

**Notes:**
* You can optionally use the `env/build_environment.yml` environment file if you want a minimum set of dependencies needed just to build/package *f*VDB (note this environment won't have all the runtime dependencies needed to `import fvdb`).
* If you would like a runtime environment which has only the packages required to run the unit tests after building ƒVDB, you can use the `env/test_environment.yml`. This is the environment used by the CI pipeline to run the tests after building ƒVDB in the `fvdb_build` environment.
* Use the `fvdb_learn` environment defined in `env/learn_environment.yml` if you would like an environment with the runtime requirements and the additional packages needed to run the [notebooks](notebooks) or [examples](examples) and view their visualizations.
5. Activate the *f*VDB environment:

Now activate the environment:
```shell
conda activate fvdb
```

#### Other available environments
* `fvdb_build`: Use `env/build_environment.yml` for a minimum set of dependencies needed just to build/package *f*VDB (note this environment won't have all the runtime dependencies needed to `import fvdb`).
* `fvdb_test`: Use `env/test_environment.yml` for a runtime environment which has only the packages required to run the unit tests after building ƒVDB. This is the environment used by the CI pipeline to run the tests after building ƒVDB in the `fvdb_build` environment.
* `fvdb_learn`: Use `env/learn_environment.yml` for additional runtime requirements and packages needed to run the [notebooks](notebooks) or [examples](examples) and view their visualizations.

### Building *f*VDB

@@ -254,4 +239,4 @@ Please consider citing this when using *f*VDB in a project. You can use the cita
}
```

## Contact
## Contact
3 changes: 3 additions & 0 deletions fvdb/ci/main.sh
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
# Running a Github Action Runner, the first argument

# Starting a dockerd
4 changes: 3 additions & 1 deletion fvdb/examples/3dgs/accumulate_depths.py
@@ -1,4 +1,6 @@
import matplotlib.pyplot as plt
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
import numpy as np
import point_cloud_utils as pcu
import torch
3 changes: 3 additions & 0 deletions fvdb/examples/3dgs/download_example_data.py
@@ -1,4 +1,7 @@
#!/usr/bin/env python
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
"""Script to download benchmark dataset(s)"""

import os
3 changes: 3 additions & 0 deletions fvdb/examples/3dgs/evaluate_colmap.py
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
import time
from typing import Optional, Union

3 changes: 3 additions & 0 deletions fvdb/examples/3dgs/make_segmentation_dataset.py
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
import torch
import torch.utils.data
import tqdm
3 changes: 3 additions & 0 deletions fvdb/examples/3dgs/resume_colmap.py
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
from typing import Optional, Union

import torch
3 changes: 3 additions & 0 deletions fvdb/examples/3dgs/train_colmap.py
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
import itertools
import json
import logging
5 changes: 4 additions & 1 deletion fvdb/examples/3dgs/train_segmentation.py
@@ -1,9 +1,12 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#

import itertools
import time
from dataclasses import dataclass
from typing import Union

import matplotlib.pyplot as plt
import torch
import tqdm
import tyro
3 changes: 3 additions & 0 deletions fvdb/examples/3dgs/utils.py
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
import torch
import torch.nn.functional as F

3 changes: 3 additions & 0 deletions fvdb/examples/3dgs/viz.py
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
import dataclasses
import os
import sys
18 changes: 17 additions & 1 deletion fvdb/fvdb/nn/modules.py
@@ -268,11 +268,27 @@ def _dispatch_conv(self, in_feature, in_grid, in_kmap, out_grid):

backend = self.backend

sm_arch = torch.cuda.get_device_capability()[0] + torch.cuda.get_device_capability()[1] / 10
# tf32 requires compute capability >= 8.0 (Ampere)
if self.allow_tf32 and self.weight.is_cuda:
assert (
torch.cuda.get_device_capability()[0] >= 8
sm_arch >= 8
), "TF32 requires GPU with compute capability >= 8.0. Please set fvdb.nn.SparseConv3d.allow_tf32 = False."

# bf16 requires compute capability >= 8.0 (Ampere)
if self.weight.is_cuda and self.weight.dtype == torch.bfloat16:
assert sm_arch >= 8, "BF16 requires GPU with compute capability >= 8.0."

# float16 requires compute capability >= 7.5 (Turing)
if self.weight.is_cuda and self.weight.dtype == torch.float16:
assert sm_arch >= 7.5, "FP16 requires GPU with compute capability >= 7.5."

# cutlass, lggs, halo backends require compute capability >= 8.0 (Ampere)
if backend in ["cutlass", "lggs", "halo"]:
assert (
torch.cuda.get_device_capability()[0] >= 8
), "cutlass, LGGS and Halo backends require GPU with compute capability >= 8.0."

if backend == "cutlass" and (
(not self.weight.is_cuda) or (self.in_channels, self.out_channels) not in self.CUTLASS_SUPPORTED_CHANNELS
):
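The checks above gate TF32, bf16, fp16 and the cutlass/LGGS/Halo backends on the detected compute capability. Below is a minimal sketch of how calling code might adapt to an older GPU; it assumes only the attribute named in the assertion message (`fvdb.nn.SparseConv3d.allow_tf32`) and standard PyTorch calls:

```python
import torch
import fvdb.nn

# Same capability encoding as _dispatch_conv above: e.g. 8.6 for Ampere, 7.5 for Turing.
major, minor = torch.cuda.get_device_capability()
sm_arch = major + minor / 10

# TF32 requires Ampere (compute capability >= 8.0); disable it globally on older GPUs.
fvdb.nn.SparseConv3d.allow_tf32 = sm_arch >= 8.0

# bf16 weights also need >= 8.0 and fp16 needs >= 7.5, so fall back to fp32 on Volta.
if sm_arch >= 8.0:
    conv_dtype = torch.bfloat16
elif sm_arch >= 7.5:
    conv_dtype = torch.float16
else:
    conv_dtype = torch.float32

# The "cutlass", "lggs" and "halo" backends likewise require >= 8.0, so a different
# conv backend must be selected when running on Volta or Turing devices.
```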
20 changes: 19 additions & 1 deletion fvdb/fvdb/nn/vdbtensor.py
@@ -2,14 +2,16 @@
# SPDX-License-Identifier: Apache-2.0
#
from dataclasses import dataclass
from typing import Any, Optional, Union
from typing import Any, Optional, Tuple, Union

import torch

import fvdb
from fvdb import GridBatch, JaggedTensor, SparseConvPackInfo
from fvdb.types import Vec3dBatch, Vec3dBatchOrScalar, Vec3i

JaggedTensorOrTensor = Union[torch.Tensor, JaggedTensor]


@dataclass
class VDBTensor:
@@ -204,6 +206,22 @@ def _binop_inplace(self, other, op):
op(self.data, other)
return self

# -----------------------
# Interpolation functions
# -----------------------

def sample_bezier(self, points: JaggedTensorOrTensor) -> JaggedTensor:
return self.grid.sample_bezier(points, self.data)

def sample_bezier_with_grad(self, points: JaggedTensorOrTensor) -> Tuple[JaggedTensor, JaggedTensor]:
return self.grid.sample_bezier_with_grad(points, self.data)

def sample_trilinear(self, points: JaggedTensorOrTensor) -> JaggedTensor:
return self.grid.sample_trilinear(points, self.data)

def sample_trilinear_with_grad(self, points: JaggedTensorOrTensor) -> Tuple[JaggedTensor, JaggedTensor]:
return self.grid.sample_trilinear_with_grad(points, self.data)

def cpu(self):
return VDBTensor(self.grid.to("cpu"), self.data.cpu(), self.kmap.cpu() if self.kmap is not None else None)

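A short usage sketch for the new interpolation helpers, assuming `vdb_tensor` is an existing `VDBTensor` whose voxels already carry features and that `JaggedTensor` can be constructed from a list of per-batch point tensors:

```python
import torch
from fvdb import JaggedTensor

# Hypothetical query locations: one batch element with 100 points in world space.
points = JaggedTensor([torch.rand(100, 3, device="cuda")])

# Trilinear interpolation of the tensor's voxel features at the query points.
features = vdb_tensor.sample_trilinear(points)

# Variant that also returns the spatial gradient of the interpolated features.
features, grad = vdb_tensor.sample_trilinear_with_grad(points)

# Bezier interpolation follows the same call pattern.
bezier_features = vdb_tensor.sample_bezier(points)
bezier_features, bezier_grad = vdb_tensor.sample_bezier_with_grad(points)
```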
7 changes: 6 additions & 1 deletion fvdb/fvdb/types.py
@@ -1,5 +1,10 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#

from __future__ import annotations
from typing import List, Tuple, Union, Iterable

from typing import Iterable, List, Tuple, Union

import numpy
import torch
3 changes: 3 additions & 0 deletions fvdb/fvdb/utils/data/_colmap_utils/__init__.py
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
from .camera import Camera
from .database import COLMAPDatabase
from .image import Image
2 changes: 2 additions & 0 deletions fvdb/fvdb/utils/data/_colmap_utils/camera.py
@@ -1,4 +1,6 @@
# Author: True Price <jtprice at cs.unc.edu>
# SPDX-License-Identifier: Apache-2.0
#

import numpy as np
from scipy.optimize import root
4 changes: 3 additions & 1 deletion fvdb/fvdb/utils/data/_colmap_utils/database.py
@@ -1,4 +1,6 @@
import os
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
import sqlite3

import numpy as np
3 changes: 2 additions & 1 deletion fvdb/fvdb/utils/data/_colmap_utils/image.py
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: Apache-2.0
# Author: True Price <jtprice at cs.unc.edu>

#
import numpy as np

# -------------------------------------------------------------------------------
2 changes: 2 additions & 0 deletions fvdb/fvdb/utils/data/_colmap_utils/rotation.py
@@ -1,4 +1,6 @@
# SPDX-License-Identifier: Apache-2.0
# Author: True Price <jtprice at cs.unc.edu>
#

import numpy as np

2 changes: 2 additions & 0 deletions fvdb/fvdb/utils/data/_colmap_utils/scene_manager.py
@@ -1,4 +1,6 @@
# SPDX-License-Identifier: Apache-2.0
# Author: True Price <jtprice at cs.unc.edu>
#

import array
import os
6 changes: 4 additions & 2 deletions fvdb/fvdb/utils/data/_colmap_utils/tools/colmap_to_nvm.py
@@ -1,10 +1,12 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#

import itertools
import sys

sys.path.append("..")

import numpy as np

from .. import Quaternion, SceneManager

# -------------------------------------------------------------------------------
6 changes: 4 additions & 2 deletions fvdb/fvdb/utils/data/_colmap_utils/tools/delete_images.py
@@ -1,9 +1,11 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#

import sys

sys.path.append("..")

import numpy as np

from .. import DualQuaternion, Image, SceneManager

# -------------------------------------------------------------------------------
@@ -1,9 +1,11 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#

import sys

sys.path.append("..")

import numpy as np

from .. import DualQuaternion, Image, SceneManager

# -------------------------------------------------------------------------------
@@ -1,9 +1,11 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#

import sys

sys.path.append("..")

import os

import numpy as np

from .. import SceneManager
4 changes: 4 additions & 0 deletions fvdb/fvdb/utils/data/_colmap_utils/tools/transform_model.py
@@ -1,3 +1,7 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#

import sys

sys.path.append("..")
@@ -1,9 +1,11 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#

import sys

sys.path.append("..")

import numpy as np

from .. import SceneManager

# -------------------------------------------------------------------------------
@@ -1,3 +1,6 @@
# Copyright Contributors to the OpenVDB Project
# SPDX-License-Identifier: Apache-2.0
#
import sys

sys.path.append("..")