-
hey, thanks for starting a discussion! I think a swapping example would be interesting. similar to other model/pipeline specializations like 4d gaussians or surfels, swapping flexes the current rigid gaussian cloud bindings.

an LoD flow could look something like: bin gaussian visibility by camera depth/intrinsics on the GPU, then swap in the cloud (or detail level) that matches each bin. the particle behavior example would be a good starting point to quickly prototype such a gpu LoD filter.
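roughly, a CPU-side sketch of that binning (all names and thresholds here are made up; the real filter would run per-splat in a GPU preprocess pass):

```rust
// CPU-side sketch of depth/intrinsics binning; the real filter would run
// per-splat in a GPU preprocess pass. names and thresholds are illustrative.

/// map view-space depth to an LoD bin: splats whose projected footprint
/// (~ splat_radius / (depth * tan(fov_y / 2))) shrinks move to coarser bins.
fn lod_bin(view_depth: f32, fov_y: f32, splat_radius: f32) -> u32 {
    let footprint = splat_radius / (view_depth.max(1e-3) * (fov_y * 0.5).tan());
    match footprint {
        f if f > 0.05 => 0,  // near: full detail
        f if f > 0.01 => 1,
        f if f > 0.002 => 2,
        _ => 3,              // far: coarsest bin / candidate for culling
    }
}

fn main() {
    let fov_y = 60_f32.to_radians();
    for depth in [0.5_f32, 2.0, 10.0, 50.0] {
        println!("depth {depth:>5}: bin {}", lod_bin(depth, fov_y, 0.05));
    }
}
```

the same mapping could be evaluated per splat in a compute pass before sorting/drawing.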
I agree there is a lot to learn in this space. it would be great to better align the project with bevy's GPU-driven rendering vision (inviting @JMS55). a lot of the refactoring in this project is likely to take place in model binding, supporting more dynamic/specializable gaussian cloud formats/pipelines, similar to bevy's mesh. e.g. currently the project uses cargo features to switch between f16/f32 formats, and I've experimented with derive macro planar texture bind group formats; both feel less than ideal.
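for contrast, a minimal sketch of what a per-cloud key could look like if the f16/f32 choice moved out of cargo features (all names here are hypothetical, not current project API):

```rust
// hypothetical per-cloud pipeline key, in the spirit of bevy's MeshPipelineKey;
// the f16/f32 choice becomes runtime data instead of a cargo feature.

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum PositionFormat {
    F16,
    F32,
}

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct GaussianCloudKey {
    position_format: PositionFormat,
    planar_textures: bool, // planar texture bind groups vs storage buffers
}

impl GaussianCloudKey {
    /// shader defs a specialize() impl would feed to the WGSL preprocessor,
    /// so one shader source covers every format combination.
    fn shader_defs(&self) -> Vec<&'static str> {
        let mut defs = Vec::new();
        match self.position_format {
            PositionFormat::F16 => defs.push("GAUSSIAN_F16"),
            PositionFormat::F32 => defs.push("GAUSSIAN_F32"),
        }
        if self.planar_textures {
            defs.push("PLANAR_TEXTURE_BINDINGS");
        }
        defs
    }
}

fn main() {
    let key = GaussianCloudKey {
        position_format: PositionFormat::F16,
        planar_textures: false,
    };
    // a SpecializedRenderPipeline-style cache would compile one pipeline per
    // distinct key and reuse it across clouds sharing the same formats.
    println!("{key:?} -> {:?}", key.shader_defs());
}
```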
-
Thanks for the welcome! I was unable to run the particles example ("Platform unsupported by winit"). Is there a versioning issue with the current repo, or is this somehow on my end?

That preprocessing strikes me as something that would belong to an automatic LOD generation system, but not to a general swapping implementation. Any thoughts on that distinction? I was imagining the swapping functionality as a generic feature that the LOD system would then "implement".

The only thing I've found during a quick search about gpu-driven rendering is this section from the bevy 0.12 release post. Is there something specific that isn't currently aligning?
-
to enable a winit backend on linux, an x11 or wayland cargo feature needs to be enabled when building the example.

I agree, the swapping system could be generic and used by multiple downstream features (e.g. LoD).

the codebase is largely influenced by bevy 0.11 patterns, pre-dating some of the rebinding optimizations described in the gpu-driven rendering article you linked. I've tried some of the features in 0.12; model swapping in particular would benefit from the data patterns described there.

additionally, the flexibility of gaussian cloud should be similar to bevy's mesh, e.g. attribute/pipeline specialization, storage/texture formats, etc.
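a very rough ECS-side sketch of what a generic swap could look like (assuming bevy 0.12-style patterns and that the cloud handle lives on the entity as Handle<GaussianCloud>; everything here is illustrative, not current API):

```rust
// hedged sketch only: assumes a bevy 0.12-style setup where the cloud asset
// handle (Handle<GaussianCloud>) lives on the entity; fading and other
// condition types (time, splat attributes) are omitted for brevity.
use bevy::prelude::*;
use bevy_gaussian_splatting::GaussianCloud;

/// swap to `target` once the entity is farther than `distance` from the camera.
#[derive(Component)]
struct DistanceSwap {
    distance: f32,
    target: Handle<GaussianCloud>,
}

// registered with app.add_systems(Update, apply_distance_swaps) in a plugin.
fn apply_distance_swaps(
    cameras: Query<&GlobalTransform, With<Camera>>,
    mut clouds: Query<(&GlobalTransform, &mut Handle<GaussianCloud>, &DistanceSwap)>,
) {
    let Ok(camera) = cameras.get_single() else {
        return;
    };
    for (transform, mut cloud, swap) in &mut clouds {
        let d = camera.translation().distance(transform.translation());
        // only rebind when the condition flips, so the handle change (and any
        // GPU re-upload it triggers) happens once per swap.
        if d > swap.distance && *cloud != swap.target {
            *cloud = swap.target.clone();
        }
    }
}
```

an LoD feature would then just insert/update these components, and a particle feature could key the condition on time instead of distance.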
-
Seems like good starting points. Also: I think streaming the swap target clouds from disk would eventually be relevant as well, but it can probably be left out of the initial implementation. Does that seem like something that could be added on later?
-
related work: https://github.com/graphdeco-inria/hierarchical-3d-gaussians
-
Hey! I've been tentatively exploring the gaussian splat space for the past half year, and this project in particular has been fascinating to follow. I'm happy that you're taking the time to work on this!
I have an implementation/feature idea regarding LODs that I would like to share and get your thoughts on. It's to implement "swapping": essentially triggering an arbitrary collection of gaussians to be replaced with (or faded out into) another arbitrary collection when an arbitrary condition is fulfilled. You could also conceptualise them as constraints.
This would then also be an implementation of Levels of Detail, since that's just swapping models when they're x distance from the camera. Particle effects could be made by chaining a sequence of clouds as time passes, swaps could be triggered by individual splat attributes for more complex simulations, or used for movement through morphological space.
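To make the "arbitrary condition" part a bit more concrete, here is a rough, engine-agnostic sketch of how those use cases could map onto condition variants (all names are made up by me):

```rust
// purely illustrative, engine-agnostic sketch; each variant corresponds to
// one of the use cases above (LoD, chained particle clouds, attribute-driven
// simulation). all names are hypothetical.
enum SwapCondition {
    /// LoD: swap once the cloud is farther than this from the camera.
    CameraDistance { meters: f32 },
    /// particle effects: advance to the next cloud in a chain after a delay.
    Elapsed { seconds: f32 },
    /// simulations / morphing: swap when an aggregate splat attribute
    /// (e.g. mean opacity or a custom scalar channel) crosses a threshold.
    Attribute { channel: &'static str, threshold: f32 },
}

fn describe(condition: &SwapCondition) -> String {
    match condition {
        SwapCondition::CameraDistance { meters } => format!("beyond {meters} m"),
        SwapCondition::Elapsed { seconds } => format!("after {seconds} s"),
        SwapCondition::Attribute { channel, threshold } => {
            format!("when {channel} crosses {threshold}")
        }
    }
}

fn main() {
    let lod = SwapCondition::CameraDistance { meters: 25.0 };
    println!("swap {}", describe(&lod));
}
```

A swap system would then pair one of these conditions with a target cloud and apply it when the condition is met.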
I'd be interested in exploring and implementing a version of this. I haven't yet taken the time to learn how your implementation works, so I don't know how your recent work on 4D gaussians would overlap with this, or what your plans are for LOD and spatial querying. I also don't have any real graphics programming experience. I've been reading around and trying to learn the high-level concepts, but would love a concrete goal and contribution to work toward while learning.
For context, my motivation is that I am super excited about gaussians as a new graphics primitive, and I am making a creative application with Bevy that I would eventually want to utilise them in. Gaussians are so versatile; I think they have a lot of potential for simulation and procedural animation.
From what I've explored so far, it seems like there would be a lot to learn from UE5's Nanite, Bevy's new meshlets, and simple neural networks. But of course I would first start by familiarising myself with this codebase, playing around, and understanding how gaussian rendering works in general.
I wouldn't be able to properly get started during the summer, but I wanted to at least break the ice and start a discussion, forming a rough idea of how this would go and whether it would be useful or even feasible.