
Basic PBR implementation #261

Closed · wants to merge 3 commits

Conversation
Conversation

@StarArawn (Contributor) commented Aug 20, 2020

This is a WIP draft PR for adding basic PBR support to bevy. I used this as a resource for this implementation:
https://google.github.io/filament/Filament.md.html

@StarArawn (Contributor, Author) commented:

The text below was written up by @aclysma:

  • We probably want support for directional lights, point lights, and spot lights. It doesn’t block what you have so far I think, but do we have a plan for extending support for that? Should they be three different component types? (This is how I did it in my renderer)
  • Have we decided on units and scale? (for example with attenuation)? I recommend 1.0 in floating point = 1 meter and broadly the same SI units in places like the GLTF spec and the filament docs. (Maybe we should have a units/scale/handedness reference guide somewhere)
  • Is it possible to keep “pbr” in the StandardMaterial struct as separate f32 for roughness/metal? If not, is it possible to have a RoughnessMetallic type?
  • PBR is likely to need more than a few textures; do we need to combine the sampler/texture to keep binding count down? I’m not sure if there is an upper bound we need to worry about here. I looked at the Vulkan spec to get some idea and the only ones that looked relevant were maxBoundDescriptorSets and maxPerStage*, and those seem count-based, not index-based
  • For reference I have an implementation for shading here that supports normal mapping, several kinds of lights, etc. https://github.com/aclysma/engine_prototype/blob/master/assets/shaders/mesh.frag
  • I think we should be very thorough documenting where formulas are coming from. For example I think you used this for fresnel: https://google.github.io/filament/Filament.md.html#toc5.6.2 - this is mostly so anyone coming behind us knows if something errs more towards approximation or accuracy. And it makes it quick for someone less familiar to check the math in the code for accuracy against the original.
  • In general I’m inclined to go with epic’s approximations as I think they spent a significant amount of time comparing results/performance impact https://cdn2.unrealengine.com/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf (formula (5) in that doc has their approximation for fresnel for example)
  • Epic and Disney also remap roughness in the G component (it’s (4) in the epic paper)
  • There are #ifdefs for albedo texture - is the intent to use an #ifdef for every texture? I recommend using a branch for handling textures being “unset.” I know branches are generally considered taboo in shaders but from the reading I did, it seems that as long as every wave is taking the same branch, it won’t be a problem. In the end I think I still had to bind a texture to those “unused” texture slots to avoid validation errors but it was never actually sampled. I know it’s not ideal, but looking at the 5 textures I was using in my PBR implementation (it could be more for a more complete implementation!) using #ifdef would create 2^5 = 32 permutations of the shader.

@StarArawn (Contributor, Author) replied:

* We probably want support for directional lights, point lights, and spot lights. It doesn’t block what you have so far I think, but do we have a plan for extending support for that? Should they be three different component types? (This is how I did it in my renderer)

Yes, absolutely! In my opinion we should consider a combined light solution, although the branching might make things slower than we want. For now I was mostly focusing on point lights, since that's what the engine currently works with.

* Have we decided on units and scale? (for example with attenuation)? I recommend 1.0 in floating point = 1 meter and broadly the same SI units in places like the GLTF spec and the filament docs. (Maybe we should have a units/scale/handedness reference guide somewhere)

Scale is an important concept. I don't know if we want bevy to be responsible for determining scale, as that might differ from game to game. That said, I'm on board with 1.0f32 = 1 meter.

* Is it possible to keep “pbr” in the StandardMaterial struct as separate f32 for roughness/metal? If not, is it possible to have a RoughnessMetallic type?

Not with the way bind groups get generated currently. I think we can have a RoughnessMetallic type.

* PBR is likely to need more than a few textures; do we need to combine the sampler/texture to keep binding count down? I’m not sure if there is an upper bound we need to worry about here. I looked at the Vulkan spec to get some idea and the only ones that looked relevant were maxBoundDescriptorSets and maxPerStage*, and those seem count-based, not index-based

We have to keep bind groups to at most 4 to stay in line with the WebGPU spec. Natively we can request more, although I'm not sure we need to. As for uniforms, it depends on the type of uniform. Here is the complete list of limits:

dictionary GPULimits {
    GPUSize32 maxBindGroups = 4;
    GPUSize32 maxDynamicUniformBuffersPerPipelineLayout = 8;
    GPUSize32 maxDynamicStorageBuffersPerPipelineLayout = 4;
    GPUSize32 maxSampledTexturesPerShaderStage = 16;
    GPUSize32 maxSamplersPerShaderStage = 16;
    GPUSize32 maxStorageBuffersPerShaderStage = 4;
    GPUSize32 maxStorageTexturesPerShaderStage = 4;
    GPUSize32 maxUniformBuffersPerShaderStage = 12;
    GPUSize32 maxUniformBufferBindingSize = 16384;
};

https://gpuweb.github.io/gpuweb/#gpulimits
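
For illustration, here is a rough sketch of how bindings could be packed into the four available descriptor sets in Vulkan-style GLSL; the set/binding assignments and names below are assumptions for this example, not taken from this PR:

// Hypothetical layout: per-view data in set 0, per-object in set 1,
// per-material in set 2, and set 3 kept free for per-pass data.
layout(set = 0, binding = 0) uniform Camera { mat4 ViewProj; };
layout(set = 1, binding = 0) uniform Transform { mat4 Model; };
layout(set = 2, binding = 0) uniform StandardMaterial_albedo { vec4 Albedo; };
layout(set = 2, binding = 1) uniform texture2D StandardMaterial_albedo_texture;
layout(set = 2, binding = 2) uniform sampler StandardMaterial_albedo_texture_sampler;
// set = 3 remains available for lights or other per-pass data.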

* For reference I have an implementation for shading here that supports normal mapping, several kinds of lights, etc. https://github.com/aclysma/engine_prototype/blob/master/assets/shaders/mesh.frag

Thanks, I'll take a look! 👍

* I think we should be very thorough documenting where formulas are coming from. For example I think you used this for fresnel: https://google.github.io/filament/Filament.md.html#toc5.6.2 - this is mostly so anyone coming behind us knows if something errs more towards approximation or accuracy. And it makes it quick for someone less familiar to check the math in the code for accuracy against the original.

Yeah that's a really good idea!

* In general I’m inclined to go with epic’s approximations as I think they spent a significant amount of time comparing results/performance impact https://cdn2.unrealengine.com/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf (formula (5) in that doc has their approximation for fresnel for example)

I'm open to switching to that. Ultimately we need buy-in from @cart, but if Epic spent more time figuring things out, then perhaps it's a better fit than Filament. My only concern is that Filament seems a bit more open about how they figured things out, and the entire codebase is open source, which makes it a bit easier to digest.
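
For reference, a minimal GLSL sketch of the two Fresnel variants being discussed (not code from this PR): Schlick's pow5 form as used by Filament, and Epic's spherical-Gaussian approximation of the same term (formula (5) in the 2013 SIGGRAPH notes linked above), as I understand them:

// Schlick's approximation (f0 = reflectance at normal incidence).
vec3 F_Schlick(vec3 f0, float VoH) {
    return f0 + (vec3(1.0) - f0) * pow(1.0 - VoH, 5.0);
}

// Epic's spherical-Gaussian approximation of the same term,
// which replaces the pow with a cheaper exp2.
vec3 F_SchlickApprox(vec3 f0, float VoH) {
    return f0 + (vec3(1.0) - f0) * exp2((-5.55473 * VoH - 6.98316) * VoH);
}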

* Epic and Disney also remap roughness in the G component (it’s (4) in the epic paper)

Interesting. Seems like an easy enough thing to add in.
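
If I'm reading the Epic notes right, the remap for analytic lights is k = (roughness + 1)^2 / 8 inside the Schlick-GGX geometry term. A sketch of what that could look like in GLSL (an illustration, not code from this PR):

// Smith geometry term using the Schlick-GGX form, with Epic/Disney's
// roughness remap (roughness + 1)^2 / 8 for analytic lights.
float G_SchlickGGX(float NoX, float k) {
    return NoX / (NoX * (1.0 - k) + k);
}

float G_Smith(float NoV, float NoL, float roughness) {
    float r = roughness + 1.0;
    float k = (r * r) / 8.0; // remapped roughness for analytic lights
    return G_SchlickGGX(NoV, k) * G_SchlickGGX(NoL, k);
}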

* There are #ifdefs for albedo texture - is the intent to use an #ifdef for every texture? I recommend using a branch for handling textures being “unset.” I know branches are generally considered taboo in shaders but from the reading I did, it seems that as long as every wave is taking the same branch, it won’t be a problem. In the end I think I still had to bind a texture to those “unused” texture slots to avoid validation errors but it was never actually sampled. I know it’s not ideal, but looking at the 5 textures I was using in my PBR implementation (it could be more for a more complete implementation!) using #ifdef would create 2^5 = 32 permutations of the shader.

Yes, I agree; however, this is how the current system works. Perhaps we can create a ShaderBranch trait which lets you define variables that will be used as branches. Currently the RenderResources trait does not support the boolean type.
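
As a sketch of the branch-on-unset-texture idea: the flag uniform, bindings, and helper name below are hypothetical and not part of this PR, but show the general shape in Vulkan-style GLSL:

// Hypothetical bindings for a single optional albedo texture.
layout(set = 2, binding = 1) uniform texture2D albedo_texture;
layout(set = 2, binding = 2) uniform sampler albedo_texture_sampler;
layout(set = 2, binding = 3) uniform StandardMaterial_flags {
    uint has_albedo_texture;
};

vec4 base_color(vec2 uv, vec4 albedo) {
    // The branch depends only on uniform data, so every invocation in a
    // wave takes the same path and divergence is not a concern.
    if (has_albedo_texture != 0u) {
        albedo *= texture(sampler2D(albedo_texture, albedo_texture_sampler), uv);
    }
    return albedo;
}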

@StarArawn (Contributor, Author) commented:

One thing I noticed in Filament is that they have a cool idea for a pixel struct, which I kind of like:

struct PixelParams {
    vec3  diffuseColor;
    float perceptualRoughness;
    float perceptualRoughnessUnclamped;
    vec3  f0;
    float roughness;
    vec3  dfg;
    vec3  energyCompensation;

#if defined(MATERIAL_HAS_CLEAR_COAT)
    float clearCoat;
    float clearCoatPerceptualRoughness;
    float clearCoatRoughness;
#endif

#if defined(MATERIAL_HAS_ANISOTROPY)
    vec3  anisotropicT;
    vec3  anisotropicB;
    float anisotropy;
#endif

#if defined(SHADING_MODEL_SUBSURFACE) || defined(HAS_REFRACTION)
    float thickness;
#endif
#if defined(SHADING_MODEL_SUBSURFACE)
    vec3  subsurfaceColor;
    float subsurfacePower;
#endif

#if defined(SHADING_MODEL_CLOTH) && defined(MATERIAL_HAS_SUBSURFACE_COLOR)
    vec3  subsurfaceColor;
#endif

#if defined(HAS_REFRACTION)
    float etaRI;
    float etaIR;
    float transmission;
    float uThickness;
    vec3 absorption;
#endif
};

@aclysma (Contributor) commented Aug 21, 2020

I'm not sure if it's the same everywhere, but it might be: in Vulkan I was seeing all uniforms being 4k-aligned (see minUniformBufferOffsetAlignment). My guess is that for uniform data, the size of the struct won't matter if we're just talking about a few fields. So I'm not sure that compiling things out of the PixelParams struct will actually benefit performance in practice. (I could be wrong though!)

For multiple light types, what I did in my code was have separate arrays for each type of light. Anything being looped over is technically a branch, so I don't know that it's any better than branching to three different light impls. It makes for simple code though.

Again, as far as I know, branching isn't really expensive on current hardware if the branch is happening based on uniform data and all the work being done within a unit of compute is taking the same paths.

@karroffel added the C-Feature (A new feature, making something new possible) and A-Rendering (Drawing game state to the screen) labels on Aug 21, 2020
@StarArawn (Contributor, Author) commented Aug 21, 2020

I'm not sure if it's the same everywhere, but it might be: in Vulkan I was seeing all uniforms being 4k-aligned (see minUniformBufferOffsetAlignment). My guess is that for uniform data, the size of the struct won't matter if we're just talking about a few fields. So I'm not sure that compiling things out of the PixelParams struct will actually benefit performance in practice. (I could be wrong though!)

PixelParams is just a convenience struct that is passed around to various methods internally in the shader; I don't think it's actually a uniform. I liked it because it packages up the current fragment in a way that is relatable to PBR, which is cool imo.

For multiple light types, what I did in my code was have separate arrays for each type of light. Anything being looped over is technically a branch, so I don't know that it's any better than branching to three different light impls. It makes for simple code though.

Yeah, a separate buffer for each light type is fine I think, but we have to be careful because we get a max of 4 storage buffers per shader stage and 8 dynamic uniform buffers per pipeline layout (per the limits above). :(

@aclysma (Contributor) commented Aug 22, 2020

What I meant by separate arrays was one buffer like this:

// Represents the data uploaded to the GPU to provide all data necessary to render meshes
#[derive(Default, Copy, Clone)]
#[repr(C)]
pub struct MeshPerViewShaderParam {
    pub ambient_light: glam::Vec4,                  // +0
    pub point_light_count: u32,                     // +16
    pub directional_light_count: u32,               // +20
    pub spot_light_count: u32,                      // +24
    pub point_lights: [PointLight; 16],             // +32 (64*16 = 1024),
    pub directional_lights: [DirectionalLight; 16], // +1056 (64*16 = 1024),
    pub spot_lights: [SpotLight; 16],               // +2080 (96*16 = 1536)
} // 3616 bytes

I think I misunderstood about the PixelParams struct - I thought that was something (like properties for a material) being passed to the shader as a buffer :)
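
For illustration, the shader-side counterpart of such a combined buffer might look roughly like this in GLSL. The light struct bodies, set/binding numbers, and block name below are placeholders and don't reproduce the exact byte offsets noted above:

// Placeholder light structs; real layouts would carry more fields and padding.
struct PointLight       { vec4 position_and_range; vec4 color; };
struct DirectionalLight { vec4 direction;          vec4 color; };
struct SpotLight        { vec4 position_and_range; vec4 direction_and_angle; vec4 color; };

layout(std140, set = 0, binding = 1) uniform MeshPerViewShaderParam {
    vec4 AmbientLight;
    uint PointLightCount;
    uint DirectionalLightCount;
    uint SpotLightCount;
    PointLight       PointLights[16];
    DirectionalLight DirectionalLights[16];
    SpotLight        SpotLights[16];
};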

output_color.xyz = light_accum;

// Gamma correction.
output_color.xyz = output_color.xyz / (output_color.xyz + vec3(1.0));
@fu5ha (Contributor) commented on the lines above, Sep 6, 2020

Technically this is not gamma correction (i.e. correcting for the output color space transfer function), but is actually an artistic tone mapping operator (Reinhard).
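
For contrast, a minimal GLSL sketch of the two operations (an illustration, not code from this PR): the Reinhard operator used in the lines above versus an actual gamma-style output transform:

// Reinhard tone mapping: compresses HDR radiance into [0, 1).
vec3 tonemap_reinhard(vec3 color) {
    return color / (color + vec3(1.0));
}

// Gamma correction proper: approximate the sRGB transfer function on output.
vec3 gamma_correct(vec3 color) {
    return pow(color, vec3(1.0 / 2.2));
}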

@IngmarBitter mentioned this pull request on Dec 28, 2020
Base automatically changed from master to main February 19, 2021 20:44
@mtsr (Contributor) commented Mar 4, 2021

I've just rebased this PR on main. Now cherry-picking the commits from #1160.

@Ratysz (Contributor) commented Mar 6, 2021

Closing in favor of #1554.

@Ratysz closed this on Mar 6, 2021
bors bot pushed a commit that referenced this pull request Mar 20, 2021
This is a rebase of StarArawn's PBR work from #261 with IngmarBitter's work from #1160 cherry-picked on top.

I had to make a few minor changes to make some intermediate commits compile and the end result is not yet 100% what I expected, so there's a bit more work to do.

Co-authored-by: John Mitchell <[email protected]>
Co-authored-by: Ingmar Bitter <[email protected]>