
WebGPU: NodeMaterial BSDFs, revision and updates #21322

Merged (29 commits) Apr 23, 2021

Conversation

sunag commented Feb 21, 2021

  • FunctionNode, FunctionCallNode and dependencies
  • Object3DNode - base for ModelNode, CameraNode
  • Object3DNode.VIEW_POSITION - needed for lights
  • NormalNode: LOCAL, WORLD, VIEW - revised
  • PositionNode: LOCAL, WORLD, VIEW, VIEW_DIRECTION - revised
  • ConstNode and dependencies
  • Revised matrix names
  • MathConsts library
  • MathFunction library
  • BSDFunctions library
  • VarNode - variable creation node
  • Reserved keywords context
  • Flow code support
  • Add WebGPUNodeSampler and WebGPUNodeSampledTexture to instance NodeTexture
  • ContextNode and LightContextNode
  • LightNode - node for using THREE.Light
  • PropertyNode for material properties
  • MaterialNode - generates nodes from native material properties
  • LightsNode - collects all LightNode instances to allow selective lights per material
  • Finish LightsNode implementation for Blinn-Phong
  • webgpu_lights_webgl.html - remove test example
  • webgpu_lights_selective.html - finish example
  • Cleanup

WebGPU - Selective Lights
https://raw.githack.com/sunag/three.js/nodematerial-light/examples/webgpu_lights_selective.html

Suggested syntax for selective lights:

const lightNodeA = new LightNode( lightA );
const lightNodeB = new LightNode( lightB );

// from nodes
material.lightNode = new LightsNode( [ lightNodeA, lightNodeB ] );

// from lights
material.lightNode = LightsNode.fromLights( [ lightA, lightB ] );

mrdoob added this to the r127 milestone Feb 22, 2021

let RE_Direct = null;

if ( material.isMeshPhongMaterial === true ) {
Mugen87 commented Feb 28, 2021

Any reason why you decided on MeshPhongMaterial instead of MeshStandardMaterial? If I remember correctly, it was once considered to drop Phong support in WebGPU.

@mrdoob Do you have a preference here? I personally would start with a PBR material as the first lit material, although Phong would be easier to implement/port. And we would need to support it for WebGL anyway.

sunag commented Feb 28, 2021

Any reason why you decided on MeshPhongMaterial instead of MeshStandardMaterial?

It is a popular material and easier to implement... but I can start with MeshStandardMaterial, I see no problem with that.

Mugen87 commented:

Let's wait for @mrdoob's feedback. TBH, I'm not sure anymore whether there was a conclusion on this topic, or what it looked like.

Mugen87 commented Mar 2, 2021

One question about LightNode. The code from your first post shows how selective lighting can be implemented.

But the plan is still that materials will respond to lights added to the scene graph, right? I mean, this is obviously not yet implemented since the renderer needs to be enhanced, but I'd like to clarify the roadmap a bit.


if ( material.isMeshPhongMaterial === true ) {

RE_Direct = RE_Direct_BlinnPhong;
Mugen87 commented Mar 2, 2021

I have not yet debugged this part in detail, but I assume the RE_Direct define in the Phong shader will be set to the generated code of RE_Direct_BlinnPhong?

sunag commented Mar 3, 2021

Yes, but RE_Direct is a FunctionCallNode. I am preserving the names to make things easier to follow. NodeMaterial does not need to set a define for this. With LightContextNode and no defines we could even adopt two lighting models, like Phong and Physical, running simultaneously on the same material; it is just a matter of adding a few nodes.
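
A minimal sketch of that idea (not code from this PR): LightContextNode and LightsNode follow the names above, while OperatorNode and the way the two results are combined are assumptions for illustration.

const phongContext = new LightContextNode( LightsNode.fromLights( [ lightA ] ) );
const physicalContext = new LightContextNode( LightsNode.fromLights( [ lightB ] ) );

// each context would resolve its lights with its own BSDF functions;
// combining both results on one material is an assumption here
material.lightNode = new OperatorNode( '+', phongContext, physicalContext );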

sunag commented Mar 3, 2021

But the plan is still that materials will respond to lights added to the scene graph, right?

This should be very easy to implement, e.g.:

// WebGPUNodeBuilder.js

// cache light context and share with all materials of the scene
const sceneLightNodeSlot = new NodeSlot( new LightContextNode( LightsNode.fromLights( sceneLightsArray ) ), 'LIGHT', 'vec3' );

_parseMaterial() {

	...

	if ( material.lightNode !== undefined ) {
		
		// selective light
		
		const materialLightContextNode = new LightContextNode( material.lightNode );
		
		this.addSlot( 'fragment', new NodeSlot( materialLightContextNode, 'LIGHT', 'vec3' ) );
		
	} else {
	
		// scene lights
	
		this.addSlot( 'fragment', sceneLightNodeSlot );
	
	}
	
}

sunag commented Mar 6, 2021

I would like your opinion about keywords. Which standard should we use?
@mrdoob @Mugen87

  1. NormalLocal, NormalView, MaterialDiffuseColor, ReflectedLightDirectDiffuse
  2. NORMAL_LOCAL, NORMAL_VIEW, MATERIAL_DIFFUSE_COLOR, REFLECTED_LIGHT_DIRECT_DIFFUSE
  3. Suggestions?

Option 1 is the current one.

--

void RE_Direct_BlinnPhong( vec3 lightDirection, vec3 lightColor ) {

	float dotNL = saturate( dot( NormalView, lightDirection ) );
	vec3 irradiance = dotNL * lightColor;

#ifndef PHYSICALLY_CORRECT_LIGHTS

		irradiance *= PI; // punctual light

#endif

	ReflectedLightDirectDiffuse += irradiance * BRDF_Diffuse_Lambert( MaterialDiffuseColor.rgb );

}
void RE_Direct_BlinnPhong( vec3 lightDirection, vec3 lightColor ) {

	float dotNL = saturate( dot( NORMAL_VIEW, lightDirection ) );
	vec3 irradiance = dotNL * lightColor;

#ifndef PHYSICALLY_CORRECT_LIGHTS

		irradiance *= PI; // punctual light

#endif

	REFLECTED_LIGHT_DIRECT_DIFFUSE += irradiance * BRDF_Diffuse_Lambert( MATERIAL_DIFFUSE_COLOR.rgb );

}

mrdoob commented Mar 6, 2021

I think I prefer NormalLocal; otherwise it gets confused with defines like PHYSICALLY_CORRECT_LIGHTS.

sunag commented Mar 6, 2021

Another thing... It is possible to maintain compatibility with non-node property values like material.color in case the user does not use a NodeMaterial property like material.colorNode. Are we looking for this type of compatibility?

sunag commented Mar 6, 2021

For example, if material.colorNode is undefined, a material.colorNode = new ColorNode( material.color ) would be added instead in the WebGPUNodeBuilder process.
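
A hypothetical sketch of that fallback inside WebGPUNodeBuilder (not actual PR code; the 'COLOR' slot name is an assumption, the NodeSlot signature follows the earlier example):

// WebGPUNodeBuilder.js

_parseMaterial() {

	if ( material.colorNode === undefined ) {

		// fall back to the plain material property when no node was assigned by the user
		material.colorNode = new ColorNode( material.color );

	}

	this.addSlot( 'fragment', new NodeSlot( material.colorNode, 'COLOR', 'vec3' ) );

}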

sunag commented Mar 6, 2021

I created MaterialNode for this ( 07fdf79 ), but I don't know whether I should keep it or use only the fields intended for nodes.

Example of MaterialNode

 // added by user or WebGPUNodeBuilder if material.colorNode is empty
material.colorNode = new MaterialNode( MaterialNode.COLOR );

// user can change the values of the material in legacy mode (THREE.Color) 
// and MaterialNode will auto update the relative nodes
material.color = new THREE.Color( 0xFF00FF );

sunag commented Mar 6, 2021

If that sounds interesting, I would add other fields to MaterialNode, like:
MaterialNode.MAP, MaterialNode.NORMAL_MAP, MaterialNode.EMISSIVE, MaterialNode.ROUGHNESS...
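
Hypothetical usage if those fields were added (the roughnessNode/emissiveNode property names are assumptions, following the colorNode pattern above):

material.roughnessNode = new MaterialNode( MaterialNode.ROUGHNESS );
material.emissiveNode = new MaterialNode( MaterialNode.EMISSIVE );

// legacy-style updates would keep feeding the node graph
material.roughness = 0.5;
material.emissive = new THREE.Color( 0x202020 );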

sunag commented May 22, 2021

SkinningPositionNode missing normals

[screenshot]

sunag commented May 22, 2021

It seems that Uint16BufferAttribute still does not work in WebGPU?

FBXLoader.js test:

if ( skeleton ) {

	geo.setAttribute( 'skinIndex', new Float32BufferAttribute( buffers.weightsIndices, 4 ) ); // works
	//geo.setAttribute( 'skinIndex', new Uint16BufferAttribute( buffers.weightsIndices, 4 ) ); // does not work

	geo.setAttribute( 'skinWeight', new Float32BufferAttribute( buffers.vertexWeights, 4 ) );

	// used later to bind the skeleton to the model
	geo.FBX_Deformer = skeleton;

}

Mugen87 commented May 22, 2021

It seems that Uint16BufferAttribute still does not work in WebGPU?

Should be solved via Mugen87@006b2fc. WebGPURenderPipeline did not yet process all attribute types.
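
A rough sketch of the kind of mapping involved (not the contents of that commit; the helper name and format strings are illustrative, and WebGPU only exposes certain component counts per format):

// hypothetical helper: map a BufferAttribute's typed array to a GPUVertexFormat string

function getVertexFormat( attribute ) {

	const n = attribute.itemSize;

	if ( attribute.array instanceof Float32Array ) return n === 1 ? 'float32' : 'float32x' + n; // position, skinWeight, ...
	if ( attribute.array instanceof Uint16Array ) return 'uint16x' + n; // e.g. skinIndex with itemSize 4
	if ( attribute.array instanceof Uint32Array ) return n === 1 ? 'uint32' : 'uint32x' + n;

	console.warn( 'Unhandled attribute type:', attribute.array.constructor.name );

}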

Mugen87 commented May 22, 2021

@sunag Awesome work!

SkinningPositionNode missing normals

Um, do we need a SkinningNormalNode? The advantage of having a single skinning node class is a simpler update process for the skeleton.

BTW: By doing mvpNode.position = new SkinningPositionNode( object );, skinning won't work if the user defines a positionNode (maybe for some custom vertex displacement). I guess the user then has to create an instance of SkinningPositionNode and assign their own custom position node to SkinningPositionNode.position, right?
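
A hypothetical sketch of that combination, using the names from this discussion (customDisplacementNode is an illustrative placeholder, not part of the PR):

const skinningPosition = new SkinningPositionNode( skinnedMesh );

// feed the custom (displaced) position into the skinning node instead of the default
skinningPosition.position = customDisplacementNode;

mvpNode.position = skinningPosition;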

sunag commented May 22, 2021

This is going to need a review. I think the way forward would be to generalize to a super class for position/normal manipulation, like LightsNode is for lights.

At the moment I am thinking about this for the next step:

const boneMatrixStructNode = new BoneMatrixStructNode( index, boneTexture, boneTextureSize );

position = new SkinningPositionNode( position, boneMatrixStructNode, weight, bindMatrix, bindMatrixInverse );
normal = new SkinningNormalNode( normal, boneMatrixStructNode, weight, bindMatrix, bindMatrixInverse );

Mugen87 commented May 23, 2021

Where do we pass the instance of SkinnedMesh in this approach? It seems at least a single node class should hold it so it's possible to update the skeleton in the update() method.

I would also suggest passing the entire skinned mesh/node to SkinningPositionNode and SkinningNormalNode to simplify the ctor signature. Except for the position and normal node, all other parameters are properties of the skinned mesh. How about this?

const skinningNode = new SkinningNode( skinnedMesh );

const positionNode = new SkinningPositionNode( position, skinningNode );
const normalNode = new SkinningNormalNode( normal, skinningNode );

Mugen87 commented May 23, 2021

BTW: I think this is a good opportunity to find out how to use the node system to implement renderer features like skinning in a less hard-wired way^^.

sunag commented May 23, 2021

We just need one more intermediate vertex class for position/normal; I did not get to add it yet, so that gap remained.

Mugen87 commented May 23, 2021

Okay, then let's continue with the skinning topic when this new class is ready.

sunag commented Aug 24, 2021

@Mugen87 Is it possible for you to merge your skinning PR so I can make some updates?

Mugen87 commented Aug 24, 2021

The PR is quite outdated and it's probably better to make a new branch based on latest dev. I'm trying to prepare a PR tomorrow.

For stability reasons, it's probably better to merge it after r132 has been published anyway (which is planned for tomorrow^^).

sunag commented Aug 24, 2021

For stability reasons, it's probably better to merge it after r132 has been published anyway (which is planned for tomorrow^^).

No problem, we can wait until r132 has been published. There is a lot to do here too.

Mugen87 commented Aug 25, 2021

I've realized today that the skinning branch is broken and does not work with the latest dev. Here is the rebased branch:

Mugen87@a00b186

It's probably better if you just copy these changes to your local branch and then apply the respective updates.

sunag commented Aug 25, 2021

Thank you very much! Moreover, bringing SkinnedMesh to WebGPU was a great initiative.

sunag commented Sep 7, 2021

@Mugen87 I am having several problems using pointers with textures in GLSL to SPIR-V.
I don't know when this stopped working in WebGPU, but I haven't been able to find the origin for a few days.

layout(set = 0, binding = 0) uniform sampler nodeUniform2_sampler; 
layout(set = 0, binding = 1) uniform texture2D nodeUniform2;

vec3 nodeCode0 (  texture2D A, sampler B  )  {

	return vec3( 0.0 );

}

void main() {

	// Tint SPIRV reader failure:
	// Parser: error: function parameter of pointer type cannot be in 'none' storage class
	nodeCode0( nodeUniform2, nodeUniform2_sampler );

}

Full code

#version 450

// <node_builder>

#define NODE_MATERIAL

// defines
#define NODE_CODE nodeVary0 = uv; 
#define NODE_CODE_MVP nodeVar0 = ( nodeUniforms.nodeUniform0 * nodeUniforms.nodeUniform1 ); PositionLocal = position; nodeVar1 = nodeCode0( nodeUniform2, nodeUniform2_sampler ); 
#define NODE_MVP ( nodeVar0 * vec4( nodeVar1, 1.0 ) )

// uniforms
layout(set = 0, binding = 0) uniform sampler nodeUniform2_sampler; layout(set = 0, binding = 1) uniform texture2D nodeUniform2; layout(set = 0, binding = 2) uniform NodeUniforms { uniform mat4 nodeUniform0; uniform mat4 nodeUniform1;  } nodeUniforms; 

// attributes
layout(location = 0) in vec3 position; layout(location = 1) in vec2 uv; 

// varys
layout(location = 0) out vec2 nodeVary0; 

// vars
mat4 nodeVar0; vec3 nodeVar1; vec3 PositionLocal; 

// codes
vec3 nodeCode0 (  texture2D A, sampler B  )  {

	return PositionLocal;

}

// </node_builder>

void main(){

	NODE_CODE

	NODE_CODE_MVP

	gl_Position = NODE_MVP;

}

sunag commented Sep 7, 2021

Another example:

layout(set = 0, binding = 0) uniform sampler nodeUniform2_sampler; 
layout(set = 0, binding = 1) uniform texture2D nodeUniform2;

vec3 nodeCode0 (  texture2D A, sampler B  )  {

	// Tint SPIRV reader failure:
	// Parser: error: no matching call to textureSampleLevel(ptr<void, read_write>, ptr<void, read_write>, vec2<f32>, f32)
	return texture( sampler2D( A, B ), vec2( 0.0 ) ).xyz;

}

void main() {

	nodeCode0( nodeUniform2, nodeUniform2_sampler );

}

Mugen87 commented Sep 7, 2021

That is strange. I'll try to have a look at this tomorrow.

In the meantime, you might want to ask for advice directly in the Dawn or WebGPU chatrooms.

https://matrix.to/#/#webgpu-dawn:matrix.org
https://matrix.to/#/#WebGPU:matrix.org

For this issue, the Dawn chatroom seems more appropriate. The second link is more about WebGPU standard related questions.

sunag commented Sep 7, 2021

Thanks! I published a TextureNode PR needed to test this issue using NodeMaterial: #22501

sunag commented Sep 8, 2021

It worked ( texture pointer ) at that time: #21322 (comment)

Mugen87 commented Sep 8, 2021

Now I see the warning, too:

1 error(s) generated while compiling the shader:
error: no matching call to textureSampleLevel(ptr<void, read_write>, ptr<void, read_write>, vec2, f32)
15 candidate functions:
textureSampleLevel(texture: texture_2d, sampler: sampler, coords: vec2, level: f32) -> vec4
textureSampleLevel(texture: texture_2d, sampler: sampler, coords: vec2, level: f32, offset: vec2) -> vec4
textureSampleLevel(texture: texture_3d, sampler: sampler, coords: vec3, level: f32) -> vec4
textureSampleLevel(texture: texture_cube, sampler: sampler, coords: vec3, level: f32) -> vec4
textureSampleLevel(texture: texture_depth_2d, sampler: sampler, coords: vec2, level: i32) -> f32
textureSampleLevel(texture: texture_2d_array, sampler: sampler, coords: vec2, array_index: i32, level: f32) -> vec4
textureSampleLevel(texture: texture_3d, sampler: sampler, coords: vec3, level: f32, offset: vec3) -> vec4
textureSampleLevel(texture: texture_depth_2d, sampler: sampler, coords: vec2, level: i32, offset: vec2) -> f32
textureSampleLevel(texture: texture_depth_2d_array, sampler: sampler, coords: vec2, array_index: i32, level: i32) -> f32
textureSampleLevel(texture: texture_external, sampler: sampler, coords: vec2) -> vec4
textureSampleLevel(texture: texture_2d_array, sampler: sampler, coords: vec2, array_index: i32, level: f32, offset: vec2) -> vec4
textureSampleLevel(texture: texture_depth_2d_array, sampler: sampler, coords: vec2, array_index: i32, level: i32, offset: vec2) -> f32
textureSampleLevel(texture: texture_depth_cube, sampler: sampler, coords: vec3, level: i32) -> f32
textureSampleLevel(texture: texture_cube_array, sampler: sampler, coords: vec3, array_index: i32, level: f32) -> vec4
textureSampleLevel(texture: texture_depth_cube_array, sampler: sampler, coords: vec3, array_index: i32, level: i32) -> f32

The strange thing is that the texture sampling does not look different compared to the existing sampling code in other examples (e.g. in webgpu_sandbox). We always use this pattern:

 texture( sampler2D( texture, sampler ), coords )

The only difference is that the skinning code samples in the vertex shader.

sunag commented Sep 8, 2021

Strange that this pattern works if used without function arguments in both shader stages.

sunag deleted the nodematerial-light branch September 28, 2021 09:17
sunag commented Sep 28, 2021

I have just concluded some final steps of SkinnedMesh using a VBO in WebGPU. I had to try many approaches to create a good integration, and I think that using a VBO instead of a texture is better for performance too. Still missing: I need to finish the BufferNode API to publish the PR.

[screenshot]

sunag commented Sep 28, 2021

I still have many issues with WebGPU; this is one I just noticed:

// 60fps
vec3 getSkinningPosition( ) {
	mat4 boneMatX = bones[ int( index.x ) ];
	...
}

// ~5 fps
vec3 getSkinningPosition( in mat4[ 52 ] bones ) {
	mat4 boneMatX = bones[ int( index.x ) ];
	...
}

mrdoob commented Sep 28, 2021

/cc @Kangz

Kangz commented Sep 28, 2021

Uh, this must be because the underlying code is doing a copy of the 52 mat4s when calling the function. You'd be able to more precisely control the cost of what's happening by using WGSL directly. If you want to see what code is produced, you can run Chrome with the additional flag --enable-dawn-features=dump_shaders, and maybe --enable-dawn-features=dump_shaders,force_wgsl_step (if you're giving SPIR-V directly to Chromium).

Note that SPIR-V input to createShaderModule is only available in unsafe mode and is not standard. WebGPU is only specified to ingest WGSL, so it would be best for the NodeMaterial to output that if possible.

sunag commented Sep 28, 2021

@Kangz Thanks! This makes a lot of sense, and I already suspected that other issues I had, like this one ( #21322 (comment) ), are also related to the GLSL to SPIR-V compiler.

sunag mentioned this pull request Sep 29, 2021