Nodes: Add PixelationNode #28802
Conversation
📦 Bundle size: Full ESM build, minified and gzipped.
🌳 Bundle size after tree-shaking: Minimal build including a renderer, camera, empty scene, and dependencies.
Side note: This effect will benefit from #28749 since you need a normal render target. You might want to wait a bit until that issue is solved so it's clear how to access the depth/normal etc. buffers in an MRT setup. I personally focus on other effects right now until the above issue is solved. My first attempt at porting this pass looks like this:

import TempNode from '../core/TempNode.js';
import { uv } from '../accessors/UVNode.js';
import { addNodeElement, tslFn, nodeObject, float, vec2, vec3 } from '../shadernode/ShaderNode.js';
import { NodeUpdateType } from '../core/constants.js';
import { uniform } from '../core/UniformNode.js';
import { clamp, floor, smoothstep, dot, sign, step } from '../math/MathNode.js';
import { Vector4 } from '../../math/Vector4.js';
class PixelationNode extends TempNode {
constructor( textureNode, depthNode, normalNode, pixelSizeNode, normalEdgeStrength, depthEdgeStrength ) {
super();
this.textureNode = textureNode;
this.depthNode = depthNode;
this.normalNode = normalNode;
this.pixelSizeNode = pixelSizeNode;
this.normalEdgeStrength = normalEdgeStrength;
this.depthEdgeStrength = depthEdgeStrength;
this.updateBeforeType = NodeUpdateType.RENDER;
this._resolution = uniform( new Vector4() );
}
updateBefore() {
const map = this.textureNode.value;
const width = map.image.width;
const height = map.image.height;
this._resolution.value.set( width, height, 1 / width, 1 / height );
}
setup() {
const { textureNode, depthNode, normalNode } = this;
const uvNodeTexture = textureNode.uvNode || uv();
const uvNodeDepth = depthNode.uvNode || uv();
const uvNodeNormal = normalNode.uvNode || uv();
const sampleTexture = () => textureNode.uv( uvNodeTexture );
const sampleDepth = ( x, y ) => depthNode.uv( uvNodeDepth.add( vec2( x, y ).mul( this._resolution.zw ) ) ).r;
const sampleNormal = ( x, y ) => normalNode.uv( uvNodeNormal.add( vec2( x, y ).mul( this._resolution.zw ) ) ).rgb.mul( 2.0 ).sub( 1.0 ); // decode [ 0, 1 ] color back to [ - 1, 1 ] direction
const depthEdgeIndicator = ( depth ) => {
let diff = float( 0 );
diff = diff.add( clamp( sampleDepth( 1, 0 ).sub( depth ) ) );
diff = diff.add( clamp( sampleDepth( - 1, 0 ).sub( depth ) ) );
diff = diff.add( clamp( sampleDepth( 0, 1 ).sub( depth ) ) );
diff = diff.add( clamp( sampleDepth( 0, - 1 ).sub( depth ) ) );
return floor( smoothstep( 0.01, 0.02, diff ).mul( 2 ) ).div( 2 );
};
const neighborNormalEdgeIndicator = ( x, y, depth, normal ) => {
const depthDiff = sampleDepth( x, y ).sub( depth );
const neighborNormal = sampleNormal( x, y );
// Edge pixels should yield to faces whose normals are closer to the bias normal.
const normalEdgeBias = vec3( 1, 1, 1 ); // This should probably be a parameter.
const normalDiff = dot( normal.sub( neighborNormal ), normalEdgeBias );
const normalIndicator = clamp( smoothstep( - 0.01, 0.01, normalDiff ), 0.0, 1.0 );
// Only the shallower pixel should detect the normal edge.
const depthIndicator = clamp( sign( depthDiff.mul( 0.25 ).add( 0.0025 ) ), 0.0, 1.0 );
return float( 1.0 ).sub( dot( normal, neighborNormal ) ).mul( depthIndicator ).mul( normalIndicator );
};
const normalEdgeIndicator = ( depth, normal ) => {
let indicator = float( 0 );
indicator = indicator.add( neighborNormalEdgeIndicator( 0, - 1, depth, normal ) );
indicator = indicator.add( neighborNormalEdgeIndicator( 0, 1, depth, normal ) );
indicator = indicator.add( neighborNormalEdgeIndicator( - 1, 0, depth, normal ) );
indicator = indicator.add( neighborNormalEdgeIndicator( 1, 0, depth, normal ) );
return step( 0.1, indicator );
};
const pixelation = tslFn( () => {
const texel = sampleTexture();
let depth = float( 0 );
let normal = vec3( 0 );
if ( this.depthEdgeStrength > 0 || this.normalEdgeStrength > 0 ) {
depth = sampleDepth( 0, 0 );
normal = sampleNormal( 0, 0 );
}
let dei = float( 0 );
if ( this.depthEdgeStrength > 0 ) {
dei = depthEdgeIndicator( depth );
}
let nei = float( 0 );
if ( this.normalEdgeStrength > 0 ) {
nei = normalEdgeIndicator( depth, normal );
}
const strength = dei.greaterThan( 0 ).cond( float( 1 ).sub( dei.mul( this.depthEdgeStrength ) ), nei.mul( this.normalEdgeStrength ).add( 1 ) );
return texel.mul( strength );
} );
const outputNode = pixelation();
return outputNode;
}
}
export const pixelation = ( node, depthNode, normalNode, pixelSize, normalEdgeStrength = 0.3, depthEdgeStrength = 0.4 ) => nodeObject( new PixelationNode( nodeObject( node ).toTexture(), nodeObject( depthNode ), nodeObject( normalNode ), nodeObject( pixelSize ), normalEdgeStrength, depthEdgeStrength ) );
addNodeElement( 'pixelation', pixelation );
export default PixelationNode;

This is untested code though. Feel free to use it in your PR!
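As a plain-JavaScript sanity check of the edge-detection math in the draft above, this sketch re-implements the depthEdgeIndicator quantization outside of three.js (clamp and smoothstep are re-implemented here to mirror the GLSL built-ins; none of this is part of the PR itself):

```javascript
// Plain-JS re-implementation of the depthEdgeIndicator math for sanity checks.
// clamp / smoothstep mirror the GLSL built-ins used by the node version.
const clamp = ( x, lo = 0, hi = 1 ) => Math.min( Math.max( x, lo ), hi );
const smoothstep = ( e0, e1, x ) => {

	const t = clamp( ( x - e0 ) / ( e1 - e0 ) );
	return t * t * ( 3 - 2 * t );

};

// depth: center depth; neighbors: depths sampled at (1,0), (-1,0), (0,1), (0,-1)
function depthEdgeIndicator( depth, neighbors ) {

	let diff = 0;
	for ( const n of neighbors ) diff += clamp( n - depth );

	// floor( smoothstep( 0.01, 0.02, diff ) * 2 ) / 2 quantizes to 0, 0.5 or 1
	return Math.floor( smoothstep( 0.01, 0.02, diff ) * 2 ) / 2;

}
```

A flat region (all neighbor depths equal to the center) yields 0, while a clear depth discontinuity saturates to 1.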
Just for future reference, are most of these passes already implemented in TSL but just waiting for the implementation of Auto-MRT?
No. I'm not sure who else is working on porting FX effects to TSL, but I'm experimenting with FXAA right now. When MRT is ready, my next task is GTAO and UnrealBloom. Of course I'll review and support what others contribute. Some things in TSL and in the post-processing system still have to be added or fixed, so you might hit a point where you need to stall a task. I'm also wondering if we should convert all passes in the post-processing directory to TSL. E.g. a single AO pass (GTAO) is maybe sufficient. I definitely vote for
Force-pushed from 79c0c49 to 43cb319 (compare)
scene.add( target );
target.position.set( 0, 0, 0 );
spotLight.castShadow = true;
scene.add( spotLight );
Do you mind creating the example with the exact same lighting conditions as the original version?
It's important to perform a 1:1 comparison so we can see possible deviations.
I can apply the same lighting parameters to the scene, but the original lighting can't be replicated due to the shadow artifacts mentioned in #28642. In my testing, almost any webgl example that uses standard Three.js lights and casts shadows onto other objects exhibits similar rendering issues when ported over to the WebGPURenderer.
Then ignore the shadow casting for now. However, the type of lights and their parametrization should match; otherwise the scene's color tone is different, which makes it impossible to review the PR.
const scenePass = pass( scene, camera );
scenePass.setMRT( mrt( {
output: output,
normal: directionToColor( normalView ),
This should be just normal: normalView. In the shader, you can then directly use the sampled normal values and don't have to convert them anymore. The render target is of type half float, so there is no need to convert to RGB8 and back.
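The round trip being discussed can be illustrated outside of three.js. In this plain-JS sketch, directionToColor and colorToDirection are re-implemented as simple array helpers only for illustration (the real three.js helpers operate on nodes); it shows the [-1, 1] to [0, 1] encode/decode pair that a half-float MRT target makes unnecessary:

```javascript
// directionToColor maps a [ -1, 1 ] direction into [ 0, 1 ] color space;
// the inverse ( c * 2 - 1, as in the node's sampleNormal ) undoes it.
const directionToColor = ( n ) => n.map( ( v ) => v * 0.5 + 0.5 );
const colorToDirection = ( c ) => c.map( ( v ) => v * 2 - 1 );

const normal = [ 0, 0, 1 ];
const encoded = directionToColor( normal ); // [ 0.5, 0.5, 1 ]
const decoded = colorToDirection( encoded ); // back to [ 0, 0, 1 ]
```

With a half-float target, the normal can be written and sampled in [-1, 1] directly, skipping both steps.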
That certainly makes sense to me, although for some reason when testing, the output of directionToColor(normalView) matched the normal output of RenderPixelatedPass while normalView did not. That never seemed right to me though, so I'll go back and check.
I've reverted to normalView, but the effect of normalEdgeStrength is noticeably weaker compared to using directionToColor( normalView ). I'll add some images to show what I mean, but I believe that in this instance, to replicate the effect properly, directionToColor( normalView ) is how our normal pass needs to be configured.
WebGL Pixelation tNormal texel output:
WebGL Pixelation Output ( Max NormalEdgeStrength)
WebGPU Pixelation normalView texel output (i.e. the raw output of the scenePass's normal texture node when the normal render target is set to normalView):
WebGPU Pixelation Output ( Max NormalEdgeStrength with normalView as scenePass normal output )
// Note the complete lack of any edge detection on the Icosahedron
WebGPU Pixelation directionToColor( normalView ) texel output
// Image is not pixelized since the resolution is adjusted as a post-step in this implementation
WebGPU Pixelation Output ( Max NormalEdgeStrength with directionToColor( normalView ) as scenePass normal output )
These are roughly analogous to the effects I noticed before, which is why I ultimately chose directionToColor( normalView ) over normalView, even if intuition would lead us to use normalView.
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
import { GUI } from 'three/addons/libs/lil-gui.module.min.js';

import { pass, mrt, output, normalView, uniform, directionToColor } from 'three/tsl';
Check notice (Code scanning / CodeQL): Unused variable, import, function or class
src/nodes/display/PixelationNode.js
Outdated
// Set resolution uniform

const adjustedWidth = map.image.width / this.pixelSizeNode.value;
const adjustedHeight = map.image.height / this.pixelSizeNode.value;
You must compute integers here (like in the original). This will also make the versions more consistent:
const adjustedWidth = Math.floor( map.image.width / this.pixelSizeNode.value );
const adjustedHeight = Math.floor( map.image.height / this.pixelSizeNode.value );
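As a quick sanity check of the suggested change, here is the same computation as a standalone function (the 1920x1080 dimensions and pixel size are example values, not from the PR):

```javascript
// Integer-truncated render-target size, as suggested above: fractional texel
// counts would make the two implementations drift apart.
function adjustedSize( width, height, pixelSize ) {

	return {
		width: Math.floor( width / pixelSize ),
		height: Math.floor( height / pixelSize )
	};

}

// e.g. a 1920x1080 target with pixelSize 7 must not produce fractional texels
const size = adjustedSize( 1920, 1080, 7 ); // { width: 274, height: 154 }
```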
BTW: The original produces its beauty, depth and normal buffers with these adjusted resolution values. The new one does not since it uses the true resolution from the renderer. I wonder if this bit is responsible for the final visual difference.
What I am referring to is some kind of offset of the entire scene that gets larger and more noticeable when increasing the pixel size (it's hard to describe, tbh).
This offset is still something that can be considered as a bug and needs to be resolved (at least we should understand what's going on^^).
As per #28802 (comment), my initial first draft port was what you described, directly resizing the render targets based on the adjusted width and height, but WebGPURenderer and WebGLRenderer seem to handle the final output of a render target differently.
Would you mind implementing the solution again? I would like to investigate the visual differences.
In any event, we should produce the beauty, depth and normal buffers like in RenderPixelatedPass, meaning with a lower resolution. Depending on the pixelSize value, we are able to produce these buffers with a much lower resolution than now, which saves a lot of GPU bandwidth. We have to keep this optimization when porting the effect over to WebGPURenderer and PostProcessing.
RenderPixelatedPass is an alternative to RenderPass, so it makes sense to rename the node to PixelationPassNode and derive it from PassNode. Instead of using pass() in the example, it would be:

const scenePass = pixelationPass( scene, camera );

The MRT configuration happens inside PixelationPassNode then.
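The suggested restructuring could look roughly like this. PassNode, mrt, output and normalView are real three.js APIs, but they are replaced with minimal stand-ins here so the sketch stays self-contained; the actual constructor signatures and MRT wiring may differ:

```javascript
// Minimal stand-in for three.js PassNode (illustration only, not the real class).
class PassNode {

	constructor( scene, camera ) {

		this.scene = scene;
		this.camera = camera;
		this.mrt = null;

	}

	setMRT( mrt ) {

		this.mrt = mrt;
		return this;

	}

}

// Sketch: the pass owns its MRT configuration instead of the example doing it.
class PixelationPassNode extends PassNode {

	constructor( scene, camera, pixelSize = 6 ) {

		super( scene, camera );
		this.pixelSize = pixelSize;

		// in the real node this would be setMRT( mrt( { output, normal: normalView } ) )
		this.setMRT( { output: 'output', normal: 'normalView' } );

	}

}

const pixelationPass = ( scene, camera, pixelSize ) => new PixelationPassNode( scene, camera, pixelSize );
```

With this shape, the example only needs `const scenePass = pixelationPass( scene, camera );` as suggested above.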
I'll convert the PR back to "draft". There are still conceptual issues to implement so a final review is not yet possible.
…olution, change uniform naming to match other uniforms, fix comment spacing in updateBefore
src/nodes/display/PixelationNode.js
Outdated
} );
const lowerResolution = tslFn( () => { |
Check notice (Code scanning / CodeQL): Unused variable, import, function or class
The pass now produces a consistent result! Well done! 🙌
Description
A port of the existing pixelation pass to the node post-processing system. Will add more details when the implementation is complete.