
WebGPURenderer: Compute Shaders - Extend max workgroups capabilities #28846

Merged · 2 commits into mrdoob:dev · Jul 11, 2024

Conversation

RenaudRohlinger (Collaborator)

Description

The current implementation of dispatchWorkgroups is limited to a single dimension, which is capped at device.limits.maxComputeWorkgroupsPerDimension, usually 65535 (about 64k).

With the current implementation, the computeNode is initialized with a workgroup size of 64 in X, resulting in a maximum of 65535 * 64 = 4,194,240 invocations.

This PR raises this limit by automatically dispatching the workgroups across the second dimension, lifting the default capacity to 65535^2 * 64 = 274,869,518,400 invocations, which I believe should handle every case. 😄
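For illustration, here is a minimal sketch of the splitting logic. This is not the PR's actual code; `dispatchLarge`, `passEncoder`, `device`, and `count` are hypothetical stand-ins for the renderer's real objects:

```js
// Sketch: fold a large 1D workload into the X and Y dispatch dimensions,
// so workloads needing more than maxComputeWorkgroupsPerDimension
// workgroups can still be dispatched in a single call.
function dispatchLarge( passEncoder, device, count, workgroupSize = 64 ) {

	const maxPerDimension = device.limits.maxComputeWorkgroupsPerDimension; // typically 65535

	// Total workgroups needed to cover `count` invocations.
	const workgroups = Math.ceil( count / workgroupSize );

	// Anything beyond the per-dimension cap spills into Y.
	const dispatchX = Math.min( workgroups, maxPerDimension );
	const dispatchY = Math.ceil( workgroups / maxPerDimension );

	passEncoder.dispatchWorkgroups( dispatchX, dispatchY, 1 );

}
```

The shader side then reconstructs the linear invocation index from the 2D workgroup id, e.g. `(workgroup_id.y * num_workgroups.x + workgroup_id.x) * 64u + local_invocation_id.x` in WGSL, and guards against the overshoot that occurs when the workgroup count does not divide evenly.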

In the future, we could extend this by exposing manual control of the X, Y, and Z dimensions for specific use cases, such as physics simulations. However, this should suffice for now, as 1D and 3D dispatches seem to yield the same performance on most GPUs.

This contribution is funded by Utsubo

@RenaudRohlinger added this to the r167 milestone Jul 11, 2024

github-actions bot commented Jul 11, 2024

📦 Bundle size

Full ESM build, minified and gzipped.

| Filesize dev | Filesize PR | Diff |
| --- | --- | --- |
| 683.5 kB (169.2 kB) | 683.5 kB (169.2 kB) | +0 B |

🌳 Bundle size after tree-shaking

Minimal build including a renderer, camera, empty scene, and dependencies.

| Filesize dev | Filesize PR | Diff |
| --- | --- | --- |
| 460.7 kB (111.1 kB) | 460.7 kB (111.1 kB) | +0 B |

@sunag merged commit dbfc594 into mrdoob:dev Jul 11, 2024
12 checks passed