Improve ergonomics of Query parallelism #4441
Comments
Just wanted to point out that there are uses for task pools outside of queries. They should not be usable only through a query.
That would make some of the usage of
This would mostly just be embedding a ComputeTaskPool in QueryState and fetching/cloning the one stored in the World upon creation. Alternatively, having a global static ComputeTaskPool achieves the same thing.
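As a rough sketch of the global-static variant, assuming std::sync::OnceLock (the GLOBAL_COMPUTE_POOL static and compute_pool helper are hypothetical names, not existing Bevy items):

```rust
use std::sync::OnceLock;
use bevy_tasks::TaskPool;

// Hypothetical process-wide pool, initialized once on first use.
static GLOBAL_COMPUTE_POOL: OnceLock<TaskPool> = OnceLock::new();

// Returns the shared pool, creating it with default settings if needed.
fn compute_pool() -> &'static TaskPool {
    GLOBAL_COMPUTE_POOL.get_or_init(TaskPool::new)
}
```

Later Bevy releases did move to globally accessible pools in this spirit (ComputeTaskPool::get()).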
This is still doable. The other main use case, Command construction, can be addressed by either making Commands Send + Sync or by adding a ParallelCommands alternative. I think @TheRawMeatball had some ideas on how to approach this.
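For illustration, a sketch of how a ParallelCommands alternative could look at a call site. The shape below follows the ParallelCommands system param Bevy eventually shipped, but the details should be read as illustrative rather than as API that existed when this comment was written:

```rust
use bevy::ecs::system::ParallelCommands;
use bevy::prelude::*;

#[derive(Component)]
struct Expired;

// Each worker gets its own Commands via command_scope, so commands can be
// queued from inside a parallel iteration without Commands itself being
// Send + Sync.
fn despawn_expired(par_commands: ParallelCommands, query: Query<Entity, With<Expired>>) {
    query.par_iter().for_each(|entity| {
        par_commands.command_scope(|mut commands| {
            commands.entity(entity).despawn();
        });
    });
}
```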
What problem does this solve or what need does it fill?
Right now, calling Query::par_for_each(_mut) requires a TaskPool, a batch size, and a function to run on each entity. This effectively requires the system to fetch a task pool resource and the user to hand-tune a batch size. Ideally both of these arguments would be optional, defaulting to sane values for the types and operations involved.
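For concreteness, a sketch of a typical call site under the current signature (the Position/Velocity components and the batch size of 32 are illustrative):

```rust
use bevy::prelude::*;
use bevy::tasks::ComputeTaskPool;

#[derive(Component)]
struct Position(Vec3);

#[derive(Component)]
struct Velocity(Vec3);

// Both the pool resource and the batch size must be supplied explicitly,
// even though neither is something most callers want to think about.
fn movement(pool: Res<ComputeTaskPool>, mut query: Query<(&Velocity, &mut Position)>) {
    query.par_for_each_mut(&pool, 32, |(velocity, mut position)| {
        position.0 += velocity.0;
    });
}
```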
Additionally, the split between par_for_each and for_each forces the user to be aware of the parallelization of the query. Ideally any Query that is heavy enough should be able to transparently split a for_each run into multiple chunks and parallelize across multiple cores. This should enable systems to dynamically scale heavy loads depending on the current demands of the app.
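As a sketch of what "heavy enough" could mean in practice, an internal heuristic along these lines could pick the batch size automatically; the constants below are purely illustrative assumptions, not something proposed in this issue:

```rust
/// Illustrative default: aim for a few batches per worker thread so the
/// scheduler can balance load, and fall back to one big batch (i.e. serial
/// execution) when the entity count is too small to be worth splitting.
fn default_batch_size(entity_count: usize, thread_count: usize) -> usize {
    const BATCHES_PER_THREAD: usize = 4; // arbitrary oversubscription factor
    const SERIAL_THRESHOLD: usize = 1024; // below this, don't parallelize
    if entity_count < SERIAL_THRESHOLD {
        entity_count.max(1)
    } else {
        (entity_count / (thread_count * BATCHES_PER_THREAD)).max(1)
    }
}
```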
What solution would you like?
What alternative(s) have you considered?
Transparent APIs for iterator composition (i.e. par_iter w/ Iterator-like APIs).
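A sketch of what that alternative could look like at the call site, reusing the Position/Velocity components from the earlier sketch; the method names mirror the par_iter direction later Bevy releases took and are illustrative for the time of this issue:

```rust
// Hypothetical: no pool or batch size at the call site; the iterator
// picks a batching strategy internally.
fn movement(mut query: Query<(&Velocity, &mut Position)>) {
    query.par_iter_mut().for_each(|(velocity, mut position)| {
        position.0 += velocity.0;
    });
}
```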