System-order-independent ECS change tracking #68
So, the current behaviour has a seriously broken edge case that seems likely to emerge in larger apps. Suppose we have a system which relies on…

Pulling from #54, there seem to be a few options.
Edit: Apparently #6 is called a "ring buffer", which should help with finding prior art on efficient implementations. |
I'm pretty sure I've read something about a "Changed" queue where the system gets called over and over again until it's empty. That should kinda solve it?
I can see the value in the self-recursing queue of Changed objects that @fopsdev suggests, but I'm not sure it's a great fit here. The self-recursing Changed queue seems hard to fit in:
How about firing events at the end of a frame before clearing the marker bits?
It could potentially be a large number of events |
I don't think this gets around the shortcomings of event-based change detection: namely, that you can't elegantly have several consumers of them. If you have two or more systems that come before your change-generating system, your first system will eat the event and the second one never encounters it.

A simple double-buffer, where we keep around two copies of Changed, one for the current frame and one for the last, seems like a strictly better version of this. But I'm still nervous about the complexity and fragility of actually working with it. With a double-buffer, there's no way of knowing whether we already processed a change in the last time step, resulting in double-processing of changes that occurred in preceding systems. To get around this, we could only process events that occurred in the last time step, but then you can't handle antecedent systems, and your code breaks again if the change-generating system gets moved. You could bypass this by attaching a unique ID to each change, but then you're left with the overhead of tracking which changes have been seen in every change-detecting system, which is a giant mess and probably very costly.

I'm increasingly convinced that solution 6, a ring buffer, is the only feasible way to handle this. All of the other solutions seem to introduce either subtle logic bugs or very manual, fragile change detection à la events, which is prone to reintroduce those exact same logic bugs when our end users try to handle the changes on their own.
But you can, by having an event reader local to your system: it will keep its own track of which events it has read, and let other systems read them too. I think a ring buffer has the same issue as a double buffer: you'll have to check that the entity still exists if you try to use it in the next frame.
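For readers unfamiliar with the per-reader cursor idea mentioned here, a minimal sketch may help. This is a deliberate simplification, not Bevy's actual `Events`/`EventReader` implementation, and all names are hypothetical; the point is only that per-reader cursors let multiple systems each observe every event, so nothing gets "eaten":

```rust
// A growing event buffer plus per-reader cursors. Each reader remembers the
// id of the next event it hasn't seen, so two readers can both read the same
// events without consuming them for each other.
struct Events<T> {
    events: Vec<T>, // all buffered events (index = event id)
    oldest: usize,  // id of the oldest event still considered live
}

struct EventReader {
    cursor: usize, // id of the next unread event, local to one system
}

impl<T> Events<T> {
    fn new() -> Self {
        Self { events: Vec::new(), oldest: 0 }
    }
    fn send(&mut self, event: T) {
        self.events.push(event);
    }
    // End-of-frame maintenance: expire old events. A real implementation
    // would also compact the Vec instead of letting it grow.
    fn update(&mut self) {
        self.oldest = self.events.len();
    }
}

impl EventReader {
    fn new() -> Self {
        Self { cursor: 0 }
    }
    // Yield every live event this reader hasn't seen yet, then advance.
    fn iter<'a, T>(&mut self, events: &'a Events<T>) -> impl Iterator<Item = &'a T> {
        let start = self.cursor.max(events.oldest);
        self.cursor = events.events.len();
        events.events[start..].iter()
    }
}

fn main() {
    let mut events = Events::new();
    let mut reader_a = EventReader::new();
    let mut reader_b = EventReader::new();

    events.send(1u32);
    events.send(2u32);

    // Both systems see both events: nothing is "eaten".
    assert_eq!(reader_a.iter(&events).count(), 2);
    assert_eq!(reader_b.iter(&events).count(), 2);
    // Re-reading yields nothing new for the same reader.
    assert_eq!(reader_a.iter(&events).count(), 0);

    // Next frame: expired events are skipped, new ones are seen once.
    events.update();
    events.send(3u32);
    assert_eq!(reader_b.iter(&events).count(), 1);
    println!("ok");
}
```

As noted in the surrounding discussion, the cost of this design is one cursor per reading system, plus the expiry policy deciding how long events stay live.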
Ah cool, I was mistaken on exactly how Bevy's event handling works then. That involves keeping track of a separate list per system though, right?

The issue I raised in my previous comment is distinct from this :) The entities-potentially-not-existing issue raised by you (and cart in his original comments) isn't unique to ring buffers or double buffers. It actually crops up in our current Changed implementation in 0.4. Entities can be deleted mid-frame via commands: they're processed at the end of each stage, so you can easily try to access an entity that no longer exists if you're just operating off the Changed list.
Each system only keeps a counter. There is a |
Here are my thoughts as I start to do some more concrete design work on these proposals, looking at component flags as our lens. Component flags are essentially metadata, attached to a single entity-component unit of data. Desiderata
I would say that overhead on non-flagged components should weigh at about 10 times more than overhead on flagged components, as most components will be unflagged.
**System-ordering properties**

In general, the order in which systems run is neither deterministic nor linear. At first blush, this would seem to make hitting property 8 very challenging: if our systems are running in an unpredictable order, how can we be sure that a single flag always persists exactly long enough to be seen once and no more by each relevant system? If the order was fully unconstrained, for any fixed persistence window, the change may be seen 0, 1 or 2 times by another system. We could get around this by storing a list of…

But, any valid schedule has two much stronger properties:
This property is fundamental to how games in Bevy should work, rather than a quirk in the current scheduler. Without it, your behavior is fundamentally not reproducible, as you could get data-processing operations happening out of order. As I try to show in my next comment, we can use these properties to persist our component flags for "exactly one tick". If we hang on to our component flags until the next time the originating system is run, they will have been seen by every other relevant system (that reads or writes to the associated entity-component) precisely once. This leads us to another constraint when considering property 9: any system which can write component flags must have write-access to the described entity-component. Similarly, any system which reads component-flags must have read-access to the described entity-component. This makes sense: component flags are essentially a type-agnostic extension of the data contained in every component. We need to wait for changes to them to resolve before we can be confident in using them. |
A simple "one tick" component flag proposal
And that's it! We're guaranteed to see each change exactly once for all other relevant systems because of the total order given in the post above: component flags can only be set (either through overwriting or erasing) in systems that have write-access to the entity-component in question. If we overwrite it with a newer flag, we're still processing that data when we should, and not wasting any effort (because we need to process the longer lifetime for the fresher system).

**Notes**
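To make the "one tick" lifetime concrete, here is a toy sketch of the persist-until-the-originating-system-runs-again rule, assuming the total order over writers described above. All type and method names are hypothetical, not an actual Bevy API:

```rust
// A flag records *which* system (by its index in the total order) set it, and
// is cleared the next time that same system runs. Under a fixed total order,
// every other relevant system therefore observes the flag exactly once.
#[derive(Clone, Copy, PartialEq)]
enum Flag {
    Clear,
    SetBy(usize), // index of the writing system in the total order
}

struct TrackedComponent {
    value: i32,
    flag: Flag,
}

impl TrackedComponent {
    // A system with index `system` mutates the component: overwrite the flag.
    fn mutate(&mut self, system: usize, value: i32) {
        self.value = value;
        self.flag = Flag::SetBy(system);
    }
    // Called when `system` starts: if it set the flag a full cycle ago,
    // the flag has now been seen by everyone, so retire it.
    fn begin_system(&mut self, system: usize) {
        if self.flag == Flag::SetBy(system) {
            self.flag = Flag::Clear;
        }
    }
    fn changed(&self) -> bool {
        self.flag != Flag::Clear
    }
}

fn main() {
    let mut c = TrackedComponent { value: 0, flag: Flag::Clear };

    // Tick 1: system 1 mutates; system 2 (later in the order) sees the change.
    c.begin_system(1);
    c.mutate(1, 42);
    c.begin_system(2);
    assert!(c.changed());
    assert_eq!(c.value, 42);

    // Tick 2: system 0 (earlier in the order) also sees it exactly once...
    c.begin_system(0);
    assert!(c.changed());

    // ...then system 1 runs again and the flag is retired.
    c.begin_system(1);
    assert!(!c.changed());
    println!("ok");
}
```

This also illustrates why the proposal needs the write-access constraint: overwriting the flag with a fresher `SetBy` is only safe if every writer participates in the same total order.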
To expand on the tick-tock system mentioned above:
(This idea was courtesy of @alercah). Rather than having to traverse through these component flags one at a time like in the…

I'm not sure if that optimization would be net positive; I worry that it would make filtering queries slower, since the flags wouldn't be stored aligned. |
With the current executor the order is deterministic and linear within exactly the constraints you listed; it even says so.

The new parallel executor in #1144 will replace the implicit system ordering with explicit dependencies - you won't be able to generate any sort of "total order over the systems that can write to the flagged component", which kind of makes the rest of the proposal fall apart. You will, however, have a different sort of order: the aforementioned explicit dependencies. Fundamentally, they allow the user to tell Bevy that "this system has to run after that system", which is a less convoluted way to say "this system must observe the side effects of that system". Making a change to a component or a resource is a side effect; how the rest of the systems run in relation to each other is irrelevant to this specific dependency relation.

An apparent "side problem": changes to a component can happen by an unrelated system, after running the dependency but before running the dependant. The solution is to do nothing, because that's not our problem - if the user cares about the dependant observing side effects of an unrelated system, they should specify their dependency tree to exclude that situation.

Regardless, all of that is largely irrelevant to the problem this issue brings up. The problem is not "how can we improve our light-weight change tracking to also handle multi-frame changes" - that's not what the mechanism is for; it's a filter that enables fine-grained short-circuiting of query iterations, and should be left as just that. The problem is "should we introduce stateful (beefier, less transient) change tracking, and if so, how". |
Thanks for the explanation of the new scheduler: I finally understand what the proposed code does, and definitely agree with you that my proposed design here is incompatible with it. I am pretty firmly of the opinion that we should not go ahead with that scheduling algorithm but that's a better topic for #1184.
I don't think I agree on this point. The existing change-tracking is light-weight, but has unintuitive behavior, results in emergent impossible-to-resolve circular dependencies, is very hard to debug if anything goes wrong and imposes a performance overhead on every component, whether or not it's tracked. I don't believe we should offer change tracking in its current form: while it will do what you want for simple demos and perform well on benchmarks, it makes code bases incredibly brittle by restricting ordering significantly, and is very tricky to catch when it breaks due to the heavily non-local and unpredictable path from changes to detection. |
I still think it does have value: the aforementioned fine-grained short-circuiting of query iterations. As in, "I want to run this piece of code on all things in this query, but only if that one component has changed" - for things like synchronizing external data and what not; again, it's a filter. (EDIT: there might be a more fitting implementation if that's all we want from it, though) What we shouldn't do is advertise it as a control flow mechanism. |
Hmm, I can see its value for cases where the cost of tracking changes at higher granularity / duration is relatively high. In any case, we should expect most of the operations that use…

In that case, we can simply persist…

This is much better default behavior than before, especially if we can make change tracking only apply to the relevant components. For cases where you really don't want to be repeating work for logic or performance reasons, we may still want to consider implementing something heavier. |
I wonder if it would make sense to tweak the name of `Changed` to reflect this distinction, e.g. `ChangedThisFrame`. It's easy to forget that `Changed` works differently than events. |
I like the idea of changing the name of the current behavior to `ChangedThisFrame` if we stick with the current behavior :)
I propose we could have both unreliable same-frame, and reliable next-frame change detection (Edit: here reliable means triggers exactly once). Since the reliable version completely ignores the same-frame changes, it is almost as lightweight as the current change detection. Besides this caveat, I think this solves most issues (system-order-dependence, and two systems detecting each other's changes). It would make for a great tool while prototyping a game, since the reliable versions can be easily upgraded to the same-frame version later on as needed.

More precisely, we would change the ComponentFlags from:

```rust
bitflags! {
    pub struct ComponentFlags: u8 {
        const ADDED = 1;
        const MUTATED = 2;
    }
}
```

to:

```rust
bitflags! {
    pub struct ComponentFlags: u8 {
        const ADDED = 1;
        const JUST_ADDED = 2;
        const MUTATED = 4; // must be a distinct bit: 3 would alias ADDED | JUST_ADDED
        const JUST_MUTATED = 8;
    }
}
```

(Of course the naming is subject to debate.)
The memory footprint would not change since |
Yes! The above suggestion is semantically the same as "double buffering", but without having any effect on memory use. These are 1-bit flags we are dealing with, and we only have 2 of them. The single byte of memory we have to use to store them has plenty of space! We could simply use 2 more bits out of that byte! No extra memory needed. |
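To illustrate why this costs no extra memory or time: with a suitable bit layout, the end-of-frame clear becomes a single mask-and-shift. The layout below is an assumption (each `JUST_` bit must sit one position above its reliable counterpart for the shift to work); it is a sketch of the idea, not Bevy's actual flag handling:

```rust
// Assumed bit layout: each JUST_ (same-frame, unreliable) bit sits one
// position above its reliable (previous-frame) counterpart.
const ADDED: u8 = 0b0001;
const JUST_ADDED: u8 = 0b0010;
const MUTATED: u8 = 0b0100;
const JUST_MUTATED: u8 = 0b1000;

fn on_mutate(flags: &mut u8) {
    *flags |= JUST_MUTATED;
}

// End-of-frame maintenance: promote this frame's JUST_ bits into the
// reliable bits and drop last frame's reliable bits - one mask, one shift.
fn end_of_frame(flags: &mut u8) {
    *flags = (*flags & (JUST_ADDED | JUST_MUTATED)) >> 1;
}

fn main() {
    // Sanity check on the layout assumption.
    assert_eq!(JUST_ADDED >> 1, ADDED);

    let mut flags = 0u8;
    on_mutate(&mut flags);
    assert_ne!(flags & JUST_MUTATED, 0); // same-frame (flaky) detection
    end_of_frame(&mut flags);
    assert_ne!(flags & MUTATED, 0);      // next-frame (reliable) detection
    assert_eq!(flags & JUST_MUTATED, 0);
    end_of_frame(&mut flags);
    assert_eq!(flags, 0);                // fully cleared after two frames
    println!("ok");
}
```

All four flags still fit in the single byte already being stored per component, which is the whole appeal of this variant.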
I'm going to dump my thoughts verbatim from discord:
We'd have to be careful around looping criteria, to make sure the "counter has been decremented for this system" state would be properly managed, although we could just not decrement it every time after the first and document that you have to manage your own steady-state, which seems like a reasonable limitation. In essence, this makes checking…

Also, if this doesn't make any sense, that is expected. |
What about adding a count for each entity describing when it was last changed and then have each system store the last count for which it was run. |
This sounds like it might be reasonable. I am trying to get a sense of the memory requirements this would have. This would just replace the flags, right? / These counters would be in exactly the same place where we currently have the flags? If so, then that just expands it from 1 byte to 8 (or 4, or 2 - could it be made to work with an overflowing counter? not sure...), which sounds like it wouldn't be a huge amount of overhead. Also, I think the "just added" case could be implemented by special-casing the zero value (a newly added component would have a zero counter). If implemented this way, then…

The real benefit of this proposal is that it does not split change detection into two ("reliable but next frame" vs "same frame but unreliable"), which is the main disadvantage of the "double buffering" approaches discussed before. This might be worth the extra memory overhead. This approach is still cheap on computations / does not require complicated tracking algorithms. |
Correct
A u32 would give 828.5 days before overflowing, given a single increase per frame; for a single increase per stage, it is likely to overflow in a reasonable amount of time. I don't really think it would be possible to overflow - at least not easily. A system may not run for more than one full cycle of the counter, yet still need to see the change when it runs again. And yes, I don't think a u64 is much overhead. I expect the actual components to often be much bigger.
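Spelling out the arithmetic behind those figures (the 60 FPS frame rate is an assumption for illustration, as is the 500-increments-per-frame figure for the per-system/per-stage case):

```rust
fn main() {
    // One increment per frame at an assumed 60 FPS.
    let seconds_to_wrap_u32 = u32::MAX as f64 / 60.0;
    let days = seconds_to_wrap_u32 / 86_400.0;
    assert!((days - 828.5).abs() < 0.1); // the 828.5-day figure above

    // One increment per system/stage shrinks this fast: with, say, 500
    // increments per frame, a u32 wraps in under two days of uptime.
    let days_per_system = days / 500.0;
    assert!(days_per_system < 2.0);

    // A u64 at the same rate would take billions of years to wrap.
    let years_u64 = u64::MAX as f64 / 60.0 / 86_400.0 / 365.25;
    assert!(years_u64 > 1.0e9);
    println!("ok");
}
```

So a u32 is marginal once the counter is bumped per system rather than per frame, while a u64 is effectively overflow-free.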
That would be a nice way of handling it. |
@cart Would love to hear your opinion w.r.t. these last two proposals:

**1. Flag-doubling (@Davier's proposal)**

This is a simple and straightforward extension of the existing implementation. Semantically equivalent to double buffering, but implemented by just doubling the bit flags, to track changes across 2 frames. This should introduce no additional overhead over the existing implementation. All the flags still fit in 1 byte, and the clear operation is replaced with a bit mask and shift (which are basically free on modern CPUs). The disadvantage is in UX. We would need to split the change detection query filters into 2.

**2. Counters (@bjorn3's proposal, with my revisions)**

This would allow keeping the UX simple, with only one change detection filter that always works. However, it would have greater memory usage and some perf impact (no idea how much). The 1-byte-per-component flags would be replaced with an 8-byte…

The logic would be something like this (in pseudocode):

```rust
static GLOBAL_COUNTER: AtomicU64 = AtomicU64::new(1);

fn on_component_mutation() {
    component.counter = GLOBAL_COUNTER;
}

fn is_component_mutated() -> bool {
    component.counter > system.counter
}

fn after_run_system() {
    GLOBAL_COUNTER += 1;
    system.counter = GLOBAL_COUNTER;
}
```

Every time the scheduler runs a system, it increments the global atomic counter and updates the system's counter. The per-component counter is updated on mutation. The two are compared for change detection.
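The pseudocode above can be fleshed out into a small runnable sketch. One detail worth flagging: here the system's counter is recorded *before* the global counter is bumped; taking the value after the bump would make a change written later in the same frame compare equal to the system's counter and be missed. All names are illustrative, not an actual Bevy API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Global counter, bumped once per system run.
static GLOBAL_COUNTER: AtomicU64 = AtomicU64::new(1);

struct Tracked {
    counter: u64, // per-component: time of last mutation
}

struct SystemState {
    counter: u64, // per-system: time of last run
}

fn mutate(component: &mut Tracked) {
    component.counter = GLOBAL_COUNTER.load(Ordering::Relaxed);
}

fn is_mutated(component: &Tracked, system: &SystemState) -> bool {
    component.counter > system.counter
}

fn after_run(system: &mut SystemState) {
    // Record this run's time, then advance the clock for the next system
    // (fetch_add returns the value *before* the increment).
    system.counter = GLOBAL_COUNTER.fetch_add(1, Ordering::Relaxed);
}

fn main() {
    let mut component = Tracked { counter: 0 };
    let mut writer = SystemState { counter: 0 };
    let mut detector = SystemState { counter: 0 };

    // Frame 1: detector runs *before* writer, so it cannot see the change yet.
    after_run(&mut detector);
    mutate(&mut component);
    after_run(&mut writer);

    // Frame 2: the change is detected despite the "wrong" ordering...
    assert!(is_mutated(&component, &detector));
    after_run(&mut detector);

    // Frame 3: ...and exactly once.
    assert!(!is_mutated(&component, &detector));
    println!("ok");
}
```

The key property the assertions exercise is exactly the one this thread is after: detection is independent of where the detecting system sits in the frame, and each change is seen precisely once.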
This would obsolete…

Since this implementation would have a greater perf cost, @alice-i-cecile suggested that it could be mitigated in the future by analyzing the app to collect information about which components and systems need change detection, to disable it where it isn't needed.

BTW, we don't need to worry about the…

So, we have a proposal that is simple in its implementation, but more complex to use, and a proposal that is complex in its implementation, but simple to use. I personally favor the first proposal (due to its elegant simplicity and low overhead), but the second may be worth considering, too. The explicit tradeoff with choosing which type of change detection you want (…
As a small comment on the second proposal: I don't think that losing access to |
BTW, another idea for…

Now that we are gonna have per-system counters anyway, that gives us the ability to have a per-system flag, too, for free. Not just on the components. It would halve the range of the counters, but as we established before, we have no reason to worry about overflow. |
The idea here was to replace…

Your proposal below works well though, and I think it resolves the concern there. |
Having reliable change detection would make the transition to the new scheduler way less error-prone, so it should probably be done before the 0.5 release. I believe we have reached consensus (otherwise please correct me) on…

Here are a few comments about addition detection:
I expect the main use case of
Agreed. Moreover, clearing the Added value/flag on first mutation would break the use case I just described (in addition to the multiple-post-processing case @alice-i-cecile mentioned).
We can't rely on users to manually send the appropriate event when they create some component, that would be too brittle. However, events are currently not reliable for systems that skip frames, so they would have little advantage over a flag.
Could you expand on this? I don't see how the per-system flag would work, since systems typically handle a lot of components. What I can see is using one bit of the component counters as the current…

Interestingly, I started writing this comment in favour of events, but changed my mind along the way. I now agree with the use of an…

PS: in order to get some perf improvement for the…
Can we simply store the… That would make…

```rust
static GLOBAL_COUNTER: AtomicU16 = AtomicU16::new(1);

fn on_component_addition() {
    component.added_counter = GLOBAL_COUNTER;
}

fn on_component_mutation() {
    component.mutated_counter = GLOBAL_COUNTER;
}

fn is_component_added() -> bool {
    (component.added_counter - system.counter) & 0x7FFF > 0
}

fn is_component_mutated() -> bool {
    (component.mutated_counter - system.counter) & 0x7FFF > 0
}

fn get_component_lifetime() -> (u16, bool) {
    GLOBAL_COUNTER.overflowing_sub(component.added_counter)
}

fn after_run_system() {
    // integer overflow is OK
    GLOBAL_COUNTER.overflowing_add(1);
    system.counter = GLOBAL_COUNTER;
}
```
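One subtlety with the masked subtraction above: as written, *any* nonzero difference between the two counters - including a component counter that is older than the system's - passes the check. A conventional sequence-number comparison (the same trick used for TCP sequence numbers; RFC 1982-style serial arithmetic) only treats differences in (0, 2^15) as "newer" and still survives wraparound. A sketch of that variant, offered as an adaptation rather than a claim about what the proposal above intended:

```rust
// Wraparound-safe "is a newer than b" for u16 counters, valid as long as the
// two readings are fewer than 2^15 increments apart.
fn is_newer(a: u16, b: u16) -> bool {
    let d = a.wrapping_sub(b);
    d != 0 && d < 0x8000
}

fn main() {
    assert!(is_newer(10, 7));     // mutated after the system last ran
    assert!(!is_newer(7, 7));     // no change since the system ran
    assert!(!is_newer(7, 10));    // older, not newer
    assert!(is_newer(2, 65_533)); // newer across the wrap: wrapped diff = 5
    println!("ok");
}
```

The 2^15 window is the price of wraparound safety: a system must not skip more than half the counter's range between runs, which connects back to the overflow discussion above.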
It would solve the problem, but at a cost of 4 more bytes per component. I don't know if it's worth it. Unlike mutation, addition only happens once per component, so we should be able to find a more efficient way to do it (and I still believe reliable events will be a good solution when we have them).
The counter is increased after each system runs, and the number of systems that run in a frame is variable. Would that measure be useful? Also, for a large app at high FPS, the lifetime could overflow in hours. With all that said, I would be in favour of doing that as a short-term reliable solution, and improving it later. |
I agree, reliable events do make more sense in that case.
I believe the counter should be incremented only for systems that mutate the component (it's a mutation timestamp after all). That should slow down the counter a lot, to the point where overflows occur very infrequently (if at all). That said, I see how measuring lifetimes this way is likely non-deterministic (and therefore useless). |
My only reservation with the generation counter solution is that it's not exact: if we have the same set of systems and components in a given frame, it should act as a pure function and have the exact same end state regardless of how we get there. If we start having indeterminate behavior because of scheduling / counter overflow / missed additions etc., we potentially won't have the ability to reliably compute the same frame / entities across a network, or in long-running tasks. Any solution we come up with probably needs these constraints on it:
Not sure what that solution ends up looking like, but I would definitely die on the hill of reliable/predictable computation |
The planned solution (proposal 5 in #1471) should be reliable. To answer your concerns:
I don't see why, but it would indeed be an issue. Could you expand on this? |
💯 I was literally just starting to write up essentially the same proposal and came across this thread, so glad to see this is already in the works. Events have the same problem here, where the buffer gets cleared before it is guaranteed that all readers have had a chance to consume the events (i.e. in the case of a fixed-timestep system). Would the implementation for Flags be generalized so that cases like Events could be implemented in a similar way (e.g. so the events can be kept in the buffer until all consuming systems have had a chance to read them)? |
Events being non-persistent was one of the original selling points of bevy, it's an intentional design decision. It allows events to be performant, without concern for allocations or potential runaway memory usage. I think they should be kept as they are. |
@jamadazi I believe you that it was an intentional decision, but the current event architecture is incompatible with RunCriteria, and I'm not seeing how this is a trade-off for performance/memory efficiency.
I think this is more of a problem than you're making it out to be / it creates new problems. For example, consider an app that listens to some high-volume event (collisions), both in the systems that run during the Menu state and the InGame state. Without additional logic to "unsubscribe", the Menu event listener will cause the collision event queue to grow unbounded. "Runaway memory usage" will also happen when the RunCondition is infrequent (it doesn't need to be never). Ex: "every 20 minutes, check for collision events".
This isn't clear cut imo.
The current impl is better for events that are either burst-ey or that happen in small volumes. Your suggested impl is better for events that occur at the same volume each update. |
This is a good point. I was thinking of this more from the perspective of two systems that are both "consistently running" but not necessarily every frame (i.e. fixed timestep systems). The case you describe for disjoint states as run conditions definitely runs into runaway memory problems.
I agree that it isn't clear cut; however, for burst-ey events you're going to end up with multiple allocations on the frames where you have a burst of events (if you have 64 events you'll have 4 allocations). I think it's arguably better to keep the buffer allocated even for the frames where you're getting 0 events, for two reasons.
I'll have to think on this some more, but it seems like there are two use cases: consuming "new events" and consuming "events that happened since I last ran". But even in the latter case you might have nesting based on the disjoint states (menu vs ingame). |
A draft implementation of Proposal 5 (Generation-Counter Change Detection) from #1471 is now available for review, courtesy of @Davier. |
# Problem Definition

The current change tracking (via flags for both components and resources) fails to detect changes made by systems that are scheduled to run earlier in the frame than they are. This issue is discussed at length in [#68](#68) and [#54](#54).

This is very much a draft PR, and contributions are welcome and needed.

# Criteria

1. Each change is detected at least once, no matter the ordering.
2. Each change is detected at most once, no matter the ordering.
3. Changes should be detected the same frame that they are made.
4. Competitive ergonomics. Ideally does not require opting-in.
5. Low CPU overhead of computation.
6. Memory efficient. This must not increase over time, except where the number of entities / resources does.
7. Changes should not be lost for systems that don't run.
8. A frame needs to act as a pure function. Given the same set of entities / components it needs to produce the same end state without side-effects.

**Exact** change-tracking proposals satisfy criteria 1 and 2. **Conservative** change-tracking proposals satisfy criteria 1 but not 2. **Flaky** change-tracking proposals satisfy criteria 2 but not 1.

# Code Base Navigation

There are three types of flags:

- `Added`: A piece of data was added to an entity / `Resources`.
- `Mutated`: A piece of data was able to be modified, because its `DerefMut` was accessed.
- `Changed`: The bitwise OR of `Added` and `Mutated`.

The special behavior of `ChangedRes`, with respect to the scheduler, is being removed in [#1313](#1313) and does not need to be reproduced.

`ChangedRes` and friends can be found in "bevy_ecs/core/resources/resource_query.rs". The `Flags` trait for Components can be found in "bevy_ecs/core/query.rs". `ComponentFlags` are stored in "bevy_ecs/core/archetypes.rs", defined on line 446.

# Proposals

**Proposal 5 was selected for implementation.**

## Proposal 0: No Change Detection

The baseline, where computations are performed on everything regardless of whether it changed.


**Type:** Conservative

**Pros:**

- already implemented
- will never miss events
- no overhead

**Cons:**

- tons of repeated work
- doesn't allow users to avoid repeating work (or monitoring for other changes)

## Proposal 1: Earlier-This-Tick Change Detection

The current approach as of Bevy 0.4. Flags are set, and then flushed at the end of each frame.

**Type:** Flaky

**Pros:**

- already implemented
- simple to understand
- low memory overhead (2 bits per component)
- low time overhead (clear every flag once per frame)

**Cons:**

- misses systems based on ordering
- systems that don't run every frame miss changes
- duplicates detection when looping
- can lead to unresolvable circular dependencies

## Proposal 2: Two-Tick Change Detection

Flags persist for two frames, using a double-buffer system identical to that used in events. A change is observed if it is found in either the current frame's list of changes or the previous frame's.

**Type:** Conservative

**Pros:**

- easy to understand
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)

**Cons:**

- can result in a great deal of duplicated work
- systems that don't run every frame miss changes
- duplicates detection when looping

## Proposal 3: Last-Tick Change Detection

Flags persist for two frames, using a double-buffer system identical to that used in events. A change is observed if it is found in the previous frame's list of changes.

**Type:** Exact

**Pros:**

- exact
- easy to understand
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)

**Cons:**

- change detection is always delayed, possibly causing painful chained delays
- systems that don't run every frame miss changes
- duplicates detection when looping

## Proposal 4: Flag-Doubling Change Detection

Combine Proposal 2 and Proposal 3. 
Differentiate between `JustChanged` (current behavior) and `Changed` (Proposal 3). Pack this data into the flags according to [this implementation proposal](#68 (comment)).

**Type:** Flaky + Exact

**Pros:**

- allows users to access both behaviors
- easy to implement
- low memory overhead (4 bits per component)
- low time overhead (bit mask and shift every flag once per frame)

**Cons:**

- users must specify the type of change detection required
- still quite fragile to system ordering effects when using the flaky `JustChanged` form
- cannot get immediate + exact results
- systems that don't run every frame miss changes
- duplicates detection when looping

## [SELECTED] Proposal 5: Generation-Counter Change Detection

A global counter is increased after each system is run. Each component saves the time of last mutation, and each system saves the time of last execution. Mutation is detected when the component's counter is greater than the system's counter. Discussed [here](#68 (comment)). How to handle addition detection is unsolved; the current proposal is to use the highest bit of the counter as in proposal 1.

**Type:** Exact (for mutations), flaky (for additions)

**Pros:**

- low time overhead (set component counter on access, set system counter after execution)
- robust to systems that don't run every frame
- robust to systems that loop

**Cons:**

- moderately complex implementation
- must be modified as systems are inserted dynamically
- medium memory overhead (4 bytes per component + system)
- unsolved addition detection

## Proposal 6: System-Data Change Detection

For each system, track which system's changes it has seen. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.

**Type:** Exact

**Pros:**

- exact
- conceptually simple

**Cons:**

- requires storing data on each system
- implementation is complex
- must be modified as systems are inserted dynamically

## Proposal 7: Total-Order Change Detection

Discussed [here](#68 (comment)). 
This proposal is somewhat complicated by the new scheduler, but I believe it should still be conceptually feasible. This approach is only worth fully designing and implementing if Proposal 5 fails in some way.

**Type:** Exact

**Pros:**

- exact
- efficient data storage relative to other exact proposals

**Cons:**

- requires access to the scheduler
- complex implementation and difficulty grokking
- must be modified as systems are inserted dynamically

# Tests

- We will need to verify properties 1, 2, 3, 7 and 8. Priority: 1 > 2 = 3 > 8 > 7
- Ideally we can use identical user-facing syntax for all proposals, allowing us to re-use the same syntax for each.
- When writing tests, we need to carefully specify order using explicit dependencies.
- These tests will need to be duplicated for both components and resources.
- We need to be sure to handle cases where ambiguous system orders exist.

`changing_system` is always the system that makes the changes, and `detecting_system` always detects the changes. The component / resource changed will be simple boolean wrapper structs.

## Basic Added / Mutated / Changed

2 x 3 design:

- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs before `detecting_system`
- verify at the end of tick 2

## At Least Once

2 x 3 design:

- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs after `detecting_system`
- verify at the end of tick 2

## At Most Once

2 x 3 design:

- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs once before `detecting_system`
- increment a counter based on the number of changes detected
- verify at the end of tick 2

## Fast Detection

2 x 3 design:

- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs before `detecting_system`
- verify at the end of tick 1

## Ambiguous System Ordering Robustness

2 x 3 x 2 design:

- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs [before/after] `detecting_system` in tick 1
- `changing_system` runs [after/before] `detecting_system` in tick 2

## System Pausing

2 x 3 design:

- Resources vs. Components
- Added vs. Changed vs. Mutated
- `changing_system` runs in tick 1, then is disabled by run criteria
- `detecting_system` is disabled by run criteria until it is run once during tick 3
- verify at the end of tick 3

## Addition Causes Mutation

2 design:

- Resources vs. Components
- `adding_system_1` adds a component / resource
- `adding_system_2` adds the same component / resource
- verify the `Mutated` flag at the end of the tick
- verify the `Added` flag at the end of the tick

First check tests for: #333
Second check tests for: #1443

## Changes Made By Commands

- `adding_system` runs in Update in tick 1, and sends a command to add a component
- `detecting_system` runs in Update in tick 1 and 2, after `adding_system`
- We can't detect the changes in tick 1, since they haven't been processed yet
- If we were to track these changes as being emitted by `adding_system`, we can't detect the changes in tick 2 either, since `detecting_system` has already run once after `adding_system` :(

# Benchmarks

See: [general advice](https://github.com/bevyengine/bevy/blob/master/docs/profiling.md), [Criterion crate](https://github.com/bheisler/criterion.rs)

There are several critical parameters to vary:

1. entity count (1 to 10^9)
2. fraction of entities that are changed (0% to 100%)
3. cost to perform work on changed entities, i.e. workload (1 ns to 1 s)

1 and 2 should be varied between benchmark runs. 3 can be added on computationally.

We want to measure:

- memory cost
- run time

We should collect these measurements across several frames (100?) to reduce bootup effects and accurately measure the mean, variance and drift. 
Entity-component change detection is much more important to benchmark than resource change detection, due to the orders-of-magnitude higher number of pieces of data.

No change detection at all should be included in benchmarks as a second control, for cases where missing changes is unacceptable.

## Graphs

1. y: performance, x: log_10(entity count), color: proposal, facet: performance metric. Set cost to perform work to 0.
2. y: run time, x: cost to perform work, color: proposal, facet: fraction changed. Set number of entities to 10^6.
3. y: memory, x: frames, color: proposal

# Conclusions

1. Is the theoretical categorization of the proposals correct according to our tests?
2. How does the performance of the proposals compare without any load?
3. How does the performance of the proposals compare with realistic loads?
4. At what workload does more exact change tracking become worth the (presumably) higher overhead?
5. When does adding change-detection to save on work become worthwhile?
6. Is there enough divergence in performance between the best solutions in each class to ship more than one change-tracking solution?

# Implementation Plan

1. Write a test suite.
2. Verify that tests fail for existing approach.
3. Write a benchmark suite.
4. Get performance numbers for existing approach.
5. Implement, test and benchmark various solutions using a Git branch per proposal.
6. Create a draft PR with all solutions and present results to team.
7. Select a solution and replace existing change detection.

Co-authored-by: Brice DAVIER <[email protected]>
Co-authored-by: Carter Anderson <[email protected]>
Closed by #1471!!
The current change tracking system is very lightweight, both for event generation and consumption (which is why we can enable it by default), but it has the following problem:
In most cases, this can be worked around via system registration order and stages, but I anticipate some cases that cannot work around this behavior, as well as general confusion from users like: "why isn't my system running ... I'm definitely mutating this component?"
I think it would be useful to (optionally) allow stateful "order independent" change tracking. I posted some ideas here: #54