
[loader-v2] Fixing global cache reads & read-before-write on publish #15285

Merged
merged 5 commits into main from george/loader-fixes on Nov 18, 2024

Conversation

georgemitenkov (Contributor) commented Nov 15, 2024

Description

  • Capture global cache reads as well. Resolve first to captured reads (per transaction), then to the global cache, then to the per-block cache, and finally to the state view (see the sketch after this list).
  • Issue read-before-write for modules at commit.
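
For illustration only, a minimal sketch of that lookup order; every type and field name below is a placeholder, not the actual API:

use std::collections::HashMap;

// Placeholder for the cached module representation; the real code stores
// Arc<ModuleCode<DC, VC, S>>.
type Module = String;

struct ModuleResolver {
    captured_reads: HashMap<String, Module>,  // per-transaction captured reads
    global_cache: HashMap<String, Module>,    // cross-block (global) module cache
    per_block_cache: HashMap<String, Module>, // per-block cache (MVHashMap in the real code)
    state_view: HashMap<String, Module>,      // storage fallback
}

impl ModuleResolver {
    // Resolution order from the bullet above: captured reads first, then the
    // global cache, then the per-block cache, then the state view.
    fn resolve(&self, key: &str) -> Option<&Module> {
        self.captured_reads
            .get(key)
            .or_else(|| self.global_cache.get(key))
            .or_else(|| self.per_block_cache.get(key))
            .or_else(|| self.state_view.get(key))
    }
}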

How Has This Been Tested?

To test, use RUST_MIN_STACK=104857600 cargo test --release --package aptos-executor-benchmark --lib tests::test_publish_transaction

Commenting out

self.remote.read_state_value(&state_key).map_err(|err| {
    let msg = format!(
        "Error when enforcing read-before-write for module {}::{}: {:?}",
        addr, name, err
    );
    PartialVMError::new(StatusCode::STORAGE_ERROR).with_message(msg)
})?;

causes a panic when read-before-write is not satisfied. To test the captured-read changes, a panic was inserted where "[aptos_vm] Transaction breaking invariant violation ... " would otherwise be logged (after a single transaction is executed); with this change the panic is no longer triggered. The number of runs of the test was increased to 10 to ensure we catch those cases.

Key Areas to Review

Type of Change

  • New feature
  • Bug fix
  • Breaking change
  • Performance improvement
  • Refactoring
  • Dependency update
  • Documentation update
  • Tests

Which Components or Systems Does This Change Impact?

  • Validator Node
  • Full Node (API, Indexer, etc.)
  • Move/Aptos Virtual Machine
  • Aptos Framework
  • Aptos CLI/SDK
  • Developer Infrastructure
  • Move Compiler
  • Other (specify)

Checklist

  • I have read and followed the CONTRIBUTING doc
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I identified and added all stakeholders and component owners affected by this change as reviewers
  • I tested both happy and unhappy path of the functionality
  • I have made corresponding changes to the documentation


trunk-io bot commented Nov 15, 2024

⏱️ 12h 18m total CI duration on this PR
Slowest 15 Jobs Cumulative Duration Recent Runs
execution-performance / single-node-performance 7h 3m 🟩🟩🟩🟩🟩 (+6 more)
execution-performance / test-target-determinator 49m 🟩🟩🟩🟩🟩 (+6 more)
test-target-determinator 32m 🟩🟩🟩 (+4 more)
check 28m 🟩🟩🟩 (+5 more)
rust-images / rust-all 19m 🟥🟩
check-dynamic-deps 18m 🟩🟩🟩🟩🟩 (+8 more)
rust-cargo-deny 16m 🟩🟩🟩🟩 (+5 more)
rust-move-tests 13m 🟩
rust-move-tests 13m 🟩
fetch-last-released-docker-image-tag 13m 🟩🟩🟩 (+4 more)
rust-move-tests 12m 🟩
rust-move-tests 12m 🟩
rust-move-tests 12m 🟩
rust-move-tests 10m
rust-move-tests 8m

🚨 1 job on the last run was significantly faster/slower than expected

Job Duration vs 7d avg Delta
execution-performance / single-node-performance 38m 16m +142%


georgemitenkov (PR author) commented Nov 15, 2024

This stack of pull requests is managed by Graphite. Learn more about stacking.

@georgemitenkov georgemitenkov marked this pull request as ready for review November 15, 2024 02:39
@georgemitenkov georgemitenkov requested review from msmouse and igor-aptos and removed request for sasha8 and danielxiangzl November 15, 2024 02:39
@georgemitenkov georgemitenkov added the CICD:run-e2e-tests, CICD:run-execution-performance-test, and CICD:run-execution-performance-full-test labels Nov 15, 2024


Comment on lines 298 to 305
enum ModuleRead<DC, VC, S> {
    /// Read from the cross-block module cache.
-   GlobalCache,
+   GlobalCache(Arc<ModuleCode<DC, VC, S>>),
    /// Read from per-block cache ([SyncCodeCache]) used by parallel execution.
    PerBlockCache(Option<(Arc<ModuleCode<DC, VC, S>>, Option<TxnIndex>)>),
igor-aptos (Contributor) commented Nov 15, 2024

Can you explain why we distinguish reads here based on where we got the data from? Also, what is Option<TxnIndex> in PerBlockCache?

georgemitenkov (PR author):

The Option is for when the module does not exist (not even in StateView).

georgemitenkov (PR author):

Different reads, different validations: we need to check that global cache reads are still valid, and that per-block reads have the same version.

Contributor:

Stupid formatting, it didn't show that I was referring to TxnIndex.

georgemitenkov (PR author):

Ah, None is the storage version.

georgemitenkov (PR author):

Different validation paths: for a global cache read we need to check that the read is still valid in the cache; for per-block reads we go to the MVHashMap. Now, about the storage read: we issue it only when there is a cache miss in the per-block cache, so it gets validated there.

georgemitenkov (PR author):

Basically "storage version" can be later drained into global cache, but otherwise exists only in per-block

Contributor:

So from a validation perspective, there is no distinction.

The distinction is ONLY there to make updating the global cache (i.e. draining into it) faster/cheaper by skipping things that are already there.

Is that correct?

Contributor:

Actually, this could be a useful thing to add as a brief comment.

georgemitenkov (PR author):

Added a comment
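
For reference, a hypothetical paraphrase of what the two variants encode, as discussed in this thread (simplified placeholder types, not the real definition):

enum ModuleRead<M> {
    // Hit in the cross-block (global) cache; revalidated later by checking
    // that the cached entry is still marked valid.
    GlobalCache(M),
    // Hit in the per-block cache:
    //   None                      -> the module does not exist (not even in StateView)
    //   Some((module, None))      -> the "storage version" of the module
    //   Some((module, Some(idx))) -> the version published by transaction `idx` in this block
    PerBlockCache(Option<(M, Option<u32>)>),
}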


@georgemitenkov georgemitenkov force-pushed the george/loader-fixes branch 2 times, most recently from 59e4942 to dc8af3f Compare November 15, 2024 12:22
@georgemitenkov georgemitenkov changed the title Loader fixes [loader-v2] Fixing global cache reads & read-before-write on publish Nov 15, 2024



@@ -661,7 +658,7 @@ where
}

self.module_reads.iter().all(|(key, read)| match read {
-   ModuleRead::GlobalCache => global_module_cache.contains_valid(key),
+   ModuleRead::GlobalCache(_) => global_module_cache.contains_valid(key),
Contributor:

should this whole match be equivalent to:

        self.module_reads.iter().all(|(key, read)| {
            let previous_version = match read {
                ModuleRead::GlobalCache(_) => None, // i.e. storage version
                ModuleRead::PerBlockCache(previous) => previous.as_ref().map(|(_, version)| *version),
            };
            let current_version = per_block_module_cache.get_module_version(key);
            current_version == previous_version
        })

why do we need to update GlobalCache at all while executing a block?

georgemitenkov (PR author):

We do if we read from it first (to know whether an entry is overridden or not). An alternative is to check the lower-level cache first, but that means a performance penalty due to locking.

georgemitenkov (PR author):

The code can be somewhat equivalent, but:

let current_version = per_block_module_cache.get_module_version(key);

causes a prefetch of the storage version by default. We would need to special-case validation to avoid it. And we also end up locking the cache (a shard, in the worst case) instead of checking an atomic bool.
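
A minimal sketch of the atomic-flag idea (all names assumed, not the actual types):

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// A cross-block cache entry carries a validity flag, so revalidating a
// GlobalCache read is a single atomic load rather than a lock on a
// per-block cache shard.
struct GlobalCacheEntry<M> {
    module: Arc<M>,
    valid: AtomicBool,
}

impl<M> GlobalCacheEntry<M> {
    fn contains_valid(&self) -> bool {
        self.valid.load(Ordering::Acquire)
    }

    fn mark_invalid(&self) {
        // Flipped when the module is republished; later reads then miss in
        // the global cache and fall back to the per-block cache.
        self.valid.store(false, Ordering::Release);
    }
}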

Contributor:

This is because we may publish a module that invalidates the global cache entry that's being read, I think.

}

// Otherwise, it is a miss. Check global cache.
Contributor:

Why do we check the global cache before checking state.versioned_map.module_cache?

On rolling commit, are we updating GlobalCache itself?

georgemitenkov (PR author):

We update the global cache at rolling commit: if published keys exist in the global cache, we mark them as invalid. So reads to them result in a cache miss, and we fall back to the MVHashMap where we have placed the write at commit time.
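
A rough sketch of that commit-time step, reusing the hypothetical GlobalCacheEntry from the sketch above (names assumed, not the actual API):

use std::collections::HashMap;

// Mark every republished module as invalid in the cross-block cache, so that
// later reads miss there and fall back to the per-block MVHashMap, which
// already holds the write placed at commit time.
fn on_rolling_commit(
    global_cache: &HashMap<String, GlobalCacheEntry<Vec<u8>>>,
    published: &[String],
) {
    for key in published {
        if let Some(entry) = global_cache.get(key) {
            entry.mark_invalid(); // atomic store; the entry stays in the map
        }
    }
}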

georgemitenkov (PR author):

You can check the versioned cache first, but then you end up acquiring a lock for a potentially non-republished module (publishing is rare). If 32 threads do this for aptos-framework, this is bad.

georgemitenkov (PR author):

So instead, we look up the global cache first, but check an atomic bool flag there (better than a lock), so we optimize for the read case.

Contributor:

I see. Then I would rename PerBlockCache to UnfinalizedBlockCache or something like that, to make it clear that it only ever refers to things before the rolling commit, while GlobalCache is global and updated within the block itself.

(You can do that in a separate PR of course :) )

@georgemitenkov georgemitenkov enabled auto-merge (squash) November 17, 2024 23:01


@georgemitenkov georgemitenkov merged commit 0a16e9e into main Nov 18, 2024
76 of 99 checks passed
@georgemitenkov georgemitenkov deleted the george/loader-fixes branch November 18, 2024 04:24
github-actions bot pushed a commit that referenced this pull request Nov 18, 2024
…15285)

- Enforces read-before-write for module publishes.
- Records all module reads in captured reads, not just per-block.
- Adds a workload + test to publish and call modules.

Co-authored-by: Igor <[email protected]>
(cherry picked from commit 0a16e9e)
Contributor

💚 All backports created successfully

Branch: aptos-release-v1.24

Questions? Please refer to the Backport tool documentation and see the GitHub Action logs for details.


Contributor

✅ Forge suite realistic_env_max_load success on 1ed9b8012565a6779542044695db775941506e20

two traffics test: inner traffic : committed: 14235.58 txn/s, latency: 2794.71 ms, (p50: 2700 ms, p70: 2700, p90: 3000 ms, p99: 3300 ms), latency samples: 5412740
two traffics test : committed: 99.90 txn/s, latency: 1484.26 ms, (p50: 1400 ms, p70: 1500, p90: 1600 ms, p99: 1700 ms), latency samples: 1800
Latency breakdown for phase 0: ["MempoolToBlockCreation: max: 2.007, avg: 1.552", "ConsensusProposalToOrdered: max: 0.335, avg: 0.300", "ConsensusOrderedToCommit: max: 0.400, avg: 0.384", "ConsensusProposalToCommit: max: 0.697, avg: 0.684"]
Max non-epoch-change gap was: 0 rounds at version 0 (avg 0.00) [limit 4], 0.89s no progress at version 2809632 (avg 0.20s) [limit 15].
Max epoch-change gap was: 0 rounds at version 0 (avg 0.00) [limit 4], 8.70s no progress at version 2809630 (avg 8.70s) [limit 15].
Test Ok

Contributor

✅ Forge suite framework_upgrade success on 2bb2d43037a93d883729869d65c7c6c75b028fa1 ==> 1ed9b8012565a6779542044695db775941506e20

Compatibility test results for 2bb2d43037a93d883729869d65c7c6c75b028fa1 ==> 1ed9b8012565a6779542044695db775941506e20 (PR)
Upgrade the nodes to version: 1ed9b8012565a6779542044695db775941506e20
framework_upgrade::framework-upgrade::full-framework-upgrade : committed: 1326.01 txn/s, submitted: 1328.00 txn/s, failed submission: 1.99 txn/s, expired: 1.99 txn/s, latency: 2389.52 ms, (p50: 2100 ms, p70: 2400, p90: 3900 ms, p99: 5400 ms), latency samples: 119780
framework_upgrade::framework-upgrade::full-framework-upgrade : committed: 1350.34 txn/s, submitted: 1353.03 txn/s, failed submission: 2.69 txn/s, expired: 2.69 txn/s, latency: 2274.82 ms, (p50: 2100 ms, p70: 2400, p90: 3300 ms, p99: 4600 ms), latency samples: 120480
5. check swarm health
Compatibility test for 2bb2d43037a93d883729869d65c7c6c75b028fa1 ==> 1ed9b8012565a6779542044695db775941506e20 passed
Upgrade the remaining nodes to version: 1ed9b8012565a6779542044695db775941506e20
framework_upgrade::framework-upgrade::full-framework-upgrade : committed: 1474.30 txn/s, submitted: 1476.58 txn/s, failed submission: 2.28 txn/s, expired: 2.28 txn/s, latency: 2187.59 ms, (p50: 2100 ms, p70: 2400, p90: 3300 ms, p99: 4400 ms), latency samples: 129080
Test Ok

Contributor

✅ Forge suite compat success on 2bb2d43037a93d883729869d65c7c6c75b028fa1 ==> 1ed9b8012565a6779542044695db775941506e20

Compatibility test results for 2bb2d43037a93d883729869d65c7c6c75b028fa1 ==> 1ed9b8012565a6779542044695db775941506e20 (PR)
1. Check liveness of validators at old version: 2bb2d43037a93d883729869d65c7c6c75b028fa1
compatibility::simple-validator-upgrade::liveness-check : committed: 14614.13 txn/s, latency: 1974.12 ms, (p50: 1800 ms, p70: 1900, p90: 2200 ms, p99: 5700 ms), latency samples: 559680
2. Upgrading first Validator to new version: 1ed9b8012565a6779542044695db775941506e20
compatibility::simple-validator-upgrade::single-validator-upgrading : committed: 7722.40 txn/s, latency: 3563.27 ms, (p50: 3700 ms, p70: 4100, p90: 4900 ms, p99: 5300 ms), latency samples: 140560
compatibility::simple-validator-upgrade::single-validator-upgrade : committed: 7780.68 txn/s, latency: 4143.45 ms, (p50: 4300 ms, p70: 4500, p90: 6000 ms, p99: 6200 ms), latency samples: 255280
3. Upgrading rest of first batch to new version: 1ed9b8012565a6779542044695db775941506e20
compatibility::simple-validator-upgrade::half-validator-upgrading : committed: 7472.70 txn/s, latency: 3675.21 ms, (p50: 4100 ms, p70: 4500, p90: 4800 ms, p99: 5000 ms), latency samples: 135820
compatibility::simple-validator-upgrade::half-validator-upgrade : committed: 7379.51 txn/s, latency: 4315.75 ms, (p50: 4500 ms, p70: 4600, p90: 6600 ms, p99: 6800 ms), latency samples: 245020
4. upgrading second batch to new version: 1ed9b8012565a6779542044695db775941506e20
compatibility::simple-validator-upgrade::rest-validator-upgrading : committed: 12147.19 txn/s, latency: 2317.41 ms, (p50: 2600 ms, p70: 2600, p90: 2800 ms, p99: 2900 ms), latency samples: 208120
compatibility::simple-validator-upgrade::rest-validator-upgrade : committed: 6101.60 txn/s, submitted: 6101.79 txn/s, expired: 0.19 txn/s, latency: 2667.60 ms, (p50: 2600 ms, p70: 2800, p90: 3000 ms, p99: 3500 ms), latency samples: 386948
5. check swarm health
Compatibility test for 2bb2d43037a93d883729869d65c7c6c75b028fa1 ==> 1ed9b8012565a6779542044695db775941506e20 passed
Test Ok

ibalajiarun pushed a commit that referenced this pull request Nov 18, 2024
…15285) (#15298)

- Enforces read-before-write for module publishes.
- Records all module reads in captured reads, not just per-block.
- Adds a workload + test to publish and call modules.

Co-authored-by: Igor <[email protected]>
(cherry picked from commit 0a16e9e)

Co-authored-by: George Mitenkov <[email protected]>
Labels
CICD:run-e2e-tests, CICD:run-execution-performance-full-test, CICD:run-execution-performance-test, v1.24