feat(avm): storage #4673

Merged · 50 commits merged into master · Mar 6, 2024
Conversation

@Maddiaa0 (Member) commented Feb 19, 2024

Overview

Works around the Brillig blowup issue by altering the read and write opcodes to take in arrays of data. This is potentially just a short-term fix.

  • Reading and writing to storage now take in arrays; code will not compile without this change, due to an SSA issue -> ISSUE (AztecProtocol/aztec-packages#4979)

  • Tag checks on memory now just make sure the tag is at most uint64, rather than asserting that the memory tag is uint32; this should be fine (see the sketch after this list).

  • We had to blow up the memory space of the AVM to u64, as the entire Noir compiler works with u64s. Arrays will not work unless we either:
    • make the AVM 64-bit addressable (probably fine), or
    • make Noir 32-bit addressable (requires a lot of buy-in): AztecProtocol/aztec-packages#4814
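A minimal sketch of the relaxed tag check described in the second bullet. The enum ordering and class here are illustrative assumptions, not the actual simulator types; the idea is only that an offset's tag is accepted as long as it is no wider than uint64, instead of being required to be exactly uint32.

```typescript
// Illustrative only: names and enum ordering are assumptions, not the real
// TaggedMemory API. Integer tags are ordered by width so "<=" compares widths.
enum TypeTag {
  UINT8,
  UINT16,
  UINT32,
  UINT64,
  UINT128,
  FIELD,
}

class TaggedMemorySketch {
  private tags = new Map<number, TypeTag>();

  public setTag(offset: number, tag: TypeTag) {
    this.tags.set(offset, tag);
  }

  // Relaxed check: any integer tag up to and including `maxTag` is accepted.
  public checkTagLessThanEqual(maxTag: TypeTag, offset: number) {
    const tag = this.tags.get(offset);
    if (tag === undefined || tag > maxTag) {
      throw new Error(`Offset ${offset} has tag ${tag}, expected at most ${TypeTag[maxTag]}`);
    }
  }
}

// A uint32-tagged offset and a uint64-tagged offset both pass; a field element does not.
const mem = new TaggedMemorySketch();
mem.setTag(0, TypeTag.UINT32);
mem.checkTagLessThanEqual(TypeTag.UINT64, 0); // ok
mem.setTag(1, TypeTag.FIELD);
// mem.checkTagLessThanEqual(TypeTag.UINT64, 1); // would throw
```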

subrepo:
  subdir:   "noir"
  merged:   "c44ef1484"
upstream:
  origin:   "https://github.com/noir-lang/noir"
  branch:   "master"
  commit:   "c44ef1484"
git-subrepo:
  version:  "0.4.6"
  origin:   "???"
  commit:   "???"
Maddiaa0 (Member, Author) commented Feb 19, 2024

@@ -42,7 +42,7 @@ pub(crate) const BRILLIG_INTEGER_ARITHMETIC_BIT_SIZE: u32 = 127;
 /// The Brillig VM does not apply a limit to the memory address space,
 /// As a convention, we take use 64 bits. This means that we assume that
 /// memory has 2^64 memory slots.
-pub(crate) const BRILLIG_MEMORY_ADDRESSING_BIT_SIZE: u32 = 32;
+pub(crate) const BRILLIG_MEMORY_ADDRESSING_BIT_SIZE: u32 = 64; // This did not match the comment
Maddiaa0 (Member, Author):
@sirasistant is this legal

Collaborator:
Note that AVM has 32-bit memory address space. Not sure if that matters, but maybe Brillig should have a 32-bit space too?

Maddiaa0 (Member, Author):
The reason I changed it is that the SSA bit size was 64 bits, which meant that when Brillig took a constant from SSA alongside a constant generated in Brillig, the tags would mismatch and operations on them would fail.

I can go deeper into the compiler and see what happens if I switch SSA to assign constants as 32 bits rather than 64, but it will take a bit more work.
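To make the failure mode concrete, here is a hypothetical TypeScript illustration (not the actual Brillig VM code) of why mismatched constant tags break operations: if SSA lowers constants with a 64-bit tag while Brillig-generated constants carry a 32-bit tag, a binary op that requires both operands to share a tag will throw.

```typescript
// Hypothetical illustration only; the real VM's tag representation differs.
type BitSize = 32 | 64;

interface TaggedValue {
  value: bigint;
  bitSize: BitSize;
}

// Binary ops typically require both operands to carry the same integer tag.
function add(lhs: TaggedValue, rhs: TaggedValue): TaggedValue {
  if (lhs.bitSize !== rhs.bitSize) {
    throw new Error(`tag mismatch: u${lhs.bitSize} vs u${rhs.bitSize}`);
  }
  return { value: lhs.value + rhs.value, bitSize: lhs.bitSize };
}

const fromSsa: TaggedValue = { value: 5n, bitSize: 64 };     // constant lowered from SSA
const fromBrillig: TaggedValue = { value: 3n, bitSize: 32 }; // constant produced by Brillig codegen
// add(fromSsa, fromBrillig); // throws: the tags disagree, even though the values are fine
```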

@AztecBot (Collaborator) commented Feb 19, 2024

Benchmark results

Metrics with a significant change:

  • note_trial_decrypting_time_in_ms (8): 6.86 (-94%)
Detailed results

All benchmarks are run on txs on the Benchmarking contract on the repository. Each tx consists of a batch call to create_note and increment_balance, which guarantees that each tx has a private call, a nested private call, a public call, and a nested public call, as well as an emitted private note, an unencrypted log, and public storage read and write.

This benchmark source data is available in JSON format on S3 here.

Values are compared against data from master at commit 6eb6778c and shown if the difference exceeds 1%.

L2 block published to L1

Each column represents the number of txs on an L2 block published to L1.

| Metric | 8 txs | 32 txs | 64 txs |
| --- | --- | --- | --- |
| l1_rollup_calldata_size_in_bytes | 5,700 | 18,884 | 36,452 |
| l1_rollup_calldata_gas | 66,120 | 238,940 | 469,844 |
| l1_rollup_execution_gas | 194,068 | 500,246 | 909,102 |
| l2_block_processing_time_in_ms | 1,173 (-1%) | 4,467 | 8,867 |
| note_successful_decrypting_time_in_ms | 199 (+1%) | 540 | 986 (-1%) |
| note_trial_decrypting_time_in_ms | ⚠️ 6.86 (-94%) | 62.1 (-7%) | 117 (-8%) |
| l2_block_building_time_in_ms | 16,073 (-1%) | 64,415 (-1%) | 128,336 (-1%) |
| l2_block_rollup_simulation_time_in_ms | 12,216 (-1%) | 49,105 | 97,805 (-1%) |
| l2_block_public_tx_process_time_in_ms | 3,828 (-1%) | 15,245 (-1%) | 30,367 (-1%) |

L2 chain processing

Each column represents the number of blocks on the L2 chain where each block has 16 txs.

| Metric | 5 blocks | 10 blocks |
| --- | --- | --- |
| node_history_sync_time_in_ms | 14,066 (-2%) | 26,903 (-3%) |
| note_history_successful_decrypting_time_in_ms | 1,238 (-1%) | 2,379 (-1%) |
| note_history_trial_decrypting_time_in_ms | 86.2 (-11%) | 135 (-4%) |
| node_database_size_in_bytes | 18,800,720 | 35,536,976 (+1%) |
| pxe_database_size_in_bytes | 29,923 | 59,478 |

Circuits stats

Stats on running time and I/O sizes collected for every circuit run across all benchmarks.

| Circuit | circuit_simulation_time_in_ms | circuit_input_size_in_bytes | circuit_output_size_in_bytes |
| --- | --- | --- | --- |
| private-kernel-init | 252 | 44,736 | 28,001 |
| private-kernel-ordering | 192 | 52,625 | 14,627 |
| base-rollup | 1,313 | 177,932 | 933 |
| root-rollup | 70.4 (-1%) | 4,192 | 825 |
| private-kernel-inner | 320 | 73,715 | 28,001 |
| public-kernel-app-logic | 195 | 32,254 | 25,379 |
| merge-rollup | 5.74 | 2,712 | 933 |

Tree insertion stats

The duration to insert a fixed batch of leaves into each tree type.

| Metric | 1 leaves | 2 leaves | 8 leaves | 16 leaves | 32 leaves | 64 leaves | 128 leaves | 512 leaves | 1024 leaves | 2048 leaves | 4096 leaves |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| batch_insert_into_append_only_tree_16_depth_ms | 9.80 (-1%) | 10.3 (+1%) | 14.8 (-2%) | 16.4 (+2%) | 22.1 (-1%) | 35.2 | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_count | 16.9 | 17.5 | 23.0 | 31.6 | 47.0 | 79.0 | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_16_depth_hash_ms | 0.569 (-1%) | 0.571 (+1%) | 0.630 (-2%) | 0.505 (+2%) | 0.462 (-1%) | 0.439 | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_append_only_tree_32_depth_ms | N/A | N/A | N/A | N/A | N/A | 45.6 | 72.1 | 229 | 443 | 866 | 1,721 |
| batch_insert_into_append_only_tree_32_depth_hash_count | N/A | N/A | N/A | N/A | N/A | 96.0 | 159 | 543 | 1,055 | 2,079 | 4,127 |
| batch_insert_into_append_only_tree_32_depth_hash_ms | N/A | N/A | N/A | N/A | N/A | 0.468 | 0.445 | 0.419 | 0.414 | 0.412 | 0.412 |
| batch_insert_into_indexed_tree_20_depth_ms | N/A | N/A | N/A | N/A | N/A | 53.5 (-1%) | 108 | 332 (-2%) | 657 | 1,306 | 2,598 |
| batch_insert_into_indexed_tree_20_depth_hash_count | N/A | N/A | N/A | N/A | N/A | 104 | 207 | 691 | 1,363 | 2,707 | 5,395 |
| batch_insert_into_indexed_tree_20_depth_hash_ms | N/A | N/A | N/A | N/A | N/A | 0.477 | 0.485 | 0.454 (-1%) | 0.452 | 0.453 | 0.452 |
| batch_insert_into_indexed_tree_40_depth_ms | N/A | N/A | N/A | N/A | 60.8 | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_indexed_tree_40_depth_hash_count | N/A | N/A | N/A | N/A | 109 | N/A | N/A | N/A | N/A | N/A | N/A |
| batch_insert_into_indexed_tree_40_depth_hash_ms | N/A | N/A | N/A | N/A | 0.532 | N/A | N/A | N/A | N/A | N/A | N/A |

Miscellaneous

Transaction sizes based on how many contracts are deployed in the tx.

| Metric | 0 deployed contracts |
| --- | --- |
| tx_size_in_bytes | 19,179 |

Transaction processing duration by data writes.

| Metric | 0 new note hashes | 1 new note hashes |
| --- | --- | --- |
| tx_pxe_processing_time_ms | 2,606 (-1%) | 1,368 |

| Metric | 0 public data writes | 1 public data writes |
| --- | --- | --- |
| tx_sequencer_processing_time_ms | 0.0301 | 476 |

Comment on lines 53 to 55
      case AddressingMode.INDIRECT:
-       mem.checkTag(TypeTag.UINT32, offset);
+       mem.checkTag(TypeTag.UINT64, offset); // brillig word size is 64 bits
        resolved[i] = Number(mem.get(offset).toBigInt());
Collaborator:
🤔 we shouldn't have to do this since AVM memory offsets are all supposed to be 32 bits

Maddiaa0 (Member, Author):
This is a consequence of the comment above: the address spaces of Brillig and SSA were misaligned. As long as no one is trying to use the AVM this week, I can wait for that to be resolved and just stack on this PR.

@dbanks12 (Collaborator) left a comment:

LGTM! The only weirdness is the 64-bit addressing in Brillig and the fact that it forces us to expect 64-bit words as AVM memory offsets.

Base automatically changed from md/02-09-feat_avm_hashing_to_simulator to master February 20, 2024 01:39
@Maddiaa0 Maddiaa0 marked this pull request as ready for review March 5, 2024 10:38
@Maddiaa0 Maddiaa0 requested a review from fcarreiro March 6, 2024 11:10
@fcarreiro (Contributor) left a comment:

Please link the GitHub issue for this PR in the description (but don't close it until we drop the arrays :))

avm-transpiler/src/transpile.rs (resolved)
avm-transpiler/src/transpile.rs (outdated, resolved)
avm-transpiler/src/transpile.rs (outdated, resolved)
yarn-project/simulator/src/avm/avm_simulator.test.ts (outdated, resolved)
yarn-project/simulator/src/avm/avm_simulator.test.ts (outdated, resolved)
yarn-project/simulator/src/avm/avm_simulator.test.ts (outdated, resolved)
@@ -255,6 +255,12 @@ export class TaggedMemory {
     }
   }

+  public checkTagLessThanEqual(tag: TypeTag, offset: number) {
fcarreiro (Contributor):
Consider having checkIsValidMemoryOffsetTag() or something like that, which we can then just use everywhere we need to check a memory offset tag.
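A minimal sketch of what that suggestion might look like, with hypothetical names (the helper name comes from the review comment, everything else is assumed): a single place that encodes "this offset must be usable as a memory address", so individual opcodes never hard-code a particular tag.

```typescript
// Sketch only; not the actual TaggedMemory implementation.
class TaggedMemoryWithHelper {
  public checkTagLessThanEqual(maxTag: number, offset: number) {
    // ...the relaxed check introduced in this PR would live here...
  }

  // Centralised rule for "is this a valid memory offset?". If the word-size
  // decision ever changes (e.g. back to 32 bits), only this helper changes.
  public checkIsValidMemoryOffsetTag(offset: number) {
    const UINT64_TAG = 3; // placeholder for TypeTag.UINT64
    this.checkTagLessThanEqual(UINT64_TAG, offset);
  }
}
```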

Maddiaa0 (Member, Author):
In place of this, or in addition?

-  public write(storageAddress: Fr, key: Fr, value: Fr) {
-    this.cache.write(storageAddress, key, value);
+  public write(storageAddress: Fr, key: Fr, values: Fr[]) {
+    for (const [index, value] of Object.entries(values)) {
fcarreiro (Contributor):
You don't need to do this here. You can do the loop in the opcode (opcodes/storage.ts) and write your journal/trace accesses/tests the way you want them to be in the final version. Then, when you can do the loop in Noir, you just change the opcode.
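A short sketch of that suggestion using hypothetical stand-ins (not the actual simulator classes): the journal keeps a single-valued write(), and the per-element loop lives in the SSTORE opcode, so only the opcode needs to change once Noir can emit single-slot writes again.

```typescript
// Hypothetical sketch; Fr, the journal, and the opcode shape are simplified stand-ins.
class Fr {
  constructor(public readonly value: bigint) {}
  add(other: Fr): Fr {
    return new Fr(this.value + other.value);
  }
}

class JournalSketch {
  private cache = new Map<string, Fr>();

  // Stays single-valued: one (address, slot) -> value entry per call.
  public write(storageAddress: Fr, slot: Fr, value: Fr) {
    this.cache.set(`${storageAddress.value}:${slot.value}`, value);
  }
}

class SStoreSketch {
  constructor(private srcOffset: number, private size: number) {}

  // The per-element loop lives in the opcode: consecutive memory words are
  // written to consecutive storage slots starting at baseSlot.
  public execute(memory: Fr[], journal: JournalSketch, storageAddress: Fr, baseSlot: Fr) {
    for (let i = 0; i < this.size; i++) {
      journal.write(storageAddress, baseSlot.add(new Fr(BigInt(i))), memory[this.srcOffset + i]);
    }
  }
}
```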

@@ -64,6 +64,24 @@ describe('Hashing Opcodes', () => {
const result = context.machineState.memory.get(dstOffset);
fcarreiro (Contributor):
I'm not against more tests, but unrelated to this PR? :)

@fcarreiro fcarreiro self-requested a review March 6, 2024 17:01
@fcarreiro (Contributor) left a comment:
Thanks!

@Maddiaa0 Maddiaa0 enabled auto-merge (squash) March 6, 2024 17:35
@Maddiaa0 Maddiaa0 merged commit bfdbf2e into master Mar 6, 2024
19 of 20 checks passed
@Maddiaa0 Maddiaa0 deleted the md/02-16-feat_avm_storage branch March 6, 2024 18:43
AztecBot added a commit to noir-lang/noir that referenced this pull request Mar 6, 2024
## Overview
Works around brillig blowup issue by altering the read and write opcodes to take in arrays of data. This is potentially just a short term fix.

- Reading and writing to storage now take in arrays; code will not compile without this change, due to an ssa issue -> [ISSUE](AztecProtocol/aztec-packages#4979)

- Tag checks on memory now just make sure the tag is LESS than uint64, rather than asserting that the memory tag is uint32; this should be fine.

- We had to blow up the memory space of the avm to u64 as the entire noir compiler works with u64s. Arrays will not work unless we either
    - Make the avm 64 bit addressable (probably fine)
    - Make noir 32 bit addressable (requires alot of buy in): AztecProtocol/aztec-packages#4814

---------

Co-authored-by: sirasistant <[email protected]>
ludamad added a commit that referenced this pull request Mar 7, 2024
ludamad added a commit that referenced this pull request Mar 7, 2024
Reverts #4673 due to an uncaught issue with
end-to-end tests
AztecBot added a commit to noir-lang/noir that referenced this pull request Mar 7, 2024
AztecBot added a commit to noir-lang/noir that referenced this pull request Mar 7, 2024
AztecBot pushed a commit to AztecProtocol/aztec-nr that referenced this pull request Mar 19, 2024
Reverts AztecProtocol/aztec-packages#4673 due to an uncaught issue with
end-to-end tests
superstar0402 added a commit to superstar0402/aztec-nr that referenced this pull request Aug 16, 2024
Reverts AztecProtocol/aztec-packages#4673 due to an uncaught issue with
end-to-end tests
Labels: none yet · Projects: none yet · 5 participants