
Merge tag 'v4.1.0' into develop #1

Merged
298 commits merged into synapseweb3:develop from pr/merge-4.1.0
Apr 23, 2023

Conversation

yangby-cryptape
Collaborator

Synchronize the develop branch with the last stable version in the upstream
repository.

It's better to use the "Squash and merge" option when merging this PR.

realbigsean and others added 30 commits October 5, 2022 17:14
* Ran Cargo fmt

* Added Capella Data Structures to consensus/types
FIXME's:
 * consensus/fork_choice/src/fork_choice.rs
 * consensus/state_processing/src/per_epoch_processing/capella.rs
 * consensus/types/src/execution_payload_header.rs
 
TODO's:
 * consensus/state_processing/src/per_epoch_processing/capella/partial_withdrawals.rs
 * consensus/state_processing/src/per_epoch_processing/capella/full_withdrawals.rs
* add capella gossip boilerplate

* get everything compiling

Co-authored-by: realbigsean <[email protected]>
Co-authored-by: Mark Mackey <[email protected]>

* small cleanup

* small cleanup

* cargo fix + some test cleanup

* improve block production

* add fixme for potential panic

Co-authored-by: Mark Mackey <[email protected]>
* Revert "Add more gossip verification conditions"

This reverts commit 1430b56.

* Revert "Add todos"

This reverts commit 91efb9d.

* Revert "Reprocess blob sidecar messages"

This reverts commit 21bf3d3.

* Add the coupled topic

* Decode SignedBeaconBlockAndBlobsSidecar correctly

* Process Block and Blobs in beacon processor

* Remove extra blob publishing logic from vc

* Remove blob signing in vc

* Ugly hack to compile
* add eip4844 block processing

* fix blob processing code

* consensus logic fixes and cleanup

* use safe arith
* Add transparent support

* Add `Config` struct

* Deprecate `enum_behaviour`

* Partially remove enum_behaviour from project

* Revert "Partially remove enum_behaviour from project"

This reverts commit 46ffb7f.

* Revert "Deprecate `enum_behaviour`"

This reverts commit 89b64a6.

* Add `struct_behaviour`

* Tidy

* Move tests into `ssz_derive`

* Bump ssz derive

* Fix comment

* newtype transparent ssz

* use ssz transparent and create macros for per-fork implementations

* use superstruct map macros

Co-authored-by: Paul Hauner <[email protected]>
* start feature gating

* feature gate withdrawals
* Fixes to make EF Capella tests pass

* Clippy for state_processing
* Massive Update to Engine API

* Update beacon_node/execution_layer/src/engine_api/json_structures.rs

Co-authored-by: Michael Sproul <[email protected]>

* Update beacon_node/execution_layer/src/engine_api/json_structures.rs

Co-authored-by: Michael Sproul <[email protected]>

* Update beacon_node/beacon_chain/src/execution_payload.rs

Co-authored-by: realbigsean <[email protected]>

* Update beacon_node/execution_layer/src/engine_api.rs

Co-authored-by: realbigsean <[email protected]>

Co-authored-by: Michael Sproul <[email protected]>
Co-authored-by: realbigsean <[email protected]>
- return `None` on pre-4844 blob requests
int88 and others added 16 commits April 3, 2023 03:02
## Issue Addressed

NA

## Proposed Changes

Remove the duplicate log message.

## Additional Info

NA
> This is currently a WIP and all features are subject to alteration or removal at any time.

## Overview

The successor to sigp#2873.

Contains the backbone of `beacon.watch` including syncing code, the initial API, and several core database tables.

See `watch/README.md` for more information, requirements and usage.
## Issue Addressed

Attempt to fix sigp#3812 

## Proposed Changes

Move the web3signer binary download script out of the build script to avoid downloading it unless necessary. If this works, it should also reduce the build time for all jobs that run compilation.
## Issue Addressed

N/A

## Proposed Changes

Adds a flag for disabling peer scoring. This is useful for local testing and testing small networks for new features.
…gp#4160)

## Issue Addressed

sigp#4146 

## Proposed Changes

Removes the `ExecutionOptimisticForkVersionedResponse` type and the associated Beacon API endpoint, which is now deprecated. Also removes the test associated with the endpoint.
I realized this is redundant while reasoning about how the `store` is implemented given the [definition of `ItemStore`](https://github.com/sigp/lighthouse/blob/v4.0.1/beacon_node/store/src/lib.rs#L107):
```rust
pub trait ItemStore<E: EthSpec>: KeyValueStore<E> + Sync + Send + Sized + 'static {
    ...
}
```
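As an aside, the redundancy presumably follows from the supertrait bound shown above. Here is a minimal self-contained sketch (toy traits for illustration, not Lighthouse's actual definitions) of why a separate `KeyValueStore` bound adds nothing once `ItemStore` is required:

```rust
// Toy traits that only mirror the supertrait relationship above.
trait KeyValueStore {
    fn get_bytes(&self, key: &[u8]) -> Option<Vec<u8>>;
}

// `ItemStore` already requires `KeyValueStore` as a supertrait...
trait ItemStore: KeyValueStore + Sync + Send + Sized + 'static {}

// ...so a single `S: ItemStore` bound is enough to call `get_bytes`;
// adding an explicit `S: KeyValueStore` bound on top of it is redundant.
fn read<S: ItemStore>(store: &S, key: &[u8]) -> Option<Vec<u8>> {
    store.get_bytes(key)
}
```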
## Proposed Changes

This change attempts to prevent failed re-orgs by:

1. Lowering the re-org cutoff from 2s to 1s. This is informed by a failed re-org attempted by @yorickdowne's node. The failed block was requested in the 1.5-2s window due to a Vouch failure, and failed to propagate to the majority of the network before the attestation deadline at 4s.
2. Allowing users to adjust their re-org cutoff depending on observed network conditions and their risk profile. The static 2-second cutoff was too rigid.
3. Adding a `--proposer-reorg-disallowed-offsets` flag which can be used to prohibit re-orgs at certain slot offsets (see the sketch below). This is intended to help work around an issue whereby re-orging blocks at slot 1 currently take ~1.6s to propagate on gossip rather than ~500ms. This is suspected to be due to a cache miss in current versions of Prysm, which should be fixed in their next release.
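
For illustration, here is a minimal sketch (hypothetical names, not the actual Lighthouse implementation) of how a disallowed-offsets check can gate a proposer re-org attempt:

```rust
const SLOTS_PER_EPOCH: u64 = 32;

/// Returns `false` if the proposal slot falls on an offset within the epoch
/// that the operator has prohibited via `--proposer-reorg-disallowed-offsets`.
fn re_org_offset_permitted(proposal_slot: u64, disallowed_offsets: &[u64]) -> bool {
    let offset = proposal_slot % SLOTS_PER_EPOCH;
    !disallowed_offsets.contains(&offset)
}

fn main() {
    // Prohibiting offset 1 blocks a re-org attempt at slot 33 (offset 1)...
    assert!(!re_org_offset_permitted(33, &[1]));
    // ...but still allows one at slot 34 (offset 2).
    assert!(re_org_offset_permitted(34, &[1]));
}
```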

## Additional Info

I'm of two minds about removing the `shuffling_stable` check, which checks for blocks at slot 0 in the epoch. If we removed it, users would be able to configure Lighthouse to try re-orging at slot 0, which likely wouldn't work very well due to interactions with the proposer index cache. I think we could leave it for now and revisit it later.
## Proposed Changes

We already make some attempts to avoid processing RPC blocks when a block from the same proposer is already being processed through gossip. This PR strengthens that guarantee by using the existing cache for `observed_block_producers` to inform whether an RPC block's processing should be delayed.
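
Conceptually, the check is along these lines (a sketch with stand-in types, not the real beacon processor or cache API):

```rust
use std::collections::HashSet;

// Stand-in for the `observed_block_producers` cache, keyed by (proposer, slot).
struct ObservedBlockProducers(HashSet<(u64, u64)>);

enum RpcBlockAction {
    /// A block from this proposer/slot is already being processed via gossip,
    /// so hold the RPC copy back instead of doing the work twice.
    Delay,
    /// Nothing in flight for this proposer/slot; process the RPC block now.
    Process,
}

fn classify_rpc_block(cache: &ObservedBlockProducers, proposer: u64, slot: u64) -> RpcBlockAction {
    if cache.0.contains(&(proposer, slot)) {
        RpcBlockAction::Delay
    } else {
        RpcBlockAction::Process
    }
}
```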
## Issue Addressed

Updated Section 2 of the Lighthouse book and added some FAQs

## Proposed Changes

All changes are made in the book/src .md files.

## Additional Info



Co-authored-by: chonghe <[email protected]>
Co-authored-by: Michael Sproul <[email protected]>
## Issue Addressed

NA

## Proposed Changes

Similar to sigp#4181 but without the version bump and a more nuanced fix.

Patches the high CPU usage seen after the Capella fork, which was caused by processing exits when there are skip slots.

## Additional Info

~~This is an imperfect solution that will cause us to drop some exits at the fork boundary. This is tracked at sigp#4184.~~
## Issue Addressed

NA

## Proposed Changes

Apply two changes to code introduced in sigp#4179:

1. Remove the `ERRO` log for when we error on `proposer_has_been_observed()`. We were seeing a lot of this in our logs for finalized blocks and it's a bit noisy.
2. Use `false` rather than `true` for `proposal_already_known` when there is an error (sketched after the log below). If a block raises an error in `proposer_has_been_observed()` then the block must be invalid, so we should process (and reject) it now rather than queuing it.

For reference, here is one of the offending `ERRO` logs:

```
ERRO Failed to check observed proposers block_root: 0x5845…878e, source: rpc, error: FinalizedBlock { slot: Slot(5410983), finalized_slot: Slot(5411232) }
```
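
In other words, the error case now resolves to "not already known", roughly like this sketch (toy error type, not the actual Lighthouse code):

```rust
#[derive(Debug)]
enum ObserveError {
    // Mirrors the shape of the error in the log above: the block is prior to
    // the finalized slot, so it can never be valid.
    FinalizedBlock { slot: u64, finalized_slot: u64 },
}

fn proposal_already_known(check: Result<bool, ObserveError>) -> bool {
    match check {
        Ok(seen) => seen,
        // An error means the block must be invalid, so report it as *not*
        // already known: it is processed (and rejected) immediately instead
        // of being queued. Previously this arm returned `true`.
        Err(_) => false,
    }
}
```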

## Additional Info

NA
## Proposed Changes

Builds on sigp#4028 to use the new payload bodies methods in the HTTP API as well.

## Caveats

The payloads by range method only works for the finalized chain, so it can't be used in the execution engine integration tests because we try to reconstruct unfinalized payloads there.
## Issue Addressed

Closes sigp#4185

## Proposed Changes

- Set user agent to `Lighthouse/vX.Y.Z-<commit hash>` by default
- Allow tweaking the user agent via `--builder-user-agent "agent"` (sketched below)
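
A rough sketch of the intended precedence (placeholder constants; the real values come from Lighthouse's build-time version metadata):

```rust
// Placeholder version metadata, standing in for the values embedded at build time.
const VERSION: &str = "X.Y.Z";
const COMMIT_HASH: &str = "abcdef12";

fn builder_user_agent(cli_override: Option<String>) -> String {
    // A value passed via `--builder-user-agent "agent"` wins; otherwise fall
    // back to the `Lighthouse/vX.Y.Z-<commit hash>` default.
    cli_override.unwrap_or_else(|| format!("Lighthouse/v{}-{}", VERSION, COMMIT_HASH))
}

fn main() {
    assert_eq!(builder_user_agent(None), "Lighthouse/vX.Y.Z-abcdef12");
    assert_eq!(builder_user_agent(Some("agent".into())), "agent");
}
```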
## Issue Addressed

There was a [`VecDeque` bug](rust-lang/rust#108453) in some recent versions of the Rust standard library (1.67.0 & 1.67.1) that could cause Lighthouse to panic (reported by `@Sea Monkey` on discord). See full logs below.

The issue was likely introduced in Rust 1.67.0 and [fixed](rust-lang/rust#108475) in 1.68, and we were able to reproduce the panic ourselves using [@michaelsproul's fuzz tests](https://github.com/michaelsproul/lighthouse/blob/fuzz-lru-time-cache/beacon_node/lighthouse_network/src/peer_manager/fuzz.rs#L111) on both Rust 1.67.0 and 1.67.1. 

Users that use our Docker images or binaries are unlikely to be affected, as our Docker images were built with `1.66` and the latest binaries were built with the latest stable (`1.68.2`). It likely impacts users that build from source using Rust versions 1.67.x.

## Proposed Changes

Bump Rust version (MSRV) to latest stable `1.68.2`. 

## Additional Info

From `@Sea Monkey` on Lighthouse Discord:

> Crash on goerli using `unstable` `dd124b2d6804d02e4e221f29387a56775acccd08`

```
thread 'tokio-runtime-worker' panicked at 'Key must exist', /mnt/goerli/goerli/lighthouse/common/lru_cache/src/time.rs:68:28
stack backtrace:
Apr 15 09:37:36.993 WARN Peer sent invalid block in single block lookup, peer_id: 16Uiu2HAm6ZuyJpVpR6y51X4Enbp8EhRBqGycQsDMPX7e5XfPYznG, error: WouldRevertFinalizedSlot { block_slot: Slot(5420212), finalized_slot: Slot(5420224) }, root: 0x10f6…3165, service: sync
   0: rust_begin_unwind
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/std/src/panicking.rs:575:5
   1: core::panicking::panic_fmt
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:64:14
   2: core::panicking::panic_display
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:135:5
   3: core::panicking::panic_str
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/panicking.rs:119:5
   4: core::option::expect_failed
             at /rustc/d5a82bbd26e1ad8b7401f6a718a9c57c96905483/library/core/src/option.rs:1879:5
   5: lru_cache::time::LRUTimeCache<Key>::raw_remove
   6: lighthouse_network::peer_manager::PeerManager<TSpec>::handle_ban_operation
   7: lighthouse_network::peer_manager::PeerManager<TSpec>::handle_score_action
   8: lighthouse_network::peer_manager::PeerManager<TSpec>::report_peer
   9: network::service::NetworkService<T>::spawn_service::{{closure}}
  10: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
  11: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
  12: <futures_util::future::future::flatten::Flatten<Fut,<Fut as core::future::future::Future>::Output> as core::future::future::Future>::poll
  13: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  14: tokio::runtime::task::core::Core<T,S>::poll
  15: tokio::runtime::task::harness::Harness<T,S>::poll
  16: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  17: tokio::runtime::scheduler::multi_thread::worker::Context::run
  18: tokio::macros::scoped_tls::ScopedKey<T>::set
  19: tokio::runtime::scheduler::multi_thread::worker::run
  20: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  21: tokio::runtime::task::core::Core<T,S>::poll
  22: tokio::runtime::task::harness::Harness<T,S>::poll
  23: tokio::runtime::blocking::pool::Inner::run
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Apr 15 09:37:37.069 INFO Saved DHT state                         service: network
Apr 15 09:37:37.070 INFO Network service shutdown                service: network
Apr 15 09:37:37.132 CRIT Task panic. This is a bug!              advice: Please check above for a backtrace and notify the developers, message: <none>, task_name: network
Apr 15 09:37:37.132 INFO Internal shutdown received              reason: Panic (fatal error)
Apr 15 09:37:37.133 INFO Shutting down..                         reason: Failure("Panic (fatal error)")
Apr 15 09:37:37.135 WARN Unable to free worker                   error: channel closed, msg: did not free worker, shutdown may be underway
Apr 15 09:37:39.350 INFO Saved beacon chain to disk              service: beacon
Panic (fatal error)
```
## Issue Addressed

NA

## Proposed Changes

Avoids reprocessing loops introduced in sigp#4179. (Also somewhat related to sigp#4192).

Breaks the re-queue loop by only re-queuing when an RPC block is received before the attestation creation deadline.

I've put `proposal_is_known` behind a closure to avoid interacting with the `observed_proposers` lock unnecessarily. 
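
A minimal sketch of both ideas together (toy types, not the actual beacon processor code):

```rust
use std::sync::Mutex;
use std::time::Instant;

// Stand-in for the lock-protected `observed_proposers` set, keyed by
// (proposer_index, slot).
struct ObservedProposers(Mutex<Vec<(u64, u64)>>);

impl ObservedProposers {
    fn contains(&self, proposer: u64, slot: u64) -> bool {
        self.0.lock().unwrap().contains(&(proposer, slot))
    }
}

fn should_requeue_rpc_block(
    observed: &ObservedProposers,
    proposer: u64,
    slot: u64,
    received_at: Instant,
    attestation_deadline: Instant,
) -> bool {
    // The closure defers the lock access, and `&&` short-circuits, so the
    // `observed_proposers` lock is only taken once the deadline check passes.
    let proposal_is_known = || observed.contains(proposer, slot);
    received_at < attestation_deadline && proposal_is_known()
}
```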

## Additional Info

NA
## Issue Addressed

NA

## Proposed Changes

Bump versions.

## Additional Info

NA
ashuralyk previously approved these changes on Apr 21, 2023

ashuralyk left a comment:


This PR can't really be reviewed because it's too big, so it's on you to double-check it.

ImJeremyHe previously approved these changes on Apr 21, 2023
fjchen7 previously approved these changes on Apr 21, 2023
ashuralyk left a comment:

The CI issues are fine by me; they're not our fault.

ashuralyk merged commit 3bbabed into synapseweb3:develop on Apr 23, 2023
yangby-cryptape deleted the pr/merge-4.1.0 branch on April 24, 2023 at 06:36