Summary
This low-priority major release introduces new features and optimisations which are backwards-incompatible with Lighthouse v5.x.y. If you are running Nethermind, we recommend waiting for the release of Nethermind v1.30.0 before upgrading, due to an incompatibility (see Known Issues).
After many months of testing, this release stabilises hierarchical state diffs, resulting in much more compact archive nodes! This long-awaited feature was also known as on-disk tree-states, and was available for pre-release testing in Lighthouse v5.1.222-exp.
Other notable changes include:
- Removal and deprecation of several old CLI flags. Some flags must be removed for Lighthouse to start; see below for a full list.
- Improved beacon node failover and prioritisation in the validator client.
- Support for `engine_getBlobsV1` to speed up import and propagation of blobs.
- Optimised peer discovery and long-term subnet subscription logic.
- New commands for `lighthouse validator-manager`.
- Improved light client support, enabled via `--light-client-server`.
- SSZ by default for blocks published by the VC.
⚠️ Breaking Changes ⚠️
Upgrading to Lighthouse v6.0.0 should be automatic for most users, but you must:
- Remove any unsupported CLI flags (see below), and
- Be aware of the one-way database migration and the changes to archive nodes.
Once you upgrade a beacon node to Lighthouse v6.0.0, you cannot downgrade to v5.x.y without re-syncing.
⚠️ Database Migration ⚠️
The beacon node database migration for v6.0.0 is applied automatically upon upgrading. No manual action is required to upgrade.
There is no database downgrade available. We did not make this decision lightly, but in order to deliver hierarchical state diffs, a one-way database migration was simplest. If you do find yourself wanting to downgrade, re-syncing using checkpoint sync is highly recommended, as it will get the node back online in just a few minutes.
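For example, a fresh checkpoint sync after removing the old database might look like the sketch below. The provider URL is a placeholder; substitute a checkpoint sync endpoint you trust.

```bash
# Sketch: re-sync a beacon node from a checkpoint sync provider.
# The URL and datadir below are placeholders, not real values.
lighthouse bn \
  --network mainnet \
  --checkpoint-sync-url https://checkpoint.example.com \
  --datadir /var/lib/lighthouse
```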
For Archive Nodes
The migration enables hierarchical state diffs, which necessitates the deletion of previously stored historic states. If you are running an archive node, then all historic states will be deleted upon upgrading. If you would like to continue running an archive node, you should use the `--reconstruct-historic-states` flag so that state reconstruction can restart from slot 0.
If you would like to change the density of diffs, you can use the new flag `--hierarchy-exponents`, which should be applied the first time you start after upgrading. We have found that the `--hierarchy-exponents` configuration does not greatly impact query times, which tend to be dominated by cache builds and affected more by query ordering. We still recommend avoiding parallel state queries at the same slot, and making use of sequential calls where possible (e.g. in indexing services). We plan to continue optimising parallel queries and cache builds in future releases, without requiring a re-sync.
For more information on configuring the hierarchy exponents see the updated documentation on Database Configuration in the Lighthouse book.
| Hierarchy Exponents | Storage requirement | Sequential slot query | Uncached query |
|---|---|---|---|
| 5,9,11,13,16,18,21 (default) | 418 GiB | 250-700 ms | up to 10 s |
| 5,7,11 (frequent snapshots) | 589 GiB | 250-700 ms | up to 6 s |
| 0,5,7,11 (per-slot diffs) | 1915 GiB+ | 250-700 ms | up to 2 s |
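For example, an archive node operator who wants denser snapshots could restart the upgraded node as in the sketch below, assuming `--hierarchy-exponents` accepts the comma-separated values shown in the table above:

```bash
# Sketch: restart an archive node after upgrading with more frequent
# snapshots. --hierarchy-exponents should be set on the first startup after
# upgrading; --reconstruct-historic-states restarts reconstruction from slot 0.
lighthouse bn \
  --reconstruct-historic-states \
  --hierarchy-exponents 5,7,11
```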
As part of the archive node changes, the format of the "anchor" has also changed. For an archive node the anchor will no longer be `null` and will instead take the value:

```json
"anchor": {
  "anchor_slot": "0",
  "oldest_block_slot": "0",
  "oldest_block_parent": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "state_upper_limit": "0",
  "state_lower_limit": "0"
}
```
Don't be put off by the `state_upper_limit` being equal to 0: this indicates that all states with slots >= 0 are available, i.e. full state history.
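You can inspect the anchor on your own node via Lighthouse's database info endpoint, assuming the HTTP API is enabled on the default port 5052:

```bash
# Query the database info and extract the anchor (requires jq).
curl -s http://localhost:5052/lighthouse/database/info | jq .anchor
```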
NOTE: if you are upgrading from v5.1.222-exp you need to re-sync from scratch. The database upgrade will fail if attempted.
⚠️ Removed CLI Flags ⚠️
The following beacon node flags which were previously deprecated have been deleted. You must remove them from your beacon node arguments before updating to v6.0.0:
- `--self-limiter`
- `--http-spec-fork`
- `--http-allow-sync-stalled`
- `--disable-lock-timeouts`
- `--always-prefer-builder-payload`
- `--progressive-balances`
- `--disable-duplicate-warn-logs`
- `-l` (env logger)
The following validator client flags have also been deleted and must be removed before starting up:
- `--latency-measurement-service`
- `--disable-run-on-all`
- `--produce-block-v3`
In many cases the behaviour enabled by these flags has become the default and no replacement flag is necessary. If you would like to fine-tune some aspect of Lighthouse's behaviour, the full list of CLI flags is available in the Lighthouse book.
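Before upgrading, you may want to check your service configuration for the removed flags. A minimal sketch, assuming a systemd unit at an illustrative path:

```bash
# Print any removed beacon node flags found in the unit file; no output
# means nothing needs changing. The path is an example only.
grep -E -- '--(self-limiter|http-spec-fork|http-allow-sync-stalled|disable-lock-timeouts|always-prefer-builder-payload|progressive-balances|disable-duplicate-warn-logs)' \
  /etc/systemd/system/lighthousebeacon.service
```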
⚠️ Deprecated CLI Flags ⚠️
The following beacon node flags have been deprecated. You should remove them, but the beacon node will still start if they are provided.
- `--eth1`
- `--dummy-eth1`
The following global (BN and VC) flags have also been deprecated:
- `--terminal-total-difficulty-override`
- `--terminal-block-hash-override`
- `--terminal-block-hash-epoch-override`
- `--safe-slots-to-import-optimistically`
⚠️ Modified CLI Flags ⚠️
The beacon node flag `--purge-db` will now only delete the database in interactive mode, and requires manual confirmation. If it is provided in a non-interactive context, e.g. under `systemd` or `docker`, then it will have no effect: the beacon node will start without anything being deleted.
If you wish to use the old `--purge-db` behaviour, it is available via the flag `--purge-db-force`, which never asks for confirmation.
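For example, a non-interactive re-sync script might use the new flag like this (a sketch; the checkpoint URL is a placeholder):

```bash
# Delete the database without prompting, then checkpoint sync from scratch.
lighthouse bn \
  --purge-db-force \
  --checkpoint-sync-url https://checkpoint.example.com
```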
Network Optimisations
This release includes optimisations to Lighthouse's subnet subscription logic, updates to peer discovery, and fine-tuning of `IDONTWANT` and rate limiting.
Users should see similar performance with reduced bandwidth. Users running a large number of validators (1000+) on a single beacon node may notice a reduction in the number of subscribed subnets, but can opt in to subscribing to more subnets using `--subscribe-all-subnets` if desired (e.g. for marginally increasing block rewards from included attestations).
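For example (a sketch; note that this increases bandwidth usage):

```bash
# Opt in to subscribing to all attestation subnets.
lighthouse bn --subscribe-all-subnets
```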
Validator Client Fallback Optimisations
The beacon node fallback feature in the validator client has been refactored for greater responsiveness. Validator clients running with multiple beacon nodes will now switch more aggressively to the "healthiest" looking beacon node, where health status is determined by:
- Sync distance (head distance from the current slot).
- Execution layer (EL) health (whether the EL is online and not erroring).
- Optimistic sync status (whether the EL is syncing).
The impact of this change should be less downtime during upgrades, and better resilience to faulty or broken beacon nodes.
Users running majority clients should be aware that in the case of a faulty majority client, the validator client may prefer the faulty chain due to it appearing healthier. The best defence against this problem is to run some (or all) validator clients without any connection to a beacon node running a majority CL client or majority EL client.
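For example, a VC pointed at a local node plus two fallbacks will now prefer whichever of the three looks healthiest. The URLs below are placeholders:

```bash
# Sketch: a validator client with one local and two fallback beacon nodes.
lighthouse vc \
  --beacon-nodes http://localhost:5052,http://backup-1:5052,http://backup-2:5052
```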
New Validator Manager Commands
The Lighthouse validator manager is the recommended way to manage validators from the CLI, without having to shut down the validator client.
In this release it has gained several new capabilities:
- `lighthouse vm import`: now supports standard keystores generated by other tools like `staking-deposit-cli`.
- `lighthouse vm list`: a new read-only command to list the validator keys that a VC has imported.
- `lighthouse vm delete`: a new command to remove a key from a validator client, e.g. after exiting.
For details on these commands and available flags, see the docs: https://lighthouse-book.sigmaprime.io/validator-manager.html
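As a sketch, listing the validators a running VC has imported might look like the following. The URL, port and token path are illustrative; check `lighthouse vm list --help` for exact usage:

```bash
# List imported validator keys via the VC's HTTP API (default port 5062).
lighthouse vm list \
  --vc-url http://localhost:5062 \
  --vc-token ~/.lighthouse/mainnet/validators/api-token.txt
```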
Future releases will continue to expand the number of available commands, with the goal of eventually deprecating the previous `lighthouse account-manager` CLI.
Fetch Blobs Optimisation
This release supports the new `engine_getBlobsV1` API for accelerating the import of blocks with blobs. If the API is supported by your execution node, Lighthouse will use it to load blobs from the mempool without waiting for them to arrive on gossip. Our testing indicates that this will help Ethereum scale to a higher blob count, but we need more data from real networks before committing to a blob count increase.
There are several new Prometheus metrics to track the hit rate:
- `beacon_blobs_from_el_received_total`
- `beacon_blobs_from_el_expected_total`
- `beacon_blobs_from_el_hit_total`
Logs at debug level also show the operation of this new feature (`grep` for `fetch_blobs`).
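For a quick look at the hit rate, you can scrape the metrics endpoint directly, assuming metrics are enabled on the default port 5054:

```bash
# Compare received/expected/hit counts for blobs fetched from the EL.
curl -s http://localhost:5054/metrics | grep beacon_blobs_from_el
```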
At the time of writing, the following execution clients support the API:
- Reth v1.0.7 or newer.
- Besu v24.9.1 or newer.
- Geth v1.14.12 or newer.
Unsupported:
- Nethermind v1.29.x (buggy, see below).
- Erigon.
🐛 Known Issues 🐛
Nethermind v1.29.x `engine_getBlobsV1` bug
Nethermind versions v1.29.0 and v1.29.1 include an implementation of `engine_getBlobsV1` which is not compliant with the API specification and does not work with Lighthouse. For Nethermind users, we recommend delaying upgrading to Lighthouse v6.0.0 until a new Nethermind release is available.
The error log generated by Lighthouse when the buggy version of Nethermind is used is:
```
ERRO Error fetching or processing blobs from EL, block_root: 0xe18b4a4eee03c2bf781f5aa9d5e5a1d62b01a9be4e4fe4bab106f9463b73a80c, error: RequestFailed(EngineError(Api { error: Json(Error("invalid type: map, expected a sequence", line: 0, column: 0)) }))
```
This error log can be safely ignored, although it may cause a slight degradation to node performance as Lighthouse flip-flops the EL's state between healthy and unhealthy.
See NethermindEth/nethermind#7650 for details.
SigP Checkpoint Sync Server
The Sigma Prime checkpoint sync servers at `*.checkpoint.sigp.io` are currently running at reduced capacity. We are working on fixing this as quickly as possible. In the meantime, we recommend checkpoint syncing from one of the other publicly available providers.
Update Priority
This table provides priorities for which classes of users should update particular components.
| User Class | Beacon Node | Validator Client |
|---|---|---|
| Staking Users | Low | Low |
| Non-Staking Users | Low | --- |
See Update Priorities for more information about this table.
Lighthouse BNs and VCs from v6.0.0 and v5.x.y are compatible. However, we recommend that users update both the VC and BN to v6.0.0 if upgrading.
All changes
See full changelog here.
Binaries
See pre-built binaries documentation.
The binaries are signed with Sigma Prime's PGP key: 15E66D941F697E28F49381F426416DC3F30674B0
| System | Architecture | Binary | PGP Signature |
|---|---|---|---|
| macOS | x86_64 | lighthouse-v6.0.0-x86_64-apple-darwin.tar.gz | PGP Signature |
| Linux | x86_64 | lighthouse-v6.0.0-x86_64-unknown-linux-gnu.tar.gz | PGP Signature |
| Linux | aarch64 | lighthouse-v6.0.0-aarch64-unknown-linux-gnu.tar.gz | PGP Signature |
| Windows | x86_64 | lighthouse-v6.0.0-x86_64-windows.tar.gz | PGP Signature |
| System | Option | Resource |
|---|---|---|
| Docker | v6.0.0 | sigp/lighthouse |