Merge master into feature/1000k-assets #2069
Merged · tsachiherman merged 267 commits into algorand:feature/1000-assets from algorandskiy:pavel/feature/1000-assets on Apr 19, 2021
Conversation
Improve the speed of BlockEvaluator's eval() by pre-fetching account data, decreasing latency at various points during block accounting. Co-authored-by: Tsachi Herman <[email protected]>
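A minimal sketch of the pre-fetching idea, assuming hypothetical `Address`/`AccountData` types and a `lookup` callback in place of the real ledger interfaces:

```go
package prefetchsketch

import "sync"

// Address and AccountData are hypothetical stand-ins for the ledger's
// account types; they are not the real go-algorand definitions.
type Address string
type AccountData struct{ MicroAlgos uint64 }

// prefetchAccounts loads, in parallel, every account record a block is
// about to touch, so eval() finds them in a warm cache instead of
// paying a database round-trip mid-evaluation.
func prefetchAccounts(addrs []Address, lookup func(Address) AccountData) map[Address]AccountData {
	var mu sync.Mutex
	var wg sync.WaitGroup
	cache := make(map[Address]AccountData, len(addrs))
	for _, addr := range addrs {
		wg.Add(1)
		go func(a Address) {
			defer wg.Done()
			data := lookup(a) // potentially a slow disk read
			mu.Lock()
			cache[a] = data
			mu.Unlock()
		}(addr)
	}
	wg.Wait()
	return cache
}
```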
…nd#1867) Add a configuration flag to the loadgenerator utility to specify whether it should repeat after the first round.
Fix a few minor log string formatting issues.
…ge-2-4-1-stable 2.4.1-stable Re-merge
In order to support pseudo-archival relays, we need a two-tier approach to selecting peers in the catchup logic. This PR adds the `peerSelector` struct, which encapsulates this logic. It also adds an important performance feedback loop: the result of each request is fed back into subsequent peer selection. A faster peer is favored over a slower one, and a peer that failed to provide a past block drops to the bottom of the list.
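A toy illustration of the feedback loop; the real `peerSelector` in the catchup package tracks peer classes and richer statistics, so the names and ranking scheme here are made up:

```go
package catchupsketch

import "sort"

// peer is an illustrative stand-in for a network peer with a rank;
// lower rank means the peer is preferred for the next request.
type peer struct {
	address string
	rank    int
}

type peerSelector struct {
	peers []peer
}

// getNextPeer picks the best-ranked peer for the next block request.
func (s *peerSelector) getNextPeer() *peer {
	if len(s.peers) == 0 {
		return nil
	}
	sort.Slice(s.peers, func(i, j int) bool { return s.peers[i].rank < s.peers[j].rank })
	return &s.peers[0]
}

// rankPeer feeds a request's outcome back into future selections: a
// fast download earns a low (good) rank, while a failure to provide a
// past block sinks the peer to the bottom of the list.
func (s *peerSelector) rankPeer(p *peer, downloadMillis int, failed bool) {
	if failed {
		p.rank = int(^uint(0) >> 1) // max int: bottom of the list
		return
	}
	p.rank = downloadMillis
}
```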
Currently the `Accounts` array is not displayed properly in tealdbg. The debugger does not account for the special case that index 0 of the `Accounts` array is always the sender. This means that if the ForeignAccounts txn field contains `n` accounts, the `Accounts` array in tealdbg contains the sender's account followed by `n-1` of the ForeignAccounts. The last account never gets displayed. This PR fixes this problem by making tealdbg aware that the `Accounts` array has 1 more member (the sender's account) than `len(txn.Accounts)`. As a consequence of this, the sender's account will always appear at index 0 of the `Accounts` array in the debugger, even when the transaction's ForeignAccounts array is empty. This aligns with the behavior of the TEAL op `txna Accounts 0` always returning the sender's address, even in stateless TEAL.
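The shape of the fix, sketched with an illustrative helper (the function name is not tealdbg's own):

```go
package main

import "fmt"

// makeDebuggerAccounts mirrors the fix described above: the debugger's
// Accounts view is the sender followed by the transaction's
// ForeignAccounts, so its length is len(foreign)+1.
func makeDebuggerAccounts(sender string, foreign []string) []string {
	accounts := make([]string, 0, len(foreign)+1)
	accounts = append(accounts, sender) // index 0 is always the sender
	return append(accounts, foreign...)
}

func main() {
	fmt.Println(makeDebuggerAccounts("SENDER", []string{"ACCT1", "ACCT2"}))
	// Output: [SENDER ACCT1 ACCT2] -- txna Accounts 0 is the sender
}
```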
Switch to msgp 47, which eliminates the possibility of MarshalMsg returning an error.
…is (algorand#1877) A change that used Go tooling to get the Go version from go.mod relied on jq to parse the results. This breaks on platforms that don't have jq. This change removes the dependency and replaces it with the more widely available awk.
This change optimizes ID() and GetEncodedLength() for Transaction, SignedTxn, and SignedTxnInBlock. There are two sources of overhead that this change avoids: interface conversions (to msgp.Marshaler) and memory allocations (for temporary slices to hold the encoding). To avoid interface conversions, the code invokes the msgp-generated MarshalMsg() method directly. To avoid memory allocation overhead, the code uses a sync.Pool to track existing dirty slices that can be used for encoding. This seems to be more efficient than letting the Go GC manage slices, probably because the encoder is OK with dirty (non-zero) slices.
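A simplified sketch of the pooled-buffer pattern (the real code calls the msgp-generated MarshalMsg directly; `marshal` here is a stand-in for that method):

```go
package encodesketch

import "sync"

// encodingPool recycles dirty byte slices between encodings. Reusing a
// slice avoids a fresh allocation per call, and the encoder happily
// overwrites non-zero contents, so no zeroing is needed either.
var encodingPool = sync.Pool{
	New: func() interface{} { return make([]byte, 0, 4096) },
}

// encodedLength computes the encoded size of an object without keeping
// the temporary encoding alive, by borrowing a buffer from the pool.
// marshal appends the msgp encoding to buf and returns the result.
func encodedLength(marshal func(buf []byte) []byte) int {
	buf := encodingPool.Get().([]byte)
	out := marshal(buf[:0])
	encodingPool.Put(out[:0]) // return the (possibly grown) slice, dirty
	return len(out)
}
```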
This PR speeds up merklearray, both with some small absolute performance wins (avoiding crypto.HashObj overheads), as well as parallelism in hashing the leaves and the internal nodes. For tiny trees (10 elements where each element is something like 10-100 bytes), the parallelism adds a slight overhead (about 30-40 microseconds on my test machine, on top of a total runtime of 25-80 microseconds). But for anything larger (either more elements or more complex-to-hash elements), this ends up being an improvement. This might have some benefit for the current use of merklearray (namely, compact certificates), but should be even more useful for the upcoming use of merklearray for hashing all transactions in a block.
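A rough sketch of striped parallel leaf hashing, calling SHA-512/256 (the hash Algorand uses) directly rather than through crypto.HashObj; the worker layout is illustrative:

```go
package merklehashsketch

import (
	"crypto/sha512"
	"runtime"
	"sync"
)

// hashLeaves hashes each leaf on its own stripe of the slice, one
// goroutine per CPU. For tiny trees the goroutine setup costs tens of
// microseconds, matching the note above; for larger inputs the
// parallelism wins.
func hashLeaves(leaves [][]byte) [][sha512.Size256]byte {
	out := make([][sha512.Size256]byte, len(leaves))
	workers := runtime.NumCPU()
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := w; i < len(leaves); i += workers {
				out[i] = sha512.Sum512_256(leaves[i])
			}
		}(w)
	}
	wg.Wait()
	return out
}
```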
The memory profiler showed that the `LimitedReaderSlurper` is one of the bigger memory consumers on a relay. This behavior aligns with the original intent: preallocate memory so that we don't need to reallocate it on every message. The unintended consequence is that a relay with many active incoming connections might use more memory than it really needs. To address this, I have used a series of read buffers, dynamically allocated on a per-need basis. To retain the original performance characteristics, I set the size of the base buffer to be larger than the average message. Doing so allows us to avoid allocation on *most* messages, and to allocate buffers only when truly needed. Another advantage is that the allocated memory is of a fixed and small size; allocating smaller buffers performs better than allocating larger ones.
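A sketch of the tiered-buffer idea under assumed sizes (the real LimitedReaderSlurper's constants differ): the base buffer is reused across messages and sized above the average message, so overflow chunks are allocated only rarely:

```go
package slurpersketch

const baseSize = 4096     // assumed: larger than the average message
const overflowSize = 4096 // assumed: fixed, small overflow chunk

type slurper struct {
	base []byte // reused for every message; no per-message allocation
}

func newSlurper() *slurper { return &slurper{base: make([]byte, baseSize)} }

// buffers returns the storage for a message of msgSize bytes: just the
// base buffer for most messages, plus freshly allocated fixed-size
// overflow chunks for the rare oversized ones.
func (s *slurper) buffers(msgSize int) [][]byte {
	if msgSize <= baseSize {
		return [][]byte{s.base[:msgSize]}
	}
	bufs := [][]byte{s.base}
	for remaining := msgSize - baseSize; remaining > 0; remaining -= overflowSize {
		n := overflowSize
		if remaining < n {
			n = remaining
		}
		bufs = append(bufs, make([]byte, n))
	}
	return bufs
}
```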
This PR adds `AssetClosingAmount` to the `ApplyData`, conditioned on a new `ConsensusParam.EnableAssetCloseAmount`. A corresponding `AssetClosingAmount` was added to the REST API to align with the variable already present on indexer v2.
Adds "omitempty" to AssetCloseToAmount
This PR removes support for the Merkle transaction commitments that we used a long time ago (well before mainnet launch).
Add a user service for algod that is part of the tarball downloaded by updater. In addition, add functionality to update.sh to invoke systemctl with the --user flag to stop the user service. Installing the service as usual has not changed: sudo ./systemd-setup.sh $USER $GROUP Installing as a user service: ./systemd-setup-user.sh $USER
This PR creates a new consensus upgrade which enables the `EnableAssetCloseAmount` consensus parameter.
Using `goal network create` would create network directories containing a `consensus.json` file. That file would contain only the string `null`.
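The description above doesn't spell out the cause, but one plausible mechanism is Go's JSON encoding of a nil value; this snippet only demonstrates that language behavior:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A nil map marshals to the literal `null`, which is one way a
// consensus.json file could end up containing just that string.
func main() {
	var protocols map[string]int // nil: never assigned
	b, _ := json.Marshal(protocols)
	fmt.Println(string(b)) // prints: null
}
```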
…op (algorand#1896) The agreement mainLoop function was bootstrapping the Deadline with parameters based on `protocol.ConsensusCurrentVersion`. This might be incorrect when the binary supports protocols that haven't (yet) been adopted by the network.
…rand#1899) Improve the asset-misc e2e test by using a long unicode asset name.
The broadcastThread implementation was sub-optimal, and this PR addresses the issues we had there:
1. The broadcastThread now uses a single broadcasting go-routine rather than 4.
2. broadcastThread/innerBroadcast used to drop queued messages just because there were no current peers. Instead, it now holds off until a peer is available before queuing up the messages to the peers.
3. The peers array is updated only if there is enough "wait time" between consecutive messages. During a high message burst, the peers are not updated, to avoid taking the peers lock.
4. The node's broadcast call was replaced with a non-blocking one to align with previous behavior.
5. The single-thread implementation ensures that queued messages are sent to the underlying peers in the order in which they were enqueued. This, in turn, ensures message ID monotonicity. (See the sketch after this list.)
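A stripped-down sketch of the single-goroutine design (channel-backed queue, non-blocking enqueue). The real wsNetwork code additionally manages peer updates, locks, and holding messages until a peer is available; these types are illustrative:

```go
package broadcastsketch

type message struct {
	id   uint64 // monotonically increasing, assigned at enqueue time
	data []byte
}

type broadcaster struct {
	queue chan message
	send  func(message) // writes one message to every current peer
}

func newBroadcaster(send func(message)) *broadcaster {
	b := &broadcaster{queue: make(chan message, 1024), send: send}
	go b.loop() // the single broadcasting goroutine
	return b
}

// enqueue is non-blocking: if the queue is full the message is dropped
// rather than stalling the caller, matching the node's broadcast call.
func (b *broadcaster) enqueue(m message) bool {
	select {
	case b.queue <- m:
		return true
	default:
		return false
	}
}

// loop drains the queue in FIFO order; a single sender guarantees that
// peers see messages in enqueue order, keeping message IDs monotonic.
func (b *broadcaster) loop() {
	for m := range b.queue {
		b.send(m)
	}
}
```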
The agreement was printing out the following error message: ``` (1) unable to retrieve consensus version for round 8764517, defaulting to binary consensus version ``` The culprit was a recent change to the [agreement initialization code](algorand#1896) that would try to use the consensus version of the next round in order to find out some of the agreement runtime parameters. This PR improves the data.Ledger.ConsensusVersion logic, allowing it to "predict" future protocol versions, based on known and previously agreed upon block headers.
Improve the reliability of the `TestApplicationsUpgradeOverGossip` e2e test by using specific historical protocols and avoiding dynamic modification of current and/or future protocols.
…1902) The expect tests were calling `goal network start -r <root dir> -d <data dir>` and `goal network stop -r <root dir> -d <data dir>`. While this is not harmful in any way, neither of these commands does anything with the extra data directory parameter. Given that this parameter is completely ignored, there is no point in passing it in, as it would only mislead the reader of these tests.
master re-merge
Remove the deprecated network v1 fetcher service over websocket connections.
This will enable short proofs of transactions being present in a given block. The proof includes both the transaction ID (which is likely the most useful part, knowing that some transaction ID was committed), as well as the hash of the entire SignedTxn and ApplyData (in case some application wants to know what LogicSig.Args were used, or what the ApplyData was for rewards calculation). The Merkle tree uses the same merklearray code that's already used for compact certificates.
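A sketch of verifying such a proof against a standard Merkle authentication path. The real merklearray code uses domain-separation prefixes and its own layout, which are omitted here:

```go
package merklesketch

import (
	"bytes"
	"crypto/sha512"
)

// verifyProof checks a Merkle path: leaf is the hash of the committed
// element (e.g., the transaction ID plus the hash of the SignedTxn and
// ApplyData), index is its position among the leaves, and path lists
// sibling hashes from leaf level up to the root.
func verifyProof(root []byte, leaf []byte, index uint64, path [][]byte) bool {
	h := leaf
	for _, sibling := range path {
		var pair []byte
		if index&1 == 0 { // even index: we are the left child
			pair = append(append(pair, h...), sibling...)
		} else {
			pair = append(append(pair, sibling...), h...)
		}
		sum := sha512.Sum512_256(pair)
		h = sum[:]
		index >>= 1
	}
	return bytes.Equal(h, root)
}
```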
Attempt to make the GitHub integration a little friendlier:
1. There is a new links feature. Rather than using a comment in the questions template, give a link to the community forum.
2. There is a new security feature. Rather than using a comment in each template, give a link to the vulnerability disclosure page.
3. Use headers consistently instead of bold text in some templates.
4. Comment out all descriptions, so you don't have to delete anything when creating a new issue.
…lgorand#1970) Add comments to the disassembly of constant load lines to show the constant being loaded.
TEAL documentation: add extra docs for pushbytes and pushint.
Existing code waits 30 seconds between sending SIGTERM and SIGKILL. When sending SIGKILL, the process is deleted but the pid file remains on disk. Leaving the pid file on disk could cause subsequent failures; we can easily avoid them by clearing this file if we SIGKILL'ed the process.
…orand#2031)
## Overview
Synchronize `testing.T` access across go-routines to avoid generating a data race in case of asynchronous node crash feedback.
## Summary
When the node controller notices that the node has exited, it reports that back to the test fixture. However, the test fixture cannot report it to the underlying test, since doing so would create a data race: `testing.T` is not meant to support concurrency. To address that, this PR provides an abstraction over `testing.T` called `TestingTB`, which retains the same functionality but uses a mutex to synchronize access.
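A minimal sketch of the synchronized-wrapper idea, showing only two of testing.T's methods; the real TestingTB abstraction covers the full interface:

```go
package fixturesketch

import (
	"sync"
	"testing"
)

// synchTest wraps *testing.T behind a mutex so that a goroutine
// reporting an asynchronous node crash and the test body itself never
// touch the testing.T concurrently. The type name is illustrative.
type synchTest struct {
	mu sync.Mutex
	t  *testing.T
}

func (s *synchTest) Errorf(format string, args ...interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.t.Errorf(format, args...)
}

func (s *synchTest) Logf(format string, args ...interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.t.Logf(format, args...)
}
```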
## Summary
When we send a large message (i.e., a proposal payload), we sometimes realize after starting to send it that we don't need to send it after all (e.g., the proposal isn't the best one, or the payload is malformed). We therefore want to be able to cancel sending a message after it has been enqueued or has started to be sent. This PR adds the ability to broadcast an array of messages, and to cancel sending after any subset of the messages in the array has been sent.
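One way to picture cancel-after-enqueue, sketched here with contexts; the actual PR implements cancellation inside the websocket network layer rather than via context, so this only illustrates the shape of the idea:

```go
package sendsketch

import "context"

// outMsg pairs a queued payload with a context so the sender can tell
// whether the message was cancelled between enqueue and write.
type outMsg struct {
	ctx  context.Context
	data []byte
}

// sendLoop drains the queue, skipping any message whose context was
// cancelled after it was enqueued (e.g., a proposal that turned out
// not to be the best one).
func sendLoop(queue <-chan outMsg, write func([]byte)) {
	for m := range queue {
		select {
		case <-m.ctx.Done():
			continue // cancelled after enqueue; never hits the wire
		default:
			write(m.data)
		}
	}
}
```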
Add a pingpong mode that creates lots of assets, each with total amount 1 (a.k.a. NFTs).
Script for repeatedly snapshotting the algod heap profile; creates snapshot SVG reports and differential SVG reports.
…gorand#2027) ledger: split committed round range by consensus protocol version
## Summary
* App logic needs to ignore the result and error from the logic evaluator, but fail on other errors
* The bug was introduced in the app refactor PR
* Added unit and e2e tests
## Summary
* Opting in does not allocate the key/value map
* The state delta merging code assumed the map in the old ApplyData was allocated
* Added that missed allocation, and a test verifying the combinations
* Inspected and removed TODOs from applyStorageDelta
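The shape of the bug and fix, sketched with hypothetical types (the real delta structures live in the ledger package):

```go
package storagesketch

// storageDelta is a hypothetical stand-in for the app key/value delta.
type storageDelta struct {
	kv map[string][]byte
}

// merge applies d into old. Opting in produces a delta whose map is
// nil, and writing into a nil map panics, so the map must be allocated
// first -- the allocation the original merging code was missing.
func (old *storageDelta) merge(d *storageDelta) {
	if old.kv == nil {
		old.kv = make(map[string][]byte, len(d.kv))
	}
	for k, v := range d.kv {
		old.kv[k] = v
	}
}
```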
Fix node panicking during shutdown due to unsynchronized compactcert database access.
…lgorand#2049) When using the MerkleTrie, the trie avoids loading pages from disk that aren't needed. In particular, it won't load the latest page (known as the deferred page) until it needs to commit it. The implementation had a bug in the `loadPage` method, where it would reset the loading-page flag incorrectly.
On ARM32, all 64-bit atomic operations must use a 64-bit aligned address. This PR moves the requestNonce to a 64-bit aligned address.
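The standard Go remedy, sketched below: place the 64-bit field first in the struct, since Go guarantees 64-bit alignment for the first word of an allocated struct. Field names here are illustrative, not the real ones:

```go
package alignsketch

import "sync/atomic"

// requestTracker keeps its atomically accessed counter as the first
// field: on 32-bit ARM, 64-bit atomics require 64-bit alignment, and
// Go only guarantees that for the first word in an allocated struct.
type requestTracker struct {
	requestNonce uint64 // must stay first for ARM32 alignment
	name         string
	misc         int32
}

func (rt *requestTracker) nextNonce() uint64 {
	return atomic.AddUint64(&rt.requestNonce, 1)
}
```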
Filter out useless verbose output from automated testing.
This PR extends the `catchpointdump` utility, adding the ability to scan a tracker database and retrieve Merkle trie statistics. The statistics themselves aren't very useful on their own, but the scan process verifies the structure of the Merkle trie, allowing us to verify the consistency of a given database.
The message pack generator is very noisy; it tends to emit a lot of messages that aren't useful. This PR caches the output of the message pack generator so that it is only presented in case of an error.
… being installed. (algorand#2056) Ensure the participation key database is being correctly closed after being installed.
Summary
Merge + fixes