feat(twap): prune records over multiple blocks #7427
@@ -11,6 +11,13 @@ import (
 	"github.com/osmosis-labs/osmosis/v23/x/twap/types"
 )

+// NumRecordsToPrunePerBlock is the number of records to prune per block.
+// Two records are deleted per incentive record:
+// 1. by time index
+// 2. by pool index
+// Therefore, setting this to 1000 means 500 complete incentive records are deleted per block.

Review comment: nit: twap records
Review comment: Can we elaborate on the choice here? Why 1000/500?

+var NumRecordsToPrunePerBlock uint16 = 1000
+
 type timeTooOldError struct {
 	Time time.Time
 }
@@ -73,15 +80,22 @@ func (k Keeper) StoreHistoricalTWAP(ctx sdk.Context, twap types.TwapRecord) {
 // So, in order to have correct behavior for the desired guarantee,
 // we keep the newest record that is older than the pruning time.
 // This is why we would keep the -50 hour and -1hour twaps despite a 48hr pruning period
-func (k Keeper) pruneRecordsBeforeTimeButNewest(ctx sdk.Context, lastKeptTime time.Time) error {
+//
+// If we reach the per block pruning limit, we store the last key seen in the pruning state.
+// This is so that we can continue pruning from where we left off in the next block.
+// If we have pruned all records, we set the pruning state to not pruning.
+// There is a small bug here where we store more seenPoolAssetTriplets than we need to.
+// Issue added here: https://github.com/osmosis-labs/osmosis/issues/7435
+// The bloat is minimal though, and is not at risk of getting out of hand.
+func (k Keeper) pruneRecordsBeforeTimeButNewest(ctx sdk.Context, state types.PruningState) error {
 	store := ctx.KVStore(k.storeKey)

 	// Reverse iterator guarantees that we iterate through the newest per pool first.
 	// Due to how it is indexed, we will only iterate times starting from
 	// lastKeptTime exclusively down to the oldest record.
 	iter := store.ReverseIterator(
 		[]byte(types.HistoricalTWAPTimeIndexPrefix),
-		types.FormatHistoricalTimeIndexTWAPKey(lastKeptTime, 0, "", ""))
+		state.LastKeySeen)
 	defer iter.Close()

 	// We mark what (pool id, asset 0, asset 1) triplets we've seen.
@@ -93,6 +107,8 @@ func (k Keeper) pruneRecordsBeforeTimeButNewest(ctx sdk.Context, lastKeptTime ti
 	}
 	seenPoolAssetTriplets := map[uniqueTriplet]struct{}{}

+	var numPruned uint16
+
 	for ; iter.Valid(); iter.Next() {
 		timeIndexKey := iter.Key()
 		timeS, poolId, asset0, asset1, err := types.ParseFieldsFromHistoricalTimeKey(timeIndexKey)
@@ -117,6 +133,24 @@ func (k Keeper) pruneRecordsBeforeTimeButNewest(ctx sdk.Context, lastKeptTime ti
 		store.Delete(timeIndexKey)
 		poolIndexKey := types.FormatHistoricalPoolIndexTWAPKeyFromStrTime(poolId, asset0, asset1, timeS)
 		store.Delete(poolIndexKey)
+
+		// Increment the number of records pruned by 2, since we delete two records per iteration.
+		numPruned += 2
+
+		if numPruned >= NumRecordsToPrunePerBlock {
+			// We have hit the limit, so we stop pruning.
+			break
+		}
 	}
+
+	if !iter.Valid() {
+		// The iterator is exhausted, so we have pruned all records.
+		state.IsPruning = false
+		k.SetPruningState(ctx, state)
+	} else {
+		// We have not pruned all records, so we update the last key seen.
+		state.LastKeySeen = iter.Key()
+		k.SetPruningState(ctx, state)

Review thread on lines +151 to +153:
Review comment: I may have missed it, but I don't think we have any logic testing this code path (or rather that keys from this are working as we want). From reading the code, this looks right to me!
Review comment: We don't, I will add before merge.
Review comment: This test shows pruning happening over multiple blocks: f84a716. It verifies …

+	}
 	return nil
 }
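To make the resume-across-blocks behavior easier to see in one place, below is a small, self-contained toy model of the same idea: walk prunable keys newest-first, count two deletions per TWAP record, stop once the per-block cap is hit, and remember where to resume. This is an illustrative sketch only; pruneStep, pruningState, the cap of 4, and the string keys are invented for the example and are not the Osmosis keeper code, which operates on the KV store through the reverse iterator shown above.

// Toy model of the resumable pruning introduced in this PR (not the real keeper code).
package main

import (
	"fmt"
	"sort"
)

// Two index entries are deleted per TWAP record, so a cap of 4 deletions
// allows 2 complete records per "block" in this toy.
const numRecordsToPrunePerBlock = 4

type pruningState struct {
	isPruning   bool
	lastKeySeen string // resume point: keys at or above this are not touched
}

// pruneStep mimics one block's worth of work: it walks keys strictly below
// lastKeySeen in reverse order, deleting until the per-block cap is hit or
// nothing prunable is left.
func pruneStep(store map[string]struct{}, state *pruningState) {
	var keys []string
	for k := range store {
		if k < state.lastKeySeen {
			keys = append(keys, k)
		}
	}
	// Reverse order approximates the reverse iterator over the time index.
	sort.Sort(sort.Reverse(sort.StringSlice(keys)))

	numPruned := 0
	for _, k := range keys {
		delete(store, k) // time index entry
		// The real code also deletes the matching pool index entry,
		// which is why each record counts as two deletions.
		numPruned += 2
		if numPruned >= numRecordsToPrunePerBlock {
			// Cap hit: remember where to resume next block and stop.
			state.lastKeySeen = k
			return
		}
	}
	// Nothing left below the resume point: pruning is complete.
	state.isPruning = false
}

func main() {
	// Five prunable records (t01..t05) plus one newer record (t90) to keep.
	store := map[string]struct{}{
		"t01": {}, "t02": {}, "t03": {}, "t04": {}, "t05": {}, "t90": {},
	}
	state := &pruningState{isPruning: true, lastKeySeen: "t50"}

	for block := 1; state.isPruning; block++ {
		pruneStep(store, state)
		fmt.Printf("after block %d: %d keys left, pruning in progress = %v\n",
			block, len(store), state.isPruning)
	}
}

One edge case worth noting in the toy: when the cap lands exactly on the last prunable record, completion is only detected on the next call, which finds nothing below the resume point and flips isPruning off.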
Review thread:
Review comment: This helper func is no longer needed since we determine the lastKeptTime at epoch end and store it in a state entry.
Review comment: Seems like we should still keep this function so we can unit test it? (Just move the logic into here.)
Review comment: Or do you think the current test is good / simple enough?
Review comment: I can reimplement the test. I thought the state.go test tests the same thing, but I can double check.
Review comment: Yeah, this is tested in even more depth in osmosis/x/twap/store_test.go (lines 321 to 555 in 81e7ebe). Deleting the func along with the test is fine imo.
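On the testing question in this thread, the shape of a multi-block assertion can be sketched against the toy model above rather than the real store_test.go suite (whose setup is not reproduced here). The test below is hypothetical and assumes pruneStep and pruningState from the earlier sketch live in the same package; it checks that pruning takes several blocks and that only the newest record survives.

// Illustrative multi-block pruning test against the toy model above
// (assumes pruneStep and pruningState from that sketch are in the same package).
package main

import "testing"

func TestPruningSpansMultipleBlocks(t *testing.T) {
	// Five prunable records and one newer record that must survive.
	store := map[string]struct{}{
		"t01": {}, "t02": {}, "t03": {}, "t04": {}, "t05": {}, "t90": {},
	}
	state := &pruningState{isPruning: true, lastKeySeen: "t50"}

	blocks := 0
	for state.isPruning {
		pruneStep(store, state)
		blocks++
	}

	// With 5 prunable records and a cap of 2 records per block,
	// pruning should take 3 blocks: 2 + 2 + 1.
	if blocks != 3 {
		t.Fatalf("expected pruning to span 3 blocks, got %d", blocks)
	}
	if _, ok := store["t90"]; !ok || len(store) != 1 {
		t.Fatalf("expected only the newest record to remain, got %v", store)
	}
}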