BB-295 - bootstrap ts compilation #2337

Open
wants to merge 85 commits into base: development/8.6

Changes from all commits
85 commits
1250c7c
BB-400: bump mongodb to 4.4.21
williamlardier May 23, 2023
a3deac2
Merge branch 'improvement/BB-424-bump-8.6.24' into tmp/octopus/w/8.7/…
bert-e Jun 23, 2023
651e77f
Merge branch 'bugfix/BB-415' into tmp/octopus/w/8.7/bugfix/BB-415
bert-e Jun 23, 2023
1aa6af6
Merge branch 'bugfix/BB-366/addGCReadAccountIDs' into tmp/octopus/w/8…
bert-e Jun 26, 2023
7e36c6b
Merge branches 'w/8.7/bugfix/BB-415' and 'q/2424/8.6/bugfix/BB-415' i…
bert-e Jun 27, 2023
8bae7da
Merge branches 'w/8.7/bugfix/BB-366/addGCReadAccountIDs' and 'q/2410/…
bert-e Jun 30, 2023
c9da303
Merge branch 'w/8.6/bugfix/BB-425/order' into tmp/octopus/w/8.7/bugfi…
bert-e Jul 5, 2023
4d38929
Merge branch 'w/8.6/bugfix/BB-425/bump' into tmp/octopus/w/8.7/bugfix…
bert-e Jul 7, 2023
d0c7637
Merge remote-tracking branch 'origin/w/8.6/bugfix/BB-419-increaseNoti…
jonathan-gramain Jul 12, 2023
1ce81e6
Merge branch 'improvement/BB-427-remove-zero-byte-skip-for-dmf' into …
bert-e Jul 24, 2023
b85a135
Merge branch 'feature/BB-189-support-archived-object-delete' into tmp…
bert-e Jul 28, 2023
efa9ca6
Merge branches 'w/8.7/feature/BB-189-support-archived-object-delete' …
bert-e Jul 28, 2023
5e8356c
Merge branch 'feature/BB-429' into tmp/octopus/w/8.7/feature/BB-429
bert-e Jul 31, 2023
967801f
Merge branch 'w/8.6/bugfix/BB-431/get-backbeat-client' into tmp/octop…
bert-e Aug 7, 2023
840f4cc
Merge branch 'w/8.6/improvement/BB-428/lifecycle-listings' into tmp/o…
bert-e Aug 8, 2023
96f034c
Merge branch 'improvement/BB-430-improve-oplog-rule-lifecycle' into t…
bert-e Aug 11, 2023
1def2e9
Merge branches 'w/8.7/improvement/BB-430-improve-oplog-rule-lifecycle…
bert-e Aug 11, 2023
a19c37e
Merge branch 'w/8.6/improvement/BB-433/bump' into tmp/octopus/w/8.7/i…
bert-e Aug 17, 2023
6930895
Merge branch 'improvement/BB-432-remove-useless-kafka-connectors' int…
bert-e Aug 29, 2023
a06a103
Merge branch 'w/8.6/improvement/BB-434/newerNoncurrentVersions' into …
bert-e Sep 1, 2023
87667e1
Merge branch 'w/8.6/improvement/BB-434/newerNoncurrentVersions' into …
bert-e Sep 1, 2023
4c18dd2
Merge branch 'bugfix/BB-435' into tmp/octopus/w/8.7/bugfix/BB-435
bert-e Sep 5, 2023
fc629a4
Merge branches 'w/8.7/bugfix/BB-435' and 'q/2444/8.6/bugfix/BB-435' i…
bert-e Sep 6, 2023
93f8d0d
Merge branch 'w/8.6/feature/BB-413-circuit-breaker-backport' into tmp…
bert-e Sep 14, 2023
42cc186
Merge branch 'w/8.6/feature/BB-413-circuit-breaker-backport' into tmp…
bert-e Sep 14, 2023
fb16e04
Merge branch 'w/8.6/feature/BB-413-circuit-breaker-backport' into tmp…
bert-e Sep 14, 2023
c5417fd
Merge branch 'improvement/BB-436' into tmp/octopus/w/8.7/improvement/…
bert-e Sep 21, 2023
bec53ee
Merge branches 'w/8.7/improvement/BB-436' and 'q/2445/8.6/improvement…
bert-e Sep 26, 2023
8724618
Merge branch 'improvement/BB-438' into tmp/octopus/w/8.7/improvement/…
bert-e Oct 4, 2023
797286c
Merge branch 'bugfix/BB-448/could-not-get-location-configuration-erro…
bert-e Oct 11, 2023
df6840a
Merge branch 'bugfix/BB-437' into tmp/octopus/w/8.7/bugfix/BB-437
bert-e Oct 13, 2023
9fdf145
Merge branch 'improvement/BB-439' into tmp/octopus/w/8.7/improvement/…
bert-e Oct 13, 2023
9f25d10
Merge branch 'w/8.6/feature/BB-451-delete-notification-fields' into t…
bert-e Oct 13, 2023
93c7b8d
Merge branches 'w/8.7/improvement/BB-439' and 'q/2451/8.6/improvement…
bert-e Oct 16, 2023
b281fc0
Merge branch 'bugfix/BB-445' into tmp/octopus/w/8.7/bugfix/BB-445
bert-e Oct 16, 2023
6f453e7
Merge branches 'w/8.7/bugfix/BB-445' and 'q/2452/8.6/bugfix/BB-445' i…
bert-e Oct 16, 2023
d226698
Merge branch 'w/8.6/bugfix/BB-450/replication' into tmp/octopus/w/8.7…
bert-e Oct 17, 2023
e662e1d
Merge branch 'w/8.6/bugfix/BB-450/replication' into tmp/octopus/w/8.7…
bert-e Oct 17, 2023
9a6a22f
Merge branch 'improvement/BB-443' into tmp/octopus/w/8.7/improvement/…
bert-e Oct 17, 2023
935f7f8
Merge branch 'w/8.6/improvement/BB-450/bump' into tmp/octopus/w/8.7/i…
bert-e Oct 18, 2023
cffab7c
Merge branch 'improvement/BB-453-call-getmetadata-after-error-consume…
bert-e Oct 18, 2023
5611b49
Merge branches 'w/8.7/improvement/BB-453-call-getmetadata-after-error…
bert-e Oct 18, 2023
726be8e
Merge branch 'bugfix/BB-459-forward-signals' into tmp/octopus/w/8.7/b…
bert-e Oct 20, 2023
c46fe6a
Remove default behaviour of selecting the first cold location on expi…
Kerkesni Oct 20, 2023
9664b3e
add lint rule to avoid exclusive tests
Kerkesni Oct 23, 2023
06b4532
Merge branch 'bugfix/BB-457-oplog-disabled-cold-transition' into tmp/…
bert-e Oct 24, 2023
aed1556
Merge branches 'w/8.7/bugfix/BB-457-oplog-disabled-cold-transition' a…
bert-e Oct 24, 2023
6371ed2
Merge branch 'improvement/BB-460-Check-error-code-before-getting-meta…
bert-e Oct 24, 2023
eb95ffc
Merge branches 'w/8.7/improvement/BB-460-Check-error-code-before-gett…
bert-e Oct 24, 2023
9ab7e24
Merge branch 'w/8.6/bugfix/BB-462/lifecycle' into tmp/octopus/w/8.7/b…
bert-e Oct 25, 2023
45f32b5
Merge branch 'bugfix/BB-463-change-connector-partition-name-on-restar…
bert-e Oct 26, 2023
dbbcf68
Merge branches 'w/8.7/bugfix/BB-463-change-connector-partition-name-o…
bert-e Oct 26, 2023
23db209
Merge branch 'bugfix/BB-441' into tmp/octopus/w/8.7/bugfix/BB-441
bert-e Oct 27, 2023
a3379f8
Merge branches 'w/8.7/bugfix/BB-441' and 'q/2463/8.6/bugfix/BB-441' i…
bert-e Oct 27, 2023
7b932c4
Merge branch 'improvement/BB-417' into q/8.7
bert-e Oct 27, 2023
18c96ab
Merge branch 'bugfix/BB-440' into tmp/octopus/w/8.7/bugfix/BB-440
bert-e Oct 27, 2023
fe24b56
Merge branches 'w/8.7/bugfix/BB-440' and 'q/2466/8.6/bugfix/BB-440' i…
bert-e Oct 27, 2023
b9715e0
Merge branch 'bugfix/BB-449' into tmp/octopus/w/8.7/bugfix/BB-449
bert-e Oct 27, 2023
69a6f74
Merge branches 'w/8.7/bugfix/BB-449' and 'q/2467/8.6/bugfix/BB-449' i…
bert-e Oct 27, 2023
e27dce6
Merge branch 'bugfix/BB-455-clean-shutdown-backbeat-consumer' into tm…
bert-e Oct 30, 2023
daa2c72
Merge branches 'w/8.7/bugfix/BB-455-clean-shutdown-backbeat-consumer'…
bert-e Oct 30, 2023
850b910
Merge branch 'w/8.6/feature/BB-452-delete-notification-versionid' int…
bert-e Nov 2, 2023
e6754d3
BB-295 - introduce typescript and jest
Oct 10, 2022
6aa3903
BB-295 - switch to jest
Oct 11, 2022
97bc084
BB-295 - use compiled output in tests
Oct 10, 2022
1183125
BB-295 - catch up jest CLI differences
Oct 10, 2022
bb3b7ed
BB-295 - fix eslint
Oct 10, 2022
62c0149
BB-295 - use ts-jest
Oct 10, 2022
3a6c067
BB-295 - streamline syntheticbucketd container build
Oct 10, 2022
7bab447
BB-295 - no parallelism in tests
Oct 11, 2022
9647a57
BB-295 - adapt to jest API
Oct 11, 2022
84979de
BB-295 - stop leaking redis connections in BBAPI tests
Oct 12, 2022
b7c8c17
BB-295 - stop leaking redis connection in qp tests
Oct 12, 2022
4ae2001
BB-295 - sample migration
Oct 13, 2022
a83468c
BB-295 - fix eslint with ts files
Oct 13, 2022
52e3de1
BB-295 - move tests to .spec.js
Oct 13, 2022
7a77874
BB-295 - fix missed tests
Oct 14, 2022
7e42bbe
BB-295 - use jest with expression filters instead of paths
Oct 14, 2022
6a80098
BB-295 - set forceExit on every jest launch
Oct 14, 2022
579be61
BB-295 - fix image build issue
Kerkesni Nov 23, 2022
3e15903
BB-295 - fix functional tests
Kerkesni Dec 15, 2022
897c063
Fix syntheticbucketd dockerfile to support caching deps in layer
francoisferrand Mar 1, 2023
3fdd455
Fix jest lint warning
francoisferrand Nov 6, 2023
8f77206
Reenable bucket conductor test with bucketd
francoisferrand Nov 7, 2023
102140b
Reenable disabled tests
francoisferrand Nov 7, 2023
46 changes: 40 additions & 6 deletions .eslintrc
Original file line number Diff line number Diff line change
@@ -1,8 +1,15 @@
{
"extends": "scality",
"extends": [
"scality",
"plugin:jest/recommended"
],
"parserOptions": {
"ecmaVersion": 2020
},
"plugins": [
"jest",
"mocha"
],
"rules": {
"object-curly-newline": "off",
"import/newline-after-import": "off",
@@ -33,13 +40,40 @@
"space-unary-ops": "off",
"no-undef-init": "off",
"newline-per-chained-call": "off",
"no-useless-escape": "off"
"no-useless-escape": "off",
"mocha/no-exclusive-tests": "error",
"jest/no-done-callback": "off",
"jest/expect-expect": [
"warn",
{
"assertFunctionNames": [
"expect",
"assert",
"assert.*",
"_assertCredentials"
]
}
],
"import/extensions": [
"error",
"ignorePackages",
{
"js": "never",
"jsx": "never",
"ts": "never",
"tsx": "never"
}
]
},
"env": {
"jest/globals": true
},
"settings": {
"import/resolver": {
"node": {
"paths": ["/backbeat/node_modules", "node_modules"]
}
"node": {
"paths": ["/backbeat/node_modules", "node_modules"],
"extensions": [".js", ".jsx", ".ts", ".tsx"]
}
}
}
}
}
2 changes: 1 addition & 1 deletion .github/dockerfiles/mongodb/Dockerfile
@@ -1,4 +1,4 @@
FROM mongo:4.2.24
FROM mongo:4.4.21

ENV USER=scality \
HOME_DIR=/home/scality \
12 changes: 6 additions & 6 deletions .github/workflows/tests.yaml
@@ -44,8 +44,7 @@ jobs:
uses: docker/build-push-action@v4
with:
push: true
context: .
file: .github/dockerfiles/syntheticbucketd/Dockerfile
context: tests/utils/syntheticbucketd
tags: "ghcr.io/scality/backbeat/syntheticbucketd:${{ github.sha }}"
cache-from: type=gha,scope=syntheticbucketd
cache-to: type=gha,mode=max,scope=syntheticbucketd
@@ -105,6 +104,8 @@ jobs:
cache: yarn
- name: Install node dependencies
run: yarn install --ignore-engines --frozen-lockfile --network-concurrency 1
- name: Compile TypeScript
run: yarn build
- name: Install ginkgo
run: go get github.com/onsi/ginkgo/ginkgo@${GINKGO_VERSION}
- name: Lint markdown
@@ -147,11 +148,10 @@
BACKBEAT_CONFIG_FILE: "tests/config.json"
- name: run backbeat notification feature tests
run: yarn run ft_test:notification

- name: run ballooning tests for lifecycle conductor
run: yarn mocha tests/performance/lifecycle/conductor-check-memory-balloon.js
run: .github/scripts/run_ft_tests.bash perf_test:lifecycle
env:
# Constrain heap long-lived heap size to 150MB, so that pushing 200K messages
# Constrain heap long-lived heap size to 200MB, so that pushing 200K messages
# will crash if they end up in memory all at the same time (circuit breaking
# ineffective) while waiting to be committed to the kafka topic.
NODE_OPTIONS: '--max-old-space-size=150'
NODE_OPTIONS: '--max-old-space-size=200'
3 changes: 3 additions & 0 deletions .gitignore
@@ -15,3 +15,6 @@ node_modules

# Redis
*.rdb

# TypeScript compilation output
dist/
20 changes: 13 additions & 7 deletions Dockerfile
@@ -29,24 +29,30 @@ RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSI
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz

COPY package.json yarn.lock /usr/src/app/
RUN yarn install --ignore-engines --frozen-lockfile --production --network-concurrency 1 \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf ~/.node-gyp \
&& rm -rf /tmp/yarn-*

RUN yarn install --ignore-engines --frozen-lockfile --network-concurrency 1

COPY . /usr/src/app

RUN yarn build

RUN rm -rf node_modules \
&& yarn install --production --frozen-lockfile --ignore-optional

################################################################################
FROM node:${NODE_VERSION}

RUN apt-get update && \
apt-get install -y --no-install-recommends \
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
jq \
tini \
&& rm -rf /var/lib/apt/lists/*

WORKDIR /usr/src/app

# Keep the .git directory in order to properly report version
COPY . /usr/src/app
COPY ./package.json ./docker-entrypoint.sh ./
COPY --from=builder /usr/src/app/dist ./
COPY --from=builder /usr/src/app/node_modules ./node_modules/
COPY --from=builder /usr/local/bin/dockerize /usr/local/bin/

22 changes: 14 additions & 8 deletions extensions/lifecycle/tasks/LifecycleUpdateExpirationTask.js
@@ -4,7 +4,6 @@ const { errors } = require('arsenal');
const ObjectMD = require('arsenal').models.ObjectMD;
const BackbeatTask = require('../../../lib/tasks/BackbeatTask');
const ActionQueueEntry = require('../../../lib/models/ActionQueueEntry');
const locations = require('../../../conf/locationConfig.json') || {};

class LifecycleUpdateExpirationTask extends BackbeatTask {
/**
@@ -144,13 +143,20 @@ class LifecycleUpdateExpirationTask extends BackbeatTask {
});

async.waterfall([
next => this._getMetadata(entry, log, next),
(objMD, next) => {
const coldLocation = entry.getAttribute('target.location') ||
// If location not specified, use the first (and only) location
// This is a temporary fix, until Sorbet is fixed to provide the information
Object.keys(locations).find(name => locations[name].isCold);

next => {
const coldLocation = entry.getAttribute('target.location');
if (!coldLocation) {
// this should never happen as sorbet always sets the location attribute
log.error('missing target location', {
entry: entry.getLogInfo(),
method: 'LifecycleUpdateExpirationTask.processActionEntry',
});
return next(errors.MissingParameter.customizeDescription('missing target location'));
}
return next(null, coldLocation);
},
(coldLocation, next) => this._getMetadata(entry, log, (err, objMD) => next(err, coldLocation, objMD)),
(coldLocation, objMD, next) => {
const archive = objMD.getArchive();

// Confirm the object has indeed expired: it can happen that the
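The hunk above replaces the old fallback (silently picking the first cold location from `locationConfig.json`) with an explicit error when `target.location` is absent. A minimal sketch of the new guard, using a hypothetical `ActionEntry` shape in place of the real `ActionQueueEntry`:

```typescript
// Hypothetical minimal shape of an action entry; the real class is
// ActionQueueEntry from lib/models/ActionQueueEntry.
type ActionEntry = { getAttribute(path: string): string | undefined };

// Fail fast when Sorbet did not set target.location, instead of guessing
// a cold location from config (the removed fallback).
function resolveColdLocation(entry: ActionEntry): string {
    const coldLocation = entry.getAttribute('target.location');
    if (!coldLocation) {
        // previously: Object.keys(locations).find(n => locations[n].isCold)
        throw new Error('MissingParameter: missing target location');
    }
    return coldLocation;
}

console.log(resolveColdLocation({ getAttribute: () => 'dmf-cold' })); // "dmf-cold"
```

Failing early surfaces a misbehaving producer immediately rather than expiring data to an arbitrary cold location.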
6 changes: 3 additions & 3 deletions extensions/replication/queueProcessor/QueueProcessor.js
@@ -459,18 +459,18 @@
*/
_setupRedis(redisConfig) {
// redis pub/sub for pause/resume
const redis = new Redis(redisConfig);
this._redis = new Redis(redisConfig);
// redis subscribe to site specific channel
const channelName = `${this.repConfig.topic}-${this.site}`;
redis.subscribe(channelName, err => {
this._redis.subscribe(channelName, err => {
if (err) {
this.logger.fatal('queue processor failed to subscribe to ' +
`crr redis channel for location ${this.site}`,
{ method: 'QueueProcessor.constructor',
error: err });
process.exit(1);
}
redis.on('message', (channel, message) => {
this._redis.on('message', (channel, message) => {
const validActions = {
pauseService: this._pauseService.bind(this),
resumeService: this._resumeService.bind(this),
2 changes: 1 addition & 1 deletion extensions/replication/tasks/ReplicateObject.js
@@ -8,7 +8,7 @@ const ObjectMDLocation = require('arsenal').models.ObjectMDLocation;
const BackbeatClient = require('../../../lib/clients/BackbeatClient');
const BackbeatMetadataProxy = require('../../../lib/BackbeatMetadataProxy');

const mapLimitWaitPendingIfError = require('../../../lib/util/mapLimitWaitPendingIfError');
const mapLimitWaitPendingIfError = require('../../../lib/util/mapLimitWaitPendingIfError').default;
const { attachReqUids, TIMEOUT_MS } = require('../../../lib/clients/utils');
const getExtMetrics = require('../utils/getExtMetrics');
const BackbeatTask = require('../../../lib/tasks/BackbeatTask');
2 changes: 1 addition & 1 deletion lib/BackbeatConsumer.js
@@ -8,7 +8,7 @@ const Logger = require('werelogs').Logger;
const { BreakerState, CircuitBreaker } = require('breakbeat').CircuitBreaker;

const BackbeatProducer = require('./BackbeatProducer');
const OffsetLedger = require('./OffsetLedger');
const OffsetLedger = require('./OffsetLedger').default;
const KafkaBacklogMetrics = require('./KafkaBacklogMetrics');
const {
startCircuitBreakerMetricsExport,
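The `.default` added to these requires follows from how TypeScript compiles `export default` under CommonJS: the compiled module exposes `{ __esModule: true, default: ... }` rather than assigning `module.exports` directly. A minimal sketch of the compiled shape (the object literal below simulates the compiler's output, it is not real emitted code):

```typescript
class OffsetLedger { /* ... */ }

// Roughly what `export default class OffsetLedger` compiles to for CommonJS:
const compiledModule = { __esModule: true, default: OffsetLedger };

// Old pattern (module.exports = OffsetLedger):
//   const OffsetLedger = require('./OffsetLedger');
// New pattern after the TS conversion — pick the default export:
const Imported = compiledModule.default;
console.log(Imported === OffsetLedger); // true
```

This is why converting one file to TypeScript forces its plain-JS callers to switch to `require(...).default`, as in `BackbeatConsumer.js` and `ReplicateObject.js` above.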
77 changes: 36 additions & 41 deletions lib/OffsetLedger.js → lib/OffsetLedger.ts
@@ -1,4 +1,11 @@
const assert = require('assert');
import assert from 'assert';

type PartitionOffsets = {
processing: number[],
latestConsumed: number | null
};

type TopicOffsets = { [key: number]: PartitionOffsets}

/**
* @class OffsetLedger
@@ -8,16 +15,20 @@ const assert = require('assert');
* when messages are processed asynchronously and processing may end
* out-of-order.
*/
class OffsetLedger {

export default class OffsetLedger {

#ledger: { [topic: string]: TopicOffsets };

constructor() {
this._ledger = {};
this.#ledger = {};
}

_getPartitionOffsets(topic, partition) {
let topicOffsets = this._ledger[topic];
_getPartitionOffsets(topic: string, partition: number) {
let topicOffsets = this.#ledger[topic];
if (!topicOffsets) {
topicOffsets = {};
this._ledger[topic] = topicOffsets;
this.#ledger[topic] = topicOffsets;
}
let partitionOffsets = topicOffsets[partition];
if (!partitionOffsets) {
@@ -30,7 +41,7 @@ class OffsetLedger {
return partitionOffsets;
}

_getPartitionCommittableOffset(partitionOffsets) {
_getPartitionCommittableOffset(partitionOffsets: PartitionOffsets): number | null {
// we can commit up to the lowest offset still being processed
// since it means all lower offsets have already been
// processed. If nothing is being processed, the latest
@@ -49,13 +60,8 @@
/**
* Function to be called as soon as a new message is received from
* a Kafka topic and about to start being processed.
*
* @param {string} topic - topic name
* @param {number} partition - partition number
* @param {number} offset - offset of consumed message
* @return {undefined}
*/
onOffsetConsumed(topic, partition, offset) {
onOffsetConsumed(topic: string, partition: number, offset: number): undefined {
// make sure offset is a positive number not to jeopardize
// processing sanity
assert(Number.isInteger(offset) && offset >= 0);
@@ -78,13 +84,10 @@
/**
* Function to be called when a message is completely processed.
*
* @param {string} topic - topic name
* @param {number} partition - partition number
* @param {number} offset - offset of processed message
* @return {number} - highest committable offset for this
* topic/partition (as returned by getCommittableOffset())
* @return highest committable offset for this topic/partition
* (as returned by getCommittableOffset())
*/
onOffsetProcessed(topic, partition, offset) {
onOffsetProcessed(topic: string, partition: number, offset: number): number | null {
const partitionOffsets = this._getPartitionOffsets(topic, partition);
partitionOffsets.processing =
partitionOffsets.processing.filter(pOff => pOff !== offset);
@@ -94,12 +97,9 @@
/**
* Get the highest committable offset for a topic/partition
*
* @param {string} topic - topic name
* @param {number} partition - partition number
* @param {number} offset - offset of processed message
* @return {number} - highest committable offset for this topic/partition
* @return highest committable offset for this topic/partition
*/
getCommittableOffset(topic, partition) {
getCommittableOffset(topic: string, partition: number): number | null {
const partitionOffsets = this._getPartitionOffsets(topic, partition);
return this._getPartitionCommittableOffset(partitionOffsets);
}
@@ -108,28 +108,25 @@
* Get how many entries have been consumed but not yet fully
* processed/committable
*
* @param {string} [topic] - topic name
* @param {number} [partition] - partition number
* @return {number} - number of consumed but not committable
* entries for this topic/partition, or for this topic (if no
* partition given), or for all topics and partitions (if none of
* topic and partition is provided)
* @return number of consumed but not committable entries for this
* topic/partition, or for this topic (if no partition given), or for
* all topics and partitions (if none of topic and partition is provided)
*/
getProcessingCount(topic, partition) {
if (topic && !this._ledger[topic]) {
getProcessingCount(topic?: string, partition?: number): number {
if (topic && !this.#ledger[topic]) {
return 0;
}
if (topic && partition !== undefined &&
!this._ledger[topic][partition]) {
!this.#ledger[topic][partition]) {
return 0;
}
let count = 0;
const topics = topic ? [topic] : Object.keys(this._ledger);
const topics = topic ? [topic] : Object.keys(this.#ledger);
topics.forEach(t => {
const partitions = partition !== undefined ?
[partition] : Object.keys(this._ledger[t]);
[partition] : Object.keys(this.#ledger[t]);
partitions.forEach(p => {
const partitionOffsets = this._ledger[t][p];
const partitionOffsets = this.#ledger[t][p];
count += partitionOffsets.processing.length;
});
});
@@ -139,12 +136,10 @@
/**
* Export the ledger in JSON format (useful for debugging)
*
* @return {string} a JSON-serialized representation of the
* @return a JSON-serialized representation of the
* current state of the ledger
*/
toString() {
return JSON.stringify(this._ledger);
toString(): string {
return JSON.stringify(this.#ledger);
}
}

module.exports = OffsetLedger;
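The commit-semantics comment in `_getPartitionCommittableOffset` can be illustrated with a self-contained single-partition sketch: an offset is committable only once every lower offset has finished processing. `MiniLedger` below is illustrative, not the real class; the "nothing in flight" case is assumed here to return `latestConsumed + 1` (Kafka-style next-offset semantics), which the truncated comment in the diff does not fully show.

```typescript
type PartitionOffsets = { processing: number[]; latestConsumed: number | null };

class MiniLedger {
    private offsets: PartitionOffsets = { processing: [], latestConsumed: null };

    onOffsetConsumed(offset: number): void {
        this.offsets.processing.push(offset);
        if (this.offsets.latestConsumed === null || offset > this.offsets.latestConsumed) {
            this.offsets.latestConsumed = offset;
        }
    }

    onOffsetProcessed(offset: number): number | null {
        this.offsets.processing = this.offsets.processing.filter(o => o !== offset);
        return this.getCommittableOffset();
    }

    getCommittableOffset(): number | null {
        if (this.offsets.processing.length > 0) {
            // can commit up to the lowest offset still in flight
            return Math.min(...this.offsets.processing);
        }
        // nothing in flight: assumed next-offset semantics
        return this.offsets.latestConsumed === null ? null : this.offsets.latestConsumed + 1;
    }
}

const ledger = new MiniLedger();
ledger.onOffsetConsumed(10);
ledger.onOffsetConsumed(11);
ledger.onOffsetConsumed(12);
ledger.onOffsetProcessed(11);               // 10 still in flight, out-of-order completion
console.log(ledger.getCommittableOffset()); // 10
ledger.onOffsetProcessed(10);
console.log(ledger.getCommittableOffset()); // 12 (still in flight)
```

This is the property that lets `BackbeatConsumer` process messages out of order without ever committing past an unfinished offset.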