Concurrent batch processor features → batch processor #11248

Closed
jmacd wants to merge 37 commits

Conversation

@jmacd (Contributor) commented Sep 23, 2024

Description

Contributes downstream functionality back to the core component.

  • Enable unlimited concurrency
  • Wait for either the context timeout or the batch response from the pipeline
  • Trace the batch processor
  • Add an EarlyReturn option: true preserves the legacy behavior of returning before the batch is sent
  • Add a MaxConcurrency option: 1 preserves the legacy single-consumer behavior (see the configuration sketch below)
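
For illustration, here is a minimal sketch of what the configuration surface could look like. The field names, tags, and defaults are assumptions that mirror the feature names above, not necessarily the exact API in this PR:

```go
package batchprocessor

import "time"

// Config sketch: existing batching knobs plus the two proposed options.
// Names and tags are illustrative only.
type Config struct {
	// Existing batching knobs.
	Timeout          time.Duration `mapstructure:"timeout"`
	SendBatchSize    uint32        `mapstructure:"send_batch_size"`
	SendBatchMaxSize uint32        `mapstructure:"send_batch_max_size"`

	// EarlyReturn == true keeps the legacy behavior of returning to the
	// caller before the batch is sent; false makes the caller wait for the
	// batch response from the pipeline or for its context to time out.
	EarlyReturn bool `mapstructure:"early_return"`

	// MaxConcurrency == 1 keeps the legacy single-consumer behavior; a
	// larger value (or 0 for unlimited) allows concurrent batch sends.
	MaxConcurrency uint32 `mapstructure:"max_concurrency"`
}
```

With EarlyReturn set to false, each Consume call would block on a select between the caller's context and a completion signal for the batch that absorbed the request, which is how errors and back-pressure would propagate back to the producer.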

TODO: Add feature gates (a registration sketch follows the list below).

  • The EarlyReturn=true legacy behavior should transition to false.
  • The MaxConcurrency=1 legacy behavior should transition to unlimited.
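
As a sketch of what these gates could look like using the collector's featuregate package (the gate IDs, stages, and descriptions below are assumptions, not final names):

```go
package batchprocessor

import "go.opentelemetry.io/collector/featuregate"

// Hypothetical feature gates for the default-behavior transitions above;
// IDs, stages, and descriptions are illustrative only.
var (
	earlyReturnGate = featuregate.GlobalRegistry().MustRegister(
		"processor.batch.earlyReturn",
		featuregate.StageAlpha,
		featuregate.WithRegisterDescription(
			"When disabled, callers wait for the batch response instead of returning early."),
	)

	unlimitedConcurrencyGate = featuregate.GlobalRegistry().MustRegister(
		"processor.batch.unlimitedConcurrency",
		featuregate.StageAlpha,
		featuregate.WithRegisterDescription(
			"When enabled, the default MaxConcurrency becomes unlimited instead of 1."),
	)
)
```

The processor would then consult earlyReturnGate.IsEnabled() and unlimitedConcurrencyGate.IsEnabled() when choosing defaults, letting the legacy behaviors phase out across releases in the usual alpha-to-beta-to-stable progression.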

Caveats

Despite a year-plus effort to add equivalent and satisfactory batching support in the exporterhelper subcomponent, we still lack support for back-pressure with batching and for reliable error transmission. I believe it is time to say "Yes/And" here: I support the efforts to improve exporterhelper and will contribute to that work myself, but I also think repairing the legacy batch processor is a good idea, given how long this has taken.

The changes here were developed downstream; see https://github.com/open-telemetry/otel-arrow/blob/main/collector/processor/concurrentbatchprocessor

Link to tracking issue

I'm listing a number of issues connected with this change, both issues pointing to unmet needs in the exporterhelper batcher and issues describing missing features in the legacy batcher. Accepting these changes will allow significantly improved batching support in the interim, until the new batching support is complete.

Part of #10825 -- until this feature is complete, users who depend on metadata_keys in the batch processor will not be able to upgrade
Part of #10368 -- I see this as the root cause: we haven't been able to introduce concurrency to the exporterhelper without also introducing a queue, which interferes with error transmission
Fixes #8245 -- my original report about the problem solved here -- we add concurrency with batching and error transmission and do not depend on a queue (persistent or in-memory)
Part of #9591 -- users must use one of the available memory limiting mechanisms in conjunction with the batch processor
Part of #8122 -- until this is finished, users depend on the original batch processor
Part of #7460 -- another statement of #8245; the batch processor does not propagate errors today, and this fixes the batch processor's contribution to the problem.

Testing

New tests are included.

Documentation

TODO/WIP

@bogdandrutu (Member) commented

Confused how you fix #11213?

@jmacd (Contributor, Author) commented Sep 27, 2024

@bogdandrutu that was a mistake.

@bogdandrutu (Member) left a comment

Hi Josh, this change is a bit too large and complicated to review in one PR. Can we split the two functionalities into two different PRs?

  1. The PR for multiple consumers.
  2. The PR for sync.

@jmacd closed this Sep 30, 2024