Batch optimized SparkRunner groupByKey #33322
Conversation
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment `assign set of reviewers`.

Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`. R: @damccorm for label build.

The PR bot will only process comments in the main thread (not review comments).
Thanks - had a few questions, but overall it seems like an improvement
...test/java/org/apache/beam/runners/spark/translation/GroupNonMergingWindowsFunctionsTest.java
...ers/spark/src/main/java/org/apache/beam/runners/spark/translation/GroupCombineFunctions.java
@damccorm:
This LGTM. I'm rerunning the failing test suite to see if we can get a green signal before merging, though (the failure was due to pulling Java licenses, so it's likely unrelated to this change).
fixes #20943

This PR improves the performance of the GroupByKey transform in the SparkRunner by replacing the current implementation, which uses Spark's `groupByKey`, with `combineByKey`. Spark's `groupByKey` shuffles all the data across the network before grouping, which can cause significant performance overhead and potential OOM issues with large datasets. By switching to Spark's `combineByKey`, the amount of data shuffled drops substantially:

- Before the optimization: 6.1 GiB shuffled
- After the optimization: 1487.7 MiB shuffled
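The map-side combine that `combineByKey` performs can be illustrated with a small plain-Java simulation (a sketch with no Spark dependency; the class and method names below are illustrative, not taken from the PR): each partition pre-merges its values per key, so only one accumulator per (partition, key) crosses the shuffle boundary instead of every record.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Plain-Java simulation of why combineByKey shuffles less data than
 * groupByKey: values are pre-merged within each partition before any
 * record crosses the "network". All names here are illustrative.
 */
public class ShuffleSketch {

  // Map-side combine: merge values per key within one partition (mergeValue).
  static Map<String, Integer> mapSideCombine(List<Map.Entry<String, Integer>> partition) {
    Map<String, Integer> acc = new HashMap<>();
    for (Map.Entry<String, Integer> e : partition) {
      acc.merge(e.getKey(), e.getValue(), Integer::sum);
    }
    return acc;
  }

  public static void main(String[] args) {
    // Two input "partitions" of (key, value) pairs.
    List<Map.Entry<String, Integer>> p1 = List.of(
        Map.entry("a", 1), Map.entry("a", 2), Map.entry("b", 3));
    List<Map.Entry<String, Integer>> p2 = List.of(
        Map.entry("a", 4), Map.entry("b", 5), Map.entry("b", 6));

    // groupByKey-style: every record is shuffled as-is.
    int groupByKeyShuffled = p1.size() + p2.size();

    // combineByKey-style: only one accumulator per (partition, key) is shuffled.
    Map<String, Integer> acc1 = mapSideCombine(p1);
    Map<String, Integer> acc2 = mapSideCombine(p2);
    int combineByKeyShuffled = acc1.size() + acc2.size();

    // Reduce side: merge the per-partition accumulators (mergeCombiners).
    Map<String, Integer> result = new HashMap<>(acc1);
    acc2.forEach((k, v) -> result.merge(k, v, Integer::sum));

    System.out.println("groupByKey shuffled records:   " + groupByKeyShuffled);   // 6
    System.out.println("combineByKey shuffled records: " + combineByKeyShuffled); // 4
    System.out.println(result); // per-key sums: a=7, b=14
  }
}
```

With the `groupByKey`-style approach all six records move across the shuffle; with the combine-style approach only four accumulators do. The same effect, at scale, is what produces the 6.1 GiB to 1487.7 MiB shuffle reduction reported in this PR.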