Change Point Analysis #23931

Merged
merged 73 commits on Dec 22, 2022

Commits
6448677
Add methods to Pytorch Benchmark to run Change Point analysis
AnandInguva Oct 26, 2022
b2239e7
Add GH issue API for Python
AnandInguva Oct 28, 2022
8d1dd98
Add support for commenting on the issue
AnandInguva Oct 31, 2022
a231516
Add a description about the failing test
AnandInguva Oct 31, 2022
d947923
Add gh_issue object for perf tests
AnandInguva Nov 1, 2022
3fca01c
Update API to read from parameters from config file
AnandInguva Nov 1, 2022
5b194ea
Add find sibling change point method
AnandInguva Nov 2, 2022
156af50
Add support for storing the metadata after regression issue is created
AnandInguva Nov 4, 2022
544b8fb
Add logic for finding sibling changepoint
AnandInguva Nov 4, 2022
5f3a94c
Update test_name
AnandInguva Nov 7, 2022
fc1de36
Update GH action
AnandInguva Nov 7, 2022
f825e4e
Update Description of the issue
AnandInguva Nov 7, 2022
c2cb8ae
Add commenting on the issue feature
AnandInguva Nov 7, 2022
13c24bb
Add logic for not creating alert when its the same changepoint
AnandInguva Nov 7, 2022
bde9e14
Update config file
AnandInguva Nov 8, 2022
7711303
Change owner name
AnandInguva Nov 8, 2022
fad21a2
fix lint, pydocs
AnandInguva Nov 8, 2022
26d72cc
Add two tests in the config file
AnandInguva Nov 9, 2022
bd39279
Fix lint
AnandInguva Nov 9, 2022
c4c4486
Add labels to the config file
AnandInguva Nov 9, 2022
18861e5
Fixup lints, and typehints
AnandInguva Nov 9, 2022
3fd0899
add additional params to the config file
AnandInguva Nov 9, 2022
0612c79
Refactor change point analysis code (#82)
AnandInguva Nov 16, 2022
af22414
Fixup lint
AnandInguva Nov 16, 2022
963827c
Changes based on comments
AnandInguva Nov 20, 2022
2600e24
Refactor fetch metrics from BQ
AnandInguva Nov 23, 2022
1ae5949
Add readme on perf alerting tool
AnandInguva Nov 23, 2022
0d7c72a
Add try catch statements
AnandInguva Nov 23, 2022
6d28eb5
Update github action
AnandInguva Nov 23, 2022
00bfee0
Add awaiting triage label
AnandInguva Nov 23, 2022
929124b
Add link to edivisive algorithm
AnandInguva Nov 23, 2022
a02f275
Check with previous change point timestamp in the sibling change poin…
AnandInguva Nov 29, 2022
801d79f
Changes based on comments on PR
AnandInguva Nov 29, 2022
af44c31
Fix test name
AnandInguva Nov 30, 2022
d167500
Merge remote-tracking branch 'upstream/master' into change-point-with…
AnandInguva Nov 30, 2022
2f49383
Refactor code
AnandInguva Dec 1, 2022
ef015b8
Move constants and helper functions to separate .py file
AnandInguva Dec 1, 2022
d96460f
Merge remote-tracking branch 'upstream/master' into change-point-with…
AnandInguva Dec 1, 2022
6db24e5
Refactoring
AnandInguva Dec 2, 2022
bdbf7a5
Rename files
AnandInguva Dec 2, 2022
d66b15d
Add tests
AnandInguva Dec 2, 2022
759d3c3
Refactor and add steps in the doc string
AnandInguva Dec 2, 2022
716a388
extend finding duplicate change points to search last 10 reported CPs
AnandInguva Dec 2, 2022
01c06e2
fix tests
AnandInguva Dec 2, 2022
8842b25
Fix whitespace lint
AnandInguva Dec 2, 2022
abef62d
Fix up lint
AnandInguva Dec 2, 2022
7104e61
Add _ to internal variables
AnandInguva Dec 5, 2022
fd57d19
Remove steps and add it to the PR description
AnandInguva Dec 5, 2022
e1b59c6
Make big_query_metrics_fetcher as a function
AnandInguva Dec 5, 2022
372ebc7
Add return in doc string
AnandInguva Dec 5, 2022
e8770cd
Add perf label and awaiting triage label default while creating alert
AnandInguva Dec 5, 2022
43185c9
Refactor
AnandInguva Dec 5, 2022
1797cc1
Modify loop
AnandInguva Dec 5, 2022
d130f1f
Fix up runtime errors
AnandInguva Dec 5, 2022
433f5fe
Refactor metrics fetcher
AnandInguva Dec 6, 2022
7672b99
Add test
AnandInguva Dec 6, 2022
b0f7b9c
Changes based on comments
AnandInguva Dec 12, 2022
327ea01
Remove the confusion on sorted order of metrics and timestamps
AnandInguva Dec 12, 2022
6fdfab4
Changes issue templates and add TODOs
AnandInguva Dec 12, 2022
117f5fe
Add triaging issues section
AnandInguva Dec 12, 2022
3a453cd
Fixup lint, whitespace
AnandInguva Dec 12, 2022
397e7c7
Add notebook
AnandInguva Dec 14, 2022
41a1a00
Changes based on comments
AnandInguva Dec 14, 2022
a0c3504
Mark optional params in readme
AnandInguva Dec 14, 2022
57a4ef0
correct triage link
AnandInguva Dec 17, 2022
fad6bbe
Add unit tests and fix lints
AnandInguva Dec 17, 2022
102d1b7
Fix up lint
AnandInguva Dec 19, 2022
6b83629
Apply suggestions from code review
AnandInguva Dec 22, 2022
4d7daf8
Change default optional param
AnandInguva Dec 22, 2022
6a22e77
modify mock tests
AnandInguva Dec 22, 2022
fed8018
add docstring
AnandInguva Dec 22, 2022
748ebd5
Change default values
AnandInguva Dec 22, 2022
c12b594
Add tests to GHA
AnandInguva Dec 22, 2022
13 changes: 8 additions & 5 deletions sdks/python/apache_beam/testing/analyzers/README.md
@@ -17,10 +17,10 @@
under the License.
-->

<h1>Performance alerts for Beam Python performance and load tests</h1>
# Performance alerts for Beam Python performance and load tests


<h2> Alerts </h2>
## Alerts

Performance regressions or improvements detected with [Change Point Analysis](https://en.wikipedia.org/wiki/Change_detection) using the [edivisive](https://github.com/apache/beam/blob/0a91d139dea4276dc46176c4cdcdfce210fc50c4/.test-infra/jenkins/job_InferenceBenchmarkTests_Python.groovy#L30)
analyzer are automatically filed as Beam GitHub issues with the label `perf-alert`.
@@ -34,7 +34,7 @@ If a performance alert is created on a test, a GitHub issue will be created and
URL and issue number, along with the change point value and timestamp, are exported to BigQuery. This data is used when analyzing the next change point observed on the same test, either to
update the already created GitHub issue or to suppress the performance alert so that a duplicate issue is not created.

<h2> Config file structure </h2>
## Config file structure
The config file defines how change point analysis is run on a given test. To add a test to the config file,
please follow the structure below.

@@ -55,11 +55,13 @@ test_1:
num_runs_in_change_point_window: 7
```

**NOTE**: `test_name` should be in the format `apache_beam.foo.bar`. It should point to a single test target.

**Note**: If the source is **BigQuery**, the `metrics_dataset`, `metrics_table`, `project` and `metric_name` should match the values defined for the performance/load tests.
The above example uses this [test configuration](https://github.com/apache/beam/blob/0a91d139dea4276dc46176c4cdcdfce210fc50c4/.test-infra/jenkins/job_InferenceBenchmarkTests_Python.groovy#L30)
to fill in the values required to fetch the data from the source.
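
For reference, a filled-in entry might look like the sketch below. The keys mirror those used in `tests_config.yaml`; the dataset, table, project and metric names here are placeholders rather than values from a real Beam test.

```yaml
# Hypothetical example entry; all values below are placeholders.
test_example:
  test_name: apache_beam.foo.bar   # must point to a single test target
  metrics_dataset: my_metrics_dataset
  metrics_table: my_metrics_table
  project: my-gcp-project
  metric_name: mean_runtime_secs
  labels:
    - run-inference
  min_runs_between_change_points: 5      # optional
  num_runs_in_change_point_window: 7     # optional
```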

<h3>Different ways to avoid false positive change points</h3>
### Different ways to avoid false positive change points

**min_runs_between_change_points**: As new metric data arrives across runs, the change point analysis can place the
change point in a slightly different position. Such change points refer to the same regression and are just noise.
@@ -72,9 +74,10 @@ Sometimes, the change point found might be way back in time and could be irrelevant
reported only when it was observed in the last 7 runs from the current run,
setting `num_runs_in_change_point_window=7` will achieve it.
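
For intuition, here is a minimal sketch of how that window check behaves; it mirrors the intent of `is_change_point_in_valid_window` in `perf_analysis_utils.py` (the value being compared is the number of runs between the change point and the most recent run), but it is not the exact code from this change.

```python
# Sketch only: mirrors the num_runs_in_change_point_window check.
from typing import Sequence


def change_point_is_recent_enough(
    timestamps: Sequence,          # sorted in ascending order
    change_point_index: int,       # index into timestamps / metric_values
    num_runs_in_change_point_window: int) -> bool:
  # Number of runs between the change point and the most recent run.
  runs_since_change_point = len(timestamps) - 1 - change_point_index
  return runs_since_change_point <= num_runs_in_change_point_window


# Example: 10 runs, change point at index 4 -> 5 runs behind the latest run.
# With num_runs_in_change_point_window=7 an alert is raised (5 <= 7);
# with num_runs_in_change_point_window=3 it is ignored (5 > 3).
```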

<h2> Register a test for performance alerts. </h2>
## Register a test for performance alerts.

If a new test needs to be registered for the performance alerting tool, please add the required test parameters to the
config file.

[//]: # (TODO : Add triaging section)

4 changes: 0 additions & 4 deletions sdks/python/apache_beam/testing/analyzers/constants.py
@@ -66,7 +66,3 @@
}, {
'name': _ISSUE_URL, 'field_type': 'STRING', 'mode': 'REQUIRED'
}]

_TITLE_TEMPLATE = """
Performance Regression or Improvement: {}:{}
"""
41 changes: 24 additions & 17 deletions sdks/python/apache_beam/testing/analyzers/github_issues_utils.py
@@ -44,11 +44,16 @@
"X-GitHub-Api-Version": "2022-11-28"
}

# Fill the GitHub issue description with the below variables.
_ISSUE_DESCRIPTION_HEADER = """
Affected metric: `{}`
_ISSUE_TITLE_TEMPLATE = """
Performance Regression or Improvement: {}:{}
"""
_METRIC_INFO = "timestamp: {}, metric_value: `{}`"

# TODO: Provide a better debugging tool for the user to visualize metrics.
# For example, a link to dashboards or a query to get recent data to analyze, etc.
_ISSUE_DESCRIPTION_TEMPLATE = """
Performance change found in the test: `{}` for the metric: `{}`.
"""
_METRIC_INFO_TEMPLATE = "timestamp: {}, metric_value: `{}`"
_AWAITING_TRIAGE_LABEL = 'awaiting triage'
_PERF_ALERT_LABEL = 'perf-alert'

@@ -96,8 +101,8 @@ def comment_on_issue(issue_number: int,
comment_description: If an issue with issue_number is open,
then comment on the issue using comment_description.
Returns:
Boolean, indicating a comment was added to issue, and URL directing to
the comment.
tuple[bool, Optional[str]] indicating whether a comment was added to
the issue, and the comment URL.
"""
url = 'https://api.github.com/repos/{}/{}/issues/{}'.format(
_BEAM_GITHUB_REPO_OWNER, _BEAM_GITHUB_REPO_NAME, issue_number)
@@ -122,14 +127,15 @@ def comment_on_issue(issue_number: int,
return False, None


def add_awaiting_triage_label_to_issue(issue_number: int):
def add_awaiting_triage_label(issue_number: int):
url = 'https://api.github.com/repos/{}/{}/issues/{}/labels'.format(
_BEAM_GITHUB_REPO_OWNER, _BEAM_GITHUB_REPO_NAME, issue_number)
requests.post(
url, json.dumps({'labels': [_AWAITING_TRIAGE_LABEL]}), headers=_HEADERS)


def get_issue_description(
test_name: str,
metric_name: str,
timestamps: List[pd.Timestamp],
metric_values: List,
@@ -139,8 +145,8 @@ def get_issue_description(
Args:
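test_name: Name of the test for which the issue description is created.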
metric_name: Metric name used for the Change Point Analysis.
timestamps: Timestamps of the metrics when they were published to the
Database.
metric_values: Values of the metric for the previous runs.
Database. Timestamps are expected in ascending order.
metric_values: Metric values for the previous runs.
change_point_index: Index for the change point. The element in the
index of the metric_values would be the change point.
max_results_to_display: Max number of results to display from the change
@@ -151,18 +157,19 @@
"""

# TODO: Add mean and median before and after the changepoint index.
upper_bound = min(
change_point_index + max_results_to_display + 1, len(metric_values))
lower_bound = max(0, change_point_index - max_results_to_display)
max_timestamp_index = min(
change_point_index + max_results_to_display, len(metric_values) - 1)
min_timestamp_index = max(0, change_point_index - max_results_to_display)

description = _ISSUE_DESCRIPTION_HEADER.format(metric_name) + 2 * '\n'
description = _ISSUE_DESCRIPTION_TEMPLATE.format(
test_name, metric_name) + 2 * '\n'

runs_to_display = [
_METRIC_INFO.format(timestamps[i].ctime(), metric_values[i])
for i in range(lower_bound, upper_bound)
_METRIC_INFO_TEMPLATE.format(timestamps[i].ctime(), metric_values[i])
for i in range(max_timestamp_index, min_timestamp_index - 1, -1)
]

runs_to_display[change_point_index - lower_bound] += " <---- Anomaly"
runs_to_display[change_point_index - min_timestamp_index] += " <---- Anomaly"
return description + '\n'.join(runs_to_display)
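
For readers skimming the diff, here is a roughly rendered example of the title and description these templates produce; the test name, metric, timestamps and values below are invented:

```
Performance Regression or Improvement: apache_beam.foo.bar:mean_load_model_latency_milli_secs

Performance change found in the test: `apache_beam.foo.bar` for the metric: `mean_load_model_latency_milli_secs`.

timestamp: Tue Dec  6 10:00:00 2022, metric_value: `212.0`
timestamp: Mon Dec  5 10:00:00 2022, metric_value: `210.4` <---- Anomaly
timestamp: Sun Dec  4 10:00:00 2022, metric_value: `120.9`
timestamp: Sat Dec  3 10:00:00 2022, metric_value: `119.2`
```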


@@ -182,6 +189,6 @@ def report_change_point_on_issues(
comment_description=description
)
if commented_on_issue:
add_awaiting_triage_label_to_issue(issue_number=existing_issue_number)
add_awaiting_triage_label(issue_number=existing_issue_number)
return existing_issue_number, issue_url
return create_issue(title=title, description=description, labels=labels)
22 changes: 13 additions & 9 deletions sdks/python/apache_beam/testing/analyzers/perf_analysis.py
@@ -76,14 +76,17 @@ def run_change_point_analysis(params, test_id, big_query_metrics_fetcher):
if not change_point_index:
return

if not is_change_point_in_valid_window(num_runs_in_change_point_window,
change_point_index):
if not is_change_point_in_valid_window(
num_runs_in_change_point_window,
len(timestamps) - 1 - change_point_index):
Reviewer comment: make a helper variable for `len(timestamps) - 1 - change_point_index` that explains what this value is. `is_change_point_in_valid_window` takes a parameter called `change_point_index`, but its meaning there differs from the `change_point_index` in this code, since some calculation happens first; the same name stands for two different concepts in two places. Please fix.
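
One possible shape for that suggestion (hypothetical, not part of this change) is to name the distance before passing it in:

```python
# Sketch only: give the run distance an explicit name so it is not confused
# with change_point_index, which indexes into metric_values/timestamps.
runs_since_change_point = len(timestamps) - 1 - change_point_index
if not is_change_point_in_valid_window(num_runs_in_change_point_window,
                                       runs_since_change_point):
  ...  # skip raising the alert, as in the surrounding code
```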

logging.info(
'Performance regression/improvement found for the test: %s. '
'Since the change point index %s '
'Since the change point run %s '
'lies outside the num_runs_in_change_point_window distance: %s, '
'alert is not raised.' %
(test_name, change_point_index, num_runs_in_change_point_window))
'alert is not raised.' % (
params['test_name'],
len(timestamps) - 1 - change_point_index,
num_runs_in_change_point_window))
return

is_alert = True
@@ -103,13 +106,14 @@ def run_change_point_analysis(params, test_id, big_query_metrics_fetcher):
timestamps=timestamps,
min_runs_between_change_points=min_runs_between_change_points)

# TODO: remove before merging.
logging.info("Performance alert is %s for test %s" % (is_alert, test_name))
if is_alert:
issue_number, issue_url = create_performance_alert(
metric_name, test_name, timestamps,
metric_values, change_point_index,
params.get('labels', None),
last_reported_issue_number)
metric_name, params['test_name'], timestamps,
metric_values, change_point_index,
params.get('labels', None),
last_reported_issue_number)

issue_metadata = GitHubIssueMetaData(
issue_timestamp=pd.Timestamp(
46 changes: 32 additions & 14 deletions sdks/python/apache_beam/testing/analyzers/perf_analysis_utils.py
@@ -31,8 +31,7 @@
from google.api_core import exceptions

from apache_beam.testing.analyzers import constants
from apache_beam.testing.analyzers.github_issues_utils import get_issue_description
from apache_beam.testing.analyzers.github_issues_utils import report_change_point_on_issues
from apache_beam.testing.analyzers import github_issues_utils
from apache_beam.testing.load_tests import load_test_metrics_utils
from apache_beam.testing.load_tests.load_test_metrics_utils import BigQueryMetricsFetcher
from apache_beam.testing.load_tests.load_test_metrics_utils import BigQueryMetricsPublisher
@@ -58,8 +57,6 @@ class GitHubIssueMetaData:

def is_change_point_in_valid_window(
num_runs_in_change_point_window: int, change_point_index: int) -> bool:
# If the change point is more than N runs behind the most recent run,
# Ignore the change point and don't raise an alert for it.
return num_runs_in_change_point_window >= change_point_index
Reviewer comment: an index typically ranges from 0 to (number of entries - 1); should it therefore be `return num_runs_in_change_point_window > change_point_index`?

Author reply: That sounds right. Changed it.

@@ -94,14 +91,15 @@ def is_perf_alert(
Search previous_change_point_timestamps within the sibling window of the
currently observed change point and determine whether it is a duplicate
change point.
Timestamps are expected to be in ascending order.

Return False if the currently observed change point is a duplicate of an
already reported change point, else return True.
"""
sibling_change_point_min_timestamp = timestamps[min(
change_point_index + min_runs_between_change_points, len(timestamps) - 1)]
sibling_change_point_max_timestamp = timestamps[max(
sibling_change_point_min_timestamp = timestamps[max(
0, change_point_index - min_runs_between_change_points)]
sibling_change_point_max_timestamp = timestamps[min(
change_point_index + min_runs_between_change_points, len(timestamps) - 1)]
# Search the list of previous change point timestamps and compare it with
# the current change point timestamp, to check whether the current change
# point has already been reported in the past.
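
As an illustration of the sibling window described above (all timestamps, indices and the window size here are fabricated for the example):

```python
# Sketch of the duplicate-alert check; values are fabricated for illustration.
import pandas as pd

timestamps = [pd.Timestamp(2022, 12, day) for day in range(1, 11)]  # ascending
change_point_index = 6
min_runs_between_change_points = 3

window_min = timestamps[max(0, change_point_index - min_runs_between_change_points)]
window_max = timestamps[min(change_point_index + min_runs_between_change_points,
                            len(timestamps) - 1)]

previous_change_point_timestamps = [pd.Timestamp(2022, 12, 5)]
is_duplicate = any(window_min <= t <= window_max
                   for t in previous_change_point_timestamps)
print(is_duplicate)  # True: Dec 5 lies inside the Dec 4 .. Dec 10 window
```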
@@ -129,6 +127,15 @@ def validate_config(keys):
def fetch_metric_data(
params: Dict[str, Any], big_query_metrics_fetcher: BigQueryMetricsFetcher
) -> Tuple[List[Union[int, float]], List[pd.Timestamp]]:
"""
Args:
params: Dict containing keys required to fetch data from a data source.
big_query_metrics_fetcher: A BigQuery metrics fetcher used to fetch metrics.
Returns:
Tuple[List[Union[int, float]], List[pd.Timestamp]]: Tuple containing a list
of metric_values and a list of timestamps. Both are sorted in ascending
order with respect to timestamps.
"""
query = f"""
SELECT *
FROM {params['project']}.{params['metrics_dataset']}.{params['metrics_table']}
@@ -137,18 +144,26 @@
LIMIT {constants._NUM_DATA_POINTS_TO_RUN_CHANGE_POINT_ANALYSIS}
"""
metric_data: pd.DataFrame = big_query_metrics_fetcher.fetch(query=query)
metric_data.sort_values(
by=[load_test_metrics_utils.SUBMIT_TIMESTAMP_LABEL], inplace=True)
return (
metric_data[load_test_metrics_utils.VALUE_LABEL],
metric_data[load_test_metrics_utils.SUBMIT_TIMESTAMP_LABEL])
metric_data[load_test_metrics_utils.VALUE_LABEL].tolist(),
metric_data[load_test_metrics_utils.SUBMIT_TIMESTAMP_LABEL].tolist())
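
A minimal usage sketch; the project, dataset and table values are placeholders, and the `BigQueryMetricsFetcher` constructor arguments (if any) are elided:

```python
# Hypothetical call; all identifiers in params are placeholders.
params = {
    'test_name': 'apache_beam.foo.bar',
    'metrics_dataset': 'my_metrics_dataset',
    'metrics_table': 'my_metrics_table',
    'project': 'my-gcp-project',
    'metric_name': 'mean_runtime_secs',
}
metric_values, timestamps = fetch_metric_data(
    params, big_query_metrics_fetcher=BigQueryMetricsFetcher())
# Both lists are sorted by submit timestamp in ascending order, so the last
# element corresponds to the most recent run.
```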


def find_latest_change_point_index(metric_values: List[Union[float, int]]):
"""
Args:
metric_values: Metric values used to run change point analysis.
Returns:
Optional[int]: The right-most change point index observed on metric_values,
or None if no change point is found.
"""
change_points_idx = e_divisive(metric_values)
if not change_points_idx:
return None
# Consider the latest change point.
change_points_idx.sort(reverse=True)
change_point_index = change_points_idx[0]
change_points_idx.sort()
change_point_index = change_points_idx[-1]
return change_point_index
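
For intuition, a small fabricated example of the "latest change point" selection; the detector output shown is assumed for illustration, not computed:

```python
# Fabricated metric series with two level shifts; assume the change point
# detector (e_divisive) reports indices [3, 7].
metric_values = [10, 10, 11, 25, 26, 25, 24, 60, 61, 59]
change_points_idx = [3, 7]       # assumed detector output for illustration
change_points_idx.sort()
latest_change_point_index = change_points_idx[-1]  # 7 -> the most recent shift
```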


@@ -179,16 +194,19 @@ def create_performance_alert(
Creates a performance alert as a GitHub issue and returns the GitHub issue
number and issue URL.
"""
description = get_issue_description(
description = github_issues_utils.get_issue_description(
test_name=test_name,
metric_name=metric_name,
timestamps=timestamps,
metric_values=metric_values,
change_point_index=change_point_index,
max_results_to_display=(
constants._NUM_RESULTS_TO_DISPLAY_ON_ISSUE_DESCRIPTION))

issue_number, issue_url = report_change_point_on_issues(
title=constants._TITLE_TEMPLATE.format(test_name, metric_name),
issue_number, issue_url = github_issues_utils.report_change_point_on_issues(
title=github_issues_utils._ISSUE_TITLE_TEMPLATE.format(
test_name, metric_name
),
description=description,
labels=labels,
existing_issue_number=existing_issue_number)
7 changes: 3 additions & 4 deletions sdks/python/apache_beam/testing/analyzers/tests_config.yaml
@@ -16,15 +16,16 @@
#

test_1:
test_name: apache_beam.testing.benchmarks.inference.pytorch_image_classification_benchmarks
test_name: apache_beam.testing.benchmarks.inference.pytorch_image_classification_benchmarks_16
metrics_dataset: beam_run_inference
metrics_table: torch_inference_imagenet_results_resnet152
project: apache-beam-testing
metric_name: mean_load_model_latency_milli_secs
labels:
- run-inference
# TODO: Remove before merging.
min_runs_between_change_points: 5
num_runs_in_change_point_window: 7
num_runs_in_change_point_window: 14

test_2:
test_name: apache_beam.testing.benchmarks.inference.pytorch_image_classification_benchmarks
@@ -34,5 +35,3 @@ test_2:
metric_name: mean_load_model_latency_milli_secs
labels:
- run-inference
min_runs_between_change_points: 5
num_runs_in_change_point_window: 7