# Change Point Analysis #23931
The first file in the diff is the documentation for the performance alerting tool:

```diff
@@ -17,11 +17,9 @@
   under the License.
 -->
 
-<h1>Performance alerts for Beam Python performance and load tests</h1>
-
-<h2> Alerts </h2>
+# Performance alerts for Beam Python performance and load tests
+
+## Alerts
 
 Performance regressions or improvements detected with the [Change Point Analysis](https://en.wikipedia.org/wiki/Change_detection) using [edivisive](https://github.com/apache/beam/blob/0a91d139dea4276dc46176c4cdcdfce210fc50c4/.test-infra/jenkins/job_InferenceBenchmarkTests_Python.groovy#L30)
 analyzer are automatically filed as Beam GitHub issues with a label `perf-alert`.
```
```diff
@@ -34,7 +32,7 @@ If a performance alert is created on a test, a GitHub issue will be created and
 URL, issue number along with the change point value and timestamp are exported to BigQuery. This data is used when analyzing the next change point observed on the same test,
 either to update the already-created GitHub issue or to skip filing a new one, so that duplicate issues are avoided.
 
-<h2> Config file structure </h2>
+## Config file structure
 The config file defines the structure to run change point analysis on a given test. To add a test to the config file,
 please follow the structure below.
```
````diff
@@ -55,11 +53,13 @@ test_1:
   num_runs_in_change_point_window: 7
 ```
 
+**NOTE**: `test_name` should be in the format `apache_beam.foo.bar` and should point to a single test target.
+
 **Note**: If the source is **BigQuery**, the `metrics_dataset`, `metrics_table`, `project` and `metric_name` should match the values defined for the performance/load tests.
 The above example uses this [test configuration](https://github.com/apache/beam/blob/0a91d139dea4276dc46176c4cdcdfce210fc50c4/.test-infra/jenkins/job_InferenceBenchmarkTests_Python.groovy#L30)
 to fill in the values required to fetch the data from the source.
````
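As an illustration only, the snippet below assembles one such entry from the fields this section names and loads it with PyYAML. The test id and the placeholder values are hypothetical, and the authoritative key set is whatever the config file in the repo defines:

```python
# Hypothetical config entry built from the fields named in this section.
import yaml  # PyYAML: pip install pyyaml

SAMPLE_ENTRY = """
my_benchmark_test:                      # hypothetical test id
  test_name: apache_beam.foo.bar        # must point to a single test target
  project: apache-beam-testing
  metrics_dataset: <bq-dataset-name>
  metrics_table: <bq-table>
  metric_name: <perf-alerted-metric-name>
  min_runs_between_change_points: 3     # placeholder value
  num_runs_in_change_point_window: 7
"""

params = yaml.safe_load(SAMPLE_ENTRY)['my_benchmark_test']
print(params['test_name'], params['num_runs_in_change_point_window'])
```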
```diff
-<h3>Different ways to avoid false positive change points</h3>
+### Different ways to avoid false positive change points
 
 **min_runs_between_change_points**: As the metric data moves across the runs, the change point analysis can place the
 change point in a slightly different place. These change points refer to the same regression and are just noise.
```
```diff
@@ -72,9 +72,61 @@ Sometimes, the change point found might be way back in time and could be irrelevant.
 reported only when it was observed in the last 7 runs from the current run,
 setting `num_runs_in_change_point_window=7` will achieve it.
```
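To make these two guards concrete, here is a minimal, runnable sketch of the logic the README describes. `is_change_point_in_valid_window` mirrors the utility used by the analyzer (see the Python diff further down), but the body here is reconstructed from this description, and `is_duplicate_change_point` is a hypothetical name for the `min_runs_between_change_points` check:

```python
# Reconstruction of the two false-positive guards described above; the
# bodies are illustrative, not the analyzer's actual implementation.


def is_change_point_in_valid_window(
    num_runs_in_change_point_window: int,
    latest_change_point_run: int) -> bool:
  # `latest_change_point_run` counts how many runs ago the change point
  # occurred (0 == the most recent run). Older change points are stale.
  return latest_change_point_run < num_runs_in_change_point_window


def is_duplicate_change_point(
    previous_change_point_index: int,
    new_change_point_index: int,
    min_runs_between_change_points: int) -> bool:
  # A change point found within a few runs of an already-reported one is
  # treated as noise around the same regression.
  return abs(new_change_point_index - previous_change_point_index) < (
      min_runs_between_change_points)


# A change point observed 3 runs ago is inside a 7-run reporting window.
assert is_change_point_in_valid_window(7, 3)
# One observed 9 runs ago is not, so no alert would be raised.
assert not is_change_point_in_valid_window(7, 9)
```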
```diff
-<h2> Register a test for performance alerts. </h2>
+## Register a test for performance alerts
 
 If a new test needs to be registered for the performance alerting tool, please add the required test parameters to the
 config file.
+
+## Triage performance alert issues
+
+All the performance/load test metrics defined at [beam/.test-infra/jenkins](https://github.com/apache/beam/tree/master/.test-infra/jenkins) are imported to [Grafana dashboards](http://104.154.241.245/d/1/getting-started?orgId=1) for visualization. Please
+find the alerted test's dashboard and look for a spike in the metric values.
```
```diff
+For example, for the configuration below,
+* test: `apache_beam.testing.benchmarks.inference.pytorch_image_classification_benchmarks`
+* metric_name: `mean_load_model_latency_milli_secs`
+
+the Grafana dashboard can be found at http://104.154.241.245/d/ZpS8Uf44z/python-ml-runinference-benchmarks?orgId=1&viewPanel=7
+
+If the dashboard for a test is not found, you can use the code below to generate a plot for the given test and metric_name.
+
+If the performance/load test stores its results in BigQuery using this [schema](https://github.com/apache/beam/blob/83679216cce2d52dbeb7e837f06ca1d57b31d509/sdks/python/apache_beam/testing/load_tests/load_test_metrics_utils.py#L66),
+then use the following code to fetch the metric values for a `metric_name` from the last `30` runs and display a plot using matplotlib.
+
+**NOTE**: Install matplotlib and pandas using `pip install matplotlib pandas`.
```

> **Review comment:** nice, thanks. this would be better as a notebook in the repo, that people could refer to and update as necessary.
````diff
+```
+import pandas as pd
+import matplotlib.pyplot as plt
+
+from apache_beam.testing.load_tests import load_test_metrics_utils
+from apache_beam.testing.load_tests.load_test_metrics_utils import BigQueryMetricsFetcher
+
+bq_project = 'apache-beam-testing'
+bq_dataset = '<bq-dataset-name>'
+bq_table = '<bq-table>'
+metric_name = '<perf-alerted-metric-name>'
+
+query = f"""
+  SELECT *
+  FROM `{bq_project}.{bq_dataset}.{bq_table}`
+  WHERE CONTAINS_SUBSTR(({load_test_metrics_utils.METRICS_TYPE_LABEL}), '{metric_name}')
+  ORDER BY {load_test_metrics_utils.SUBMIT_TIMESTAMP_LABEL} DESC
+  LIMIT 30
+"""
+
+big_query_metrics_fetcher = BigQueryMetricsFetcher()
+metric_data: pd.DataFrame = big_query_metrics_fetcher.fetch(query=query)
+# Sort the data to view it in chronological order.
+metric_data.sort_values(
+    by=[load_test_metrics_utils.SUBMIT_TIMESTAMP_LABEL], inplace=True)
+
+metric_data.plot(x=load_test_metrics_utils.SUBMIT_TIMESTAMP_LABEL,
+                 y=load_test_metrics_utils.VALUE_LABEL)
+plt.show()
+```
````
```diff
+
+If you confirm there is a change in the pattern of the values for a test, find the timestamp of when that change happened
+and use that timestamp to find the possible culprit commit.
+
+If the performance alert is a `false positive`, close the issue as `Close as not planned`.
```
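If the change is hard to spot by eye, a rough way to localize its timestamp is to look for the largest run-to-run jump in the `metric_data` frame fetched by the snippet above. This heuristic only approximates the edivisive analysis the tool runs and is not part of the tool:

```python
# Continues from the plotting snippet above: `metric_data` is already
# sorted chronologically. Estimate the change timestamp as the point with
# the largest jump between consecutive runs.
values = metric_data[load_test_metrics_utils.VALUE_LABEL].reset_index(drop=True)
run_times = metric_data[
    load_test_metrics_utils.SUBMIT_TIMESTAMP_LABEL].reset_index(drop=True)

jumps = values.diff().abs()  # absolute change between consecutive runs
suspect = jumps.idxmax()     # run index with the largest jump
print('Largest shift around %s: %s -> %s' %
      (run_times[suspect], values[suspect - 1], values[suspect]))
```

Commits that landed between `run_times[suspect - 1]` and `run_times[suspect]`, for example listed with `git log --since=... --until=...`, are the first candidates for the culprit.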
---

The second file in the diff is the Python script that runs the change point analysis:
```diff
@@ -21,7 +21,6 @@
 # regressions for benchmark/load/performance test.
 
-import argparse
 import logging
 import os
 import uuid
```
```diff
@@ -34,11 +33,11 @@
 import pandas as pd
 
 from apache_beam.testing.analyzers import constants
-from apache_beam.testing.analyzers.perf_analysis_utils import GitHubIssueMetaData
 from apache_beam.testing.analyzers.perf_analysis_utils import create_performance_alert
 from apache_beam.testing.analyzers.perf_analysis_utils import fetch_metric_data
-from apache_beam.testing.analyzers.perf_analysis_utils import get_existing_issues_data
 from apache_beam.testing.analyzers.perf_analysis_utils import find_latest_change_point_index
+from apache_beam.testing.analyzers.perf_analysis_utils import GitHubIssueMetaData
+from apache_beam.testing.analyzers.perf_analysis_utils import get_existing_issues_data
 from apache_beam.testing.analyzers.perf_analysis_utils import is_change_point_in_valid_window
 from apache_beam.testing.analyzers.perf_analysis_utils import is_perf_alert
 from apache_beam.testing.analyzers.perf_analysis_utils import publish_issue_metadata_to_big_query
```
```diff
@@ -76,14 +75,17 @@ def run_change_point_analysis(params, test_id, big_query_metrics_fetcher):
   if not change_point_index:
     return
 
-  if not is_change_point_in_valid_window(num_runs_in_change_point_window,
-                                         change_point_index):
+  if not is_change_point_in_valid_window(
+      num_runs_in_change_point_window,
+      len(timestamps) - 1 - change_point_index):
```

> **Review comment:** make a helper variable for …

```diff
     logging.info(
         'Performance regression/improvement found for the test: %s. '
-        'Since the change point index %s '
+        'Since the change point run %s '
         'lies outside the num_runs_in_change_point_window distance: %s, '
-        'alert is not raised.' %
-        (test_name, change_point_index, num_runs_in_change_point_window))
+        'alert is not raised.' % (
+            params['test_name'],
+            len(timestamps) - 1 - change_point_index,
+            num_runs_in_change_point_window))
     return
 
   is_alert = True
```
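Following the review comment above, here is a sketch of the suggested refactor, naming the repeated expression once. The variable name is illustrative, and it assumes `timestamps` is ordered oldest to newest, as the arithmetic in the diff suggests:

```python
# Sketch of the reviewer's suggestion: compute the expression once.
# `len(timestamps) - 1 - change_point_index` is the number of runs between
# the change point and the most recent run.
latest_change_point_run = len(timestamps) - 1 - change_point_index

if not is_change_point_in_valid_window(num_runs_in_change_point_window,
                                       latest_change_point_run):
  logging.info(
      'Performance regression/improvement found for the test: %s. '
      'Since the change point run %s '
      'lies outside the num_runs_in_change_point_window distance: %s, '
      'alert is not raised.' % (
          params['test_name'],
          latest_change_point_run,
          num_runs_in_change_point_window))
  return
```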
```diff
@@ -103,13 +105,14 @@ def run_change_point_analysis(params, test_id, big_query_metrics_fetcher):
       timestamps=timestamps,
       min_runs_between_change_points=min_runs_between_change_points)
 
+  # TODO: remove before merging.
```

> AnandInguva marked this conversation as resolved.

```diff
   logging.info("Performance alert is %s for test %s" % (is_alert, test_name))
   if is_alert:
     issue_number, issue_url = create_performance_alert(
-        metric_name, test_name, timestamps,
-        metric_values, change_point_index,
-        params.get('labels', None),
-        last_reported_issue_number)
+        metric_name, params['test_name'], timestamps,
+        metric_values, change_point_index,
+        params.get('labels', None),
+        last_reported_issue_number)
 
   issue_metadata = GitHubIssueMetaData(
       issue_timestamp=pd.Timestamp(
```

> **Review comment:** is there a URL that is not based on an IP address?
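For context on how the exported metadata closes the loop (the README above says the issue URL, issue number, change point value and timestamp go to BigQuery), here is a loose sketch. The field names and types are assumptions inferred from that description — only `issue_timestamp` is visible in the diff — so treat this as illustration, not the real `GitHubIssueMetaData` schema:

```python
# Hypothetical stand-in for GitHubIssueMetaData; field names are inferred
# from the README's description of what is exported to BigQuery.
from dataclasses import dataclass

import pandas as pd


@dataclass
class IssueMetaDataSketch:
  issue_timestamp: pd.Timestamp         # when the issue was filed
  issue_number: int                     # GitHub issue number
  issue_url: str                        # GitHub issue URL
  change_point_value: float             # metric value at the change point
  change_point_timestamp: pd.Timestamp  # run timestamp of the change point


meta = IssueMetaDataSketch(
    issue_timestamp=pd.Timestamp.utcnow(),
    issue_number=123,  # placeholder
    issue_url='https://github.com/apache/beam/issues/123',  # placeholder
    change_point_value=42.0,  # placeholder
    change_point_timestamp=pd.Timestamp.utcnow())

# A later analysis run for the same test can read this row back from
# BigQuery and update the existing issue instead of filing a duplicate.
print(meta)
```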