[DOCS] Changes level offset of anomaly detection pages (#59911) (#59943)
lcawl authored Jul 21, 2020
1 parent bcc4a41 commit 0c5b52f
Showing 19 changed files with 50 additions and 204 deletions.
2 changes: 1 addition & 1 deletion docs/build.gradle
@@ -26,7 +26,7 @@ apply plugin: 'elasticsearch.rest-resources'

/* List of files that have snippets that will not work until platinum tests can occur ... */
buildRestTests.expectedUnconvertedCandidates = [
-'reference/ml/anomaly-detection/transforms.asciidoc',
+'reference/ml/anomaly-detection/ml-configuring-transform.asciidoc',
'reference/ml/anomaly-detection/apis/delete-calendar-event.asciidoc',
'reference/ml/anomaly-detection/apis/get-bucket.asciidoc',
'reference/ml/anomaly-detection/apis/get-category.asciidoc',
52 changes: 0 additions & 52 deletions docs/reference/ml/anomaly-detection/configuring.asciidoc

This file was deleted.

docs/reference/ml/anomaly-detection/functions/count.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-count-functions]]
-=== Count functions
+= Count functions

Count functions detect anomalies when the number of events in a bucket is
anomalous.
@@ -22,7 +22,7 @@ The {ml-features} include the following count functions:

[float]
[[ml-count]]
-===== Count, high_count, low_count
+== Count, high_count, low_count

The `count` function detects anomalies when the number of events in a bucket is
anomalous.
@@ -145,7 +145,7 @@ and the `summary_count_field_name` property. For more information, see

[float]
[[ml-nonzero-count]]
-===== Non_zero_count, high_non_zero_count, low_non_zero_count
+== Non_zero_count, high_non_zero_count, low_non_zero_count

The `non_zero_count` function detects anomalies when the number of events in a
bucket is anomalous, but it ignores cases where the bucket count is zero. Use
@@ -215,7 +215,7 @@ data is sparse, use the `count` functions, which are optimized for that scenario

[float]
[[ml-distinct-count]]
-===== Distinct_count, high_distinct_count, low_distinct_count
+== Distinct_count, high_distinct_count, low_distinct_count

The `distinct_count` function detects anomalies where the number of distinct
values in one field is unusual.
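
For context, the count functions documented in this file are used as detectors in an anomaly detection job's `analysis_config`. A minimal sketch, assuming an illustrative job ID and field name that are not part of this commit:

[source,console]
----
PUT _ml/anomaly_detectors/example_event_rate
{
  "analysis_config": {
    "bucket_span": "10m",
    "detectors": [
      {
        "detector_description": "Unusually high event rate per error code",
        "function": "high_count",
        "by_field_name": "error_code"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
----
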
docs/reference/ml/anomaly-detection/functions.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-functions]]
-== Function reference
+= Function reference

The {ml-features} include analysis functions that provide a wide variety of
flexible ways to analyze data for anomalies.
@@ -41,17 +41,3 @@ These functions effectively ignore empty buckets.
* <<ml-rare-functions>>
* <<ml-sum-functions>>
* <<ml-time-functions>>

-include::functions/count.asciidoc[]
-
-include::functions/geo.asciidoc[]
-
-include::functions/info.asciidoc[]
-
-include::functions/metric.asciidoc[]
-
-include::functions/rare.asciidoc[]
-
-include::functions/sum.asciidoc[]
-
-include::functions/time.asciidoc[]
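
Dropping these `include::` directives is what lets the child pages start their titles at `=` instead of `===`: the includes move to a parent file that applies a level offset. A sketch of that pattern, assuming an offset of +2 for illustration (the actual offset is not visible in this diff):

[source,asciidoc]
----
// In the parent file: the offset makes "= Count functions"
// render as if it were "=== Count functions".
include::anomaly-detection/functions/count.asciidoc[leveloffset=+2]
----
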
docs/reference/ml/anomaly-detection/functions/geo.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-geo-functions]]
-=== Geographic functions
+= Geographic functions

The geographic functions detect anomalies in the geographic location of the
input data.
@@ -13,7 +13,7 @@ geographic functions.

[float]
[[ml-lat-long]]
-==== Lat_long
+== Lat_long

The `lat_long` function detects anomalies in the geographic location of the
input data.
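
As a sketch of how the `lat_long` function is configured, a detector object with illustrative field names (not taken from this commit):

[source,js]
----
{
  "function": "lat_long",
  "field_name": "transaction_coordinates",
  "by_field_name": "credit_card_number"
}
----
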
docs/reference/ml/anomaly-detection/functions/info.asciidoc
@@ -1,5 +1,5 @@
[[ml-info-functions]]
-=== Information Content Functions
+= Information Content Functions

The information content functions detect anomalies in the amount of information
that is contained in strings within a bucket. These functions can be used as
@@ -12,7 +12,7 @@ The {ml-features} include the following information content functions:

[float]
[[ml-info-content]]
-==== Info_content, High_info_content, Low_info_content
+== Info_content, High_info_content, Low_info_content

The `info_content` function detects anomalies in the amount of information that
is contained in strings in a bucket.
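
A sketch of an information content detector, with assumed field names; `high_info_content` flags unusually large amounts of information in the string field:

[source,js]
----
{
  "function": "high_info_content",
  "field_name": "subdomain",
  "over_field_name": "highest_registered_domain"
}
----
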
docs/reference/ml/anomaly-detection/functions/metric.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-metric-functions]]
-=== Metric functions
+= Metric functions

The metric functions include functions such as mean, min and max. These values
are calculated for each bucket. Field values that cannot be converted to
@@ -20,7 +20,7 @@ function.

[float]
[[ml-metric-min]]
-==== Min
+== Min

The `min` function detects anomalies in the arithmetic minimum of a value.
The minimum value is calculated for each bucket.
@@ -55,7 +55,7 @@ entry mistakes. It models the minimum amount for each product over time.

[float]
[[ml-metric-max]]
-==== Max
+== Max

The `max` function detects anomalies in the arithmetic maximum of a value.
The maximum value is calculated for each bucket.
@@ -113,7 +113,7 @@ response times for each bucket.

[float]
[[ml-metric-median]]
-==== Median, high_median, low_median
+== Median, high_median, low_median

The `median` function detects anomalies in the statistical median of a value.
The median value is calculated for each bucket.
@@ -151,7 +151,7 @@ median `responsetime` is unusual compared to previous `responsetime` values.

[float]
[[ml-metric-mean]]
-==== Mean, high_mean, low_mean
+== Mean, high_mean, low_mean

The `mean` function detects anomalies in the arithmetic mean of a value.
The mean value is calculated for each bucket.
@@ -221,7 +221,7 @@ values.

[float]
[[ml-metric-metric]]
-==== Metric
+== Metric

The `metric` function combines `min`, `max`, and `mean` functions. You can use
it as a shorthand for a combined analysis. If you do not specify a function in
@@ -258,7 +258,7 @@ when the mean, min, or max `responsetime` is unusual compared to previous

[float]
[[ml-metric-varp]]
-==== Varp, high_varp, low_varp
+== Varp, high_varp, low_varp

The `varp` function detects anomalies in the variance of a value which is a
measure of the variability and spread in the data.
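
For reference, the `metric` shorthand described above analyzes min, max, and mean in a single detector; a sketch with illustrative field names:

[source,js]
----
{
  "function": "metric",
  "field_name": "responsetime",
  "by_field_name": "airline"
}
----
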
docs/reference/ml/anomaly-detection/functions/rare.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-rare-functions]]
-=== Rare functions
+= Rare functions

The rare functions detect values that occur rarely in time or rarely for a
population.
@@ -35,7 +35,7 @@ The {ml-features} include the following rare functions:

[float]
[[ml-rare]]
-==== Rare
+== Rare

The `rare` function detects values that occur rarely in time or rarely for a
population. It detects anomalies according to the number of distinct rare values.
@@ -93,7 +93,7 @@ is rare, even if it occurs for that client IP in every bucket.

[float]
[[ml-freq-rare]]
-==== Freq_rare
+== Freq_rare

The `freq_rare` function detects values that occur rarely for a population.
It detects anomalies according to the number of times (frequency) that rare
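
A sketch of a population-style rare analysis, with assumed field names; `freq_rare` flags client IPs that hit rare URIs unusually often:

[source,js]
----
{
  "function": "freq_rare",
  "by_field_name": "uri",
  "over_field_name": "clientip"
}
----
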
docs/reference/ml/anomaly-detection/functions/sum.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-sum-functions]]
-=== Sum functions
+= Sum functions

The sum functions detect anomalies when the sum of a field in a bucket is
anomalous.
@@ -19,7 +19,7 @@ The {ml-features} include the following sum functions:

[float]
[[ml-sum]]
-==== Sum, high_sum, low_sum
+== Sum, high_sum, low_sum

The `sum` function detects anomalies where the sum of a field in a bucket is
anomalous.
@@ -75,7 +75,7 @@ to find users that are abusing internet privileges.

[float]
[[ml-nonnull-sum]]
-==== Non_null_sum, high_non_null_sum, low_non_null_sum
+== Non_null_sum, high_non_null_sum, low_non_null_sum

The `non_null_sum` function is useful if your data is sparse. Buckets without
values are ignored and buckets with a zero value are analyzed.
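
A sketch of a population-style sum analysis with assumed field names, flagging hosts that send an unusually high volume of bytes:

[source,js]
----
{
  "function": "high_sum",
  "field_name": "cs_bytes",
  "over_field_name": "cs_host"
}
----
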
docs/reference/ml/anomaly-detection/functions/time.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-time-functions]]
-=== Time functions
+= Time functions

The time functions detect events that happen at unusual times, either of the day
or of the week. These functions can be used to find unusual patterns of behavior,
@@ -37,7 +37,7 @@ step change in behavior and the new times will be learned quickly.

[float]
[[ml-time-of-day]]
-==== Time_of_day
+== Time_of_day

The `time_of_day` function detects when events occur that are outside normal
usage patterns. For example, it detects unusual activity in the middle of the
@@ -73,7 +73,7 @@ its past behavior.

[float]
[[ml-time-of-week]]
-==== Time_of_week
+== Time_of_week

The `time_of_week` function detects when events occur that are outside normal
usage patterns. For example, it detects login events on the weekend.
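
A sketch of a `time_of_week` detector with assumed field names, modeling when each combination of event code and workstation normally occurs:

[source,js]
----
{
  "function": "time_of_week",
  "by_field_name": "eventcode",
  "over_field_name": "workstation"
}
----
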
docs/reference/ml/anomaly-detection/ml-configuring-aggregation.asciidoc
@@ -1,6 +1,6 @@
[role="xpack"]
[[ml-configuring-aggregation]]
-=== Aggregating data for faster performance
+= Aggregating data for faster performance

By default, {dfeeds} fetch data from {es} using search and scroll requests.
It can be significantly more efficient, however, to aggregate data in {es}
@@ -17,7 +17,7 @@ search and scroll behavior.

[discrete]
[[aggs-limits-dfeeds]]
-==== Requirements and limitations
+== Requirements and limitations

There are some limitations to using aggregations in {dfeeds}. Your aggregation
must include a `date_histogram` aggregation, which in turn must contain a `max`
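
That requirement amounts to the following shape: a `date_histogram` whose sub-aggregations include a `max` on the time field. A minimal sketch, assuming an illustrative field name and interval:

[source,js]
----
"aggregations": {
  "buckets": {
    "date_histogram": {
      "field": "@timestamp",
      "fixed_interval": "5m"
    },
    "aggregations": {
      "@timestamp": {
        "max": { "field": "@timestamp" }
      }
    }
  }
}
----
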
@@ -48,7 +48,7 @@ functions, set the interval to the same value as the bucket span.

[discrete]
[[aggs-include-jobs]]
-==== Including aggregations in {anomaly-jobs}
+== Including aggregations in {anomaly-jobs}

When you create or update an {anomaly-job}, you can include the names of
aggregations, for example:
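
The job picks up the aggregated values by name, and because documents arrive pre-bucketed, `summary_count_field_name` is typically set to `doc_count`. A hedged sketch (detector and field names are illustrative, not the example elided from this hunk):

[source,js]
----
"analysis_config": {
  "bucket_span": "5m",
  "summary_count_field_name": "doc_count",
  "detectors": [
    {
      "function": "mean",
      "field_name": "bytes_out_average"
    }
  ]
}
----
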
@@ -134,7 +134,7 @@ that match values in the job configuration are fed to the job.

[discrete]
[[aggs-dfeeds]]
-==== Nested aggregations in {dfeeds}
+== Nested aggregations in {dfeeds}

{dfeeds-cap} support complex nested aggregations. This example uses the
`derivative` pipeline aggregation to find the first order derivative of the
@@ -180,7 +180,7 @@ counter `system.network.out.bytes` for each value of the field `beat.name`.

[discrete]
[[aggs-single-dfeeds]]
-==== Single bucket aggregations in {dfeeds}
+== Single bucket aggregations in {dfeeds}

{dfeeds-cap} not only supports multi-bucket aggregations, but also single bucket
aggregations. The following shows two `filter` aggregations, each gathering the
@@ -232,7 +232,7 @@ number of unique entries for the `error` field.

[discrete]
[[aggs-define-dfeeds]]
-==== Defining aggregations in {dfeeds}
+== Defining aggregations in {dfeeds}

When you define an aggregation in a {dfeed}, it must have the following form:

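
The form itself is elided by the fold above; as a rough sketch under stated assumptions (field names and interval are illustrative, not the file's elided example), the top-level `date_histogram` nests a `max` on the time field alongside whatever metric aggregations the detectors consume:

[source,js]
----
"aggregations": {
  "buckets": {
    "date_histogram": {
      "field": "time",
      "fixed_interval": "1h"
    },
    "aggregations": {
      "time": { "max": { "field": "time" } },
      "bytes_out_average": {
        "avg": { "field": "bytes_out" }
      }
    }
  }
}
----
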
docs/reference/ml/anomaly-detection/ml-configuring-categories.asciidoc
@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="platinum"]
[[ml-configuring-categories]]
-=== Detecting anomalous categories of data
+= Detecting anomalous categories of data

Categorization is a {ml} process that tokenizes a text field, clusters similar
data together, and classifies it into categories. It works best on
@@ -100,7 +100,7 @@ SQL statement from the categorization algorithm.

[discrete]
[[ml-configuring-analyzer]]
-==== Customizing the categorization analyzer
+== Customizing the categorization analyzer

Categorization uses English dictionary words to identify log message categories.
By default, it also uses English tokenization rules. For this reason, if you use
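
As context for the analyzer customization above, a trimmed sketch of a categorization job; the `pattern_replace` filter mirrors the SQL-statement idea mentioned in the hunk header, and all names are illustrative:

[source,console]
----
PUT _ml/anomaly_detectors/example_log_categories
{
  "analysis_config": {
    "bucket_span": "15m",
    "categorization_field_name": "message",
    "categorization_analyzer": {
      "char_filter": [
        {
          "type": "pattern_replace",
          "pattern": "\\[statement:.*\\]"
        }
      ],
      "tokenizer": "ml_classic"
    },
    "detectors": [
      {
        "function": "count",
        "by_field_name": "mlcategory"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
----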