diff --git a/website/www/site/content/en/blog/ApachePlayground.md b/website/www/site/content/en/blog/ApachePlayground.md
index 1e29481abf4f..bc12aed5def7 100644
--- a/website/www/site/content/en/blog/ApachePlayground.md
+++ b/website/www/site/content/en/blog/ApachePlayground.md
@@ -35,13 +35,13 @@ limitations under the License.
* Displays pipeline execution graph (DAG)
* Code editor to modify examples or try your own custom pipeline with a Direct Runner
* Code editor with code highlighting, flexible layout, color schemes, and other features to provide responsive UX in desktop browsers
-* Embedding a Playground example on a web page prompts the web page readers to try the example pipeline in the Playground - e.g., [Playground Quickstart](https://beam.apache.org/get-started/try-beam-playground/) page
+* Embedding a Playground example on a web page prompts the web page readers to try the example pipeline in the Playground - e.g., [Playground Quickstart](/get-started/try-beam-playground/) page
### **What’s Next**
* Try examples in [Apache Beam Playground](https://play.beam.apache.org/)
* Submit your feedback using “Enjoying Playground?” in Apache Beam Playground or via [this form](https://docs.google.com/forms/d/e/1FAIpQLSd5_5XeOwwW2yjEVHUXmiBad8Lxk-4OtNcgG45pbyAZzd4EbA/viewform?usp=pp_url)
-* Join the Beam [users@](https://beam.apache.org/community/contact-us) mailing list
-* Contribute to the Apache Beam Playground codebase by following a few steps in this [Contribution Guide](https://beam.apache.org/contribute)
+* Join the Beam [users@](/community/contact-us) mailing list
+* Contribute to the Apache Beam Playground codebase by following a few steps in this [Contribution Guide](/contribute)
-Please [reach out](https://beam.apache.org/community/contact-us) if you have any feedback or encounter any issues!
+Please [reach out](/community/contact-us) if you have any feedback or encounter any issues!
diff --git a/website/www/site/content/en/blog/adding-data-sources-to-sql.md b/website/www/site/content/en/blog/adding-data-sources-to-sql.md
index 4cfbf05748d1..d0a45d3ec630 100644
--- a/website/www/site/content/en/blog/adding-data-sources-to-sql.md
+++ b/website/www/site/content/en/blog/adding-data-sources-to-sql.md
@@ -148,7 +148,7 @@ class GenerateSequenceTable extends BaseBeamTable implements Serializable {
Now that we have implemented the two basic classes (a `BaseBeamTable`, and a
`TableProvider`), we can start playing with them. After building the
-[SQL CLI](https://beam.apache.org/documentation/dsls/sql/shell/), we
+[SQL CLI](/documentation/dsls/sql/shell/), we
can now perform selections on the table:
```
diff --git a/website/www/site/content/en/blog/beam-2.21.0.md b/website/www/site/content/en/blog/beam-2.21.0.md
index c23115a44b9b..edaaf7b47d41 100644
--- a/website/www/site/content/en/blog/beam-2.21.0.md
+++ b/website/www/site/content/en/blog/beam-2.21.0.md
@@ -46,9 +46,9 @@ for example usage.
for that function.
More details can be found in
- [Ensuring Python Type Safety](https://beam.apache.org/documentation/sdks/python-type-safety/)
+ [Ensuring Python Type Safety](/documentation/sdks/python-type-safety/)
and the Python SDK Typing Changes
- [blog post](https://beam.apache.org/blog/python-typing/).
+ [blog post](/blog/python-typing/).
* Java SDK: Introducing the concept of options in Beam Schema’s. These options add extra
context to fields and schemas. This replaces the current Beam metadata that is present
diff --git a/website/www/site/content/en/blog/beam-2.25.0.md b/website/www/site/content/en/blog/beam-2.25.0.md
index 403eabafc124..a6e08627e286 100644
--- a/website/www/site/content/en/blog/beam-2.25.0.md
+++ b/website/www/site/content/en/blog/beam-2.25.0.md
@@ -39,9 +39,9 @@ For more information on changes in 2.25.0, check out the
* Support for repeatable fields in JSON decoder for `ReadFromBigQuery` added. (Python) ([BEAM-10524](https://issues.apache.org/jira/browse/BEAM-10524))
* Added an opt-in, performance-driven runtime type checking system for the Python SDK ([BEAM-10549](https://issues.apache.org/jira/browse/BEAM-10549)).
- More details will be in an upcoming [blog post](https://beam.apache.org/blog/python-performance-runtime-type-checking/index.html).
+ More details will be in an upcoming [blog post](/blog/python-performance-runtime-type-checking/index.html).
* Added support for Python 3 type annotations on PTransforms using typed PCollections ([BEAM-10258](https://issues.apache.org/jira/browse/BEAM-10258)).
- More details will be in an upcoming [blog post](https://beam.apache.org/blog/python-improved-annotations/index.html).
+ More details will be in an upcoming [blog post](/blog/python-improved-annotations/index.html).
* Improved the Interactive Beam API where recording streaming jobs now start a long running background recording job. Running ib.show() or ib.collect() samples from the recording ([BEAM-10603](https://issues.apache.org/jira/browse/BEAM-10603)).
* In Interactive Beam, ib.show() and ib.collect() now have "n" and "duration" as parameters. These mean read only up to "n" elements and up to "duration" seconds of data read from the recording ([BEAM-10603](https://issues.apache.org/jira/browse/BEAM-10603)).
* Initial preview of [Dataframes](https://s.apache.org/simpler-python-pipelines-2020#slide=id.g905ac9257b_1_21) support.
diff --git a/website/www/site/content/en/blog/beam-2.32.0.md b/website/www/site/content/en/blog/beam-2.32.0.md
index da252b669448..7c6d297cc5a4 100644
--- a/website/www/site/content/en/blog/beam-2.32.0.md
+++ b/website/www/site/content/en/blog/beam-2.32.0.md
@@ -46,9 +46,9 @@ For more information on changes in 2.32.0, check out the [detailed release notes
## Highlights
* The [Beam DataFrame
- API](https://beam.apache.org/documentation/dsls/dataframes/overview/) is no
+ API](/documentation/dsls/dataframes/overview/) is no
longer experimental! We've spent the time since the [2.26.0 preview
- announcement](https://beam.apache.org/blog/dataframe-api-preview-available/)
+ announcement](/blog/dataframe-api-preview-available/)
implementing the most frequently used pandas operations
([BEAM-9547](https://issues.apache.org/jira/browse/BEAM-9547)), improving
[documentation](https://beam.apache.org/releases/pydoc/current/apache_beam.dataframe.html)
@@ -62,7 +62,7 @@ For more information on changes in 2.32.0, check out the [detailed release notes
Leaving experimental just means that we now have high confidence in the API
and recommend its use for production workloads. We will continue to improve
the API, guided by your
- [feedback](https://beam.apache.org/community/contact-us/).
+ [feedback](/community/contact-us/).
## I/Os
diff --git a/website/www/site/content/en/blog/beam-2.38.0.md b/website/www/site/content/en/blog/beam-2.38.0.md
index 4075b981a1e6..d59f4673abd1 100644
--- a/website/www/site/content/en/blog/beam-2.38.0.md
+++ b/website/www/site/content/en/blog/beam-2.38.0.md
@@ -29,7 +29,7 @@ See the [download page](/get-started/downloads/#2380-2022-04-20) for this releas
For more information on changes in 2.38.0 check out the [detailed release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12351169).
## I/Os
-* Introduce projection pushdown optimizer to the Java SDK ([BEAM-12976](https://issues.apache.org/jira/browse/BEAM-12976)). The optimizer currently only works on the [BigQuery Storage API](https://beam.apache.org/documentation/io/built-in/google-bigquery/#storage-api), but more I/Os will be added in future releases. If you encounter a bug with the optimizer, please file a JIRA and disable the optimizer using pipeline option `--experiments=disable_projection_pushdown`.
+* Introduce projection pushdown optimizer to the Java SDK ([BEAM-12976](https://issues.apache.org/jira/browse/BEAM-12976)). The optimizer currently only works on the [BigQuery Storage API](/documentation/io/built-in/google-bigquery/#storage-api), but more I/Os will be added in future releases. If you encounter a bug with the optimizer, please file a JIRA and disable the optimizer using pipeline option `--experiments=disable_projection_pushdown`.
* A new IO for Neo4j graph databases was added. ([BEAM-1857](https://issues.apache.org/jira/browse/BEAM-1857)) It has the ability to update nodes and relationships using UNWIND statements and to read data using cypher statements with parameters.
* `amazon-web-services2` has reached feature parity and is finally recommended over the earlier `amazon-web-services` and `kinesis` modules (Java). These will be deprecated in one of the next releases ([BEAM-13174](https://issues.apache.org/jira/browse/BEAM-13174)).
* Long outstanding write support for `Kinesis` was added ([BEAM-13175](https://issues.apache.org/jira/browse/BEAM-13175)).
diff --git a/website/www/site/content/en/blog/beam-2.42.0.md b/website/www/site/content/en/blog/beam-2.42.0.md
index 08b749621179..6d7499df5cc4 100644
--- a/website/www/site/content/en/blog/beam-2.42.0.md
+++ b/website/www/site/content/en/blog/beam-2.42.0.md
@@ -32,7 +32,7 @@ For more information on changes in 2.42.0, check out the [detailed release notes
* Added support for stateful DoFns to the Go SDK.
* Added support for [Batched
- DoFns](https://beam.apache.org/documentation/programming-guide/#batched-dofns)
+ DoFns](/documentation/programming-guide/#batched-dofns)
to the Python SDK.
## New Features / Improvements
diff --git a/website/www/site/content/en/blog/beam-2.8.0.md b/website/www/site/content/en/blog/beam-2.8.0.md
index b2c7163609f6..cd2e91889007 100644
--- a/website/www/site/content/en/blog/beam-2.8.0.md
+++ b/website/www/site/content/en/blog/beam-2.8.0.md
@@ -50,7 +50,7 @@ For more information on changes in 2.8.0, check out the
### Portability
-* [Python on Flink MVP](https://beam.apache.org/roadmap/portability/#python-on-flink) completed.
+* [Python on Flink MVP](/roadmap/portability/#python-on-flink) completed.
### I/Os
diff --git a/website/www/site/content/en/blog/beam-katas-kotlin-release.md b/website/www/site/content/en/blog/beam-katas-kotlin-release.md
index c0faef3c7209..1f84f96bc58a 100644
--- a/website/www/site/content/en/blog/beam-katas-kotlin-release.md
+++ b/website/www/site/content/en/blog/beam-katas-kotlin-release.md
@@ -29,7 +29,7 @@ Today, we are happy to announce a new addition to the Beam Katas family: Kotlin!
-You may remember [a post from last year](https://beam.apache.org/blog/beam-kata-release) that informed everyone of the wonderful Beam Katas available on [Stepik](https://stepik.org)
+You may remember [a post from last year](/blog/beam-kata-release) that informed everyone of the wonderful Beam Katas available on [Stepik](https://stepik.org)
for learning more about writing Apache Beam applications, working with its various APIs and programming model
hands-on, all from the comfort of your favorite IDEs. As of today, you can now work through all of the progressive
exercises to learn about the fundamentals of Beam in Kotlin.
@@ -41,7 +41,7 @@ as one of the most beloved programming languages in the annual Stack Overflow De
just our word for it.
The relationship between Apache Beam and Kotlin isn't a new one. You can find examples scattered across the web
-of engineering teams embracing the two technologies including [a series of samples announced on this very blog](https://beam.apache.org/blog/beam-kotlin/).
+of engineering teams embracing the two technologies including [a series of samples announced on this very blog](/blog/beam-kotlin/).
If you are new to Beam or are an experienced veteran looking for a change of pace, we'd encourage you to give
Kotlin a try.
diff --git a/website/www/site/content/en/blog/beam-sql-with-notebooks.md b/website/www/site/content/en/blog/beam-sql-with-notebooks.md
index 872a2a3004df..4f7c428613a1 100644
--- a/website/www/site/content/en/blog/beam-sql-with-notebooks.md
+++ b/website/www/site/content/en/blog/beam-sql-with-notebooks.md
@@ -22,7 +22,7 @@ limitations under the License.
## Intro
-[Beam SQL](https://beam.apache.org/documentation/dsls/sql/overview/) allows a
+[Beam SQL](/documentation/dsls/sql/overview/) allows a
Beam user to query PCollections with SQL statements.
[Interactive Beam](https://github.com/apache/beam/tree/master/sdks/python/apache_beam/runners/interactive#interactive-beam)
provides an integration between Apache Beam and
@@ -174,7 +174,7 @@ element_type like `BeamSchema_...(id: int32, str: str, flt: float64)`.
PCollection because the `beam_sql` magic always implicitly creates a pipeline to
execute your SQL query. To hold the elements with each field's type info, Beam
automatically creates a
-[schema](https://beam.apache.org/documentation/programming-guide/#what-is-a-schema)
+[schema](/documentation/programming-guide/#what-is-a-schema)
as the `element_type` for the created PCollection. You will learn more about
schema-aware PCollections later.
@@ -221,7 +221,7 @@ always check the content of a PCollection by invoking `ib.show(pcoll_name)` or
The `beam_sql` magic provides the flexibility to seamlessly mix SQL and non-SQL
Beam statements to build pipelines and even run them on Dataflow. However, each
PCollection queried by Beam SQL needs to have a
-[schema](https://beam.apache.org/documentation/programming-guide/#what-is-a-schema).
+[schema](/documentation/programming-guide/#what-is-a-schema).
For the `beam_sql` magic, it’s recommended to use `typing.NamedTuple` when a
schema is desired. You can go through the below example to learn more details
about schema-aware PCollections.
@@ -788,7 +788,7 @@ you to learn Beam SQL and mix Beam SQL into prototyping and productionizing (
e.g., to Dataflow) your Beam pipelines with minimum setups.
For more details about the Beam SQL syntax, check out the Beam Calcite SQL
-[compatibility](https://beam.apache.org/documentation/dsls/sql/calcite/overview/)
+[compatibility](/documentation/dsls/sql/calcite/overview/)
and the Apache Calcite SQL
[syntax](https://calcite.apache.org/docs/reference.html).
diff --git a/website/www/site/content/en/blog/beam-starter-projects.md b/website/www/site/content/en/blog/beam-starter-projects.md
index a1b7e995c97f..9606cd94c8eb 100644
--- a/website/www/site/content/en/blog/beam-starter-projects.md
+++ b/website/www/site/content/en/blog/beam-starter-projects.md
@@ -72,6 +72,6 @@ Here are the starter projects; you can choose your favorite language:
* **[Kotlin]** [github.com/apache/beam-starter-kotlin](https://github.com/apache/beam-starter-kotlin) – Adapted to idiomatic Kotlin
* **[Scala]** [github.com/apache/beam-starter-scala](https://github.com/apache/beam-starter-scala) – Coming soon!
-We have updated the [Java quickstart](https://beam.apache.org/get-started/quickstart/java/) to use the new starter project, and we're working on updating the Python and Go quickstarts as well.
+We have updated the [Java quickstart](/get-started/quickstart/java/) to use the new starter project, and we're working on updating the Python and Go quickstarts as well.
We hope you find this useful. Feedback and contributions are always welcome! So feel free to create a GitHub issue, or open a Pull Request to any of the starter project repositories.
diff --git a/website/www/site/content/en/blog/beam-summit-europe-2019.md b/website/www/site/content/en/blog/beam-summit-europe-2019.md
index 7f5351f9efd2..fcd630c1f258 100644
--- a/website/www/site/content/en/blog/beam-summit-europe-2019.md
+++ b/website/www/site/content/en/blog/beam-summit-europe-2019.md
@@ -54,7 +54,7 @@ and [Stockholm](https://www.meetup.com/Apache-Beam-Stockholm/events/260634514) h
Keep an eye out for a meetup in [Paris](https://www.meetup.com/Paris-Apache-Beam-Meetup).
-If you are interested in starting your own meetup, feel free [to reach out](https://beam.apache.org/community/contact-us)! Good places to start include our Slack channel, the dev and user mailing lists, or the Apache Beam Twitter.
+If you are interested in starting your own meetup, feel free [to reach out](/community/contact-us)! Good places to start include our Slack channel, the dev and user mailing lists, or the Apache Beam Twitter.
Even if you can’t travel to these meetups, you can stay informed on the happenings of the community. The talks and sessions from previous conferences and meetups are archived on the [Apache Beam YouTube channel](https://www.youtube.com/c/ApacheBeamYT). If you want your session added to the channel, don’t hesitate to get in touch!
@@ -63,7 +63,7 @@ The first summit of the year will be held in Berlin:
-You can find more info on the [website](https://beamsummit.org) and read about the inaugural edition of the Beam Summit Europe [here](https://beam.apache.org/blog/2018/10/31/beam-summit-aftermath.html). At these summits, you have the opportunity to meet with other Apache Beam creators and users, get expert advice, learn from the speaker sessions, and participate in workshops.
+You can find more info on the [website](https://beamsummit.org) and read about the inaugural edition of the Beam Summit Europe [here](/blog/2018/10/31/beam-summit-aftermath.html). At these summits, you have the opportunity to meet with other Apache Beam creators and users, get expert advice, learn from the speaker sessions, and participate in workshops.
We strongly encourage you to get involved again this year! You can participate in the following ways for the upcoming summit in Europe:
diff --git a/website/www/site/content/en/blog/dataframe-api-preview-available.md b/website/www/site/content/en/blog/dataframe-api-preview-available.md
index 81ffc5fe6a17..a48f02ff82e3 100644
--- a/website/www/site/content/en/blog/dataframe-api-preview-available.md
+++ b/website/www/site/content/en/blog/dataframe-api-preview-available.md
@@ -23,7 +23,7 @@ limitations under the License.
We're excited to announce that a preview of the Beam Python SDK's new DataFrame
API is now available in [Beam
-2.26.0](https://beam.apache.org/blog/beam-2.26.0/). Much like `SqlTransform`
+2.26.0](/blog/beam-2.26.0/). Much like `SqlTransform`
([Java](https://beam.apache.org/releases/javadoc/current/org/apache/beam/sdk/extensions/sql/SqlTransform.html),
[Python](https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.sql.html#apache_beam.transforms.sql.SqlTransform)),
the DataFrame API gives Beam users a way to express complex
@@ -76,7 +76,7 @@ as much as possible.
## DataFrames as a DSL
You may already be aware of [Beam
-SQL](https://beam.apache.org/documentation/dsls/sql/overview/), which is
+SQL](/documentation/dsls/sql/overview/), which is
a Domain-Specific Language (DSL) built with Beam's Java SDK. SQL is
considered a DSL because it's possible to express a full pipeline, including IOs
and complex operations, entirely with SQL.
@@ -91,7 +91,7 @@ implementations (`pd.read_{csv,parquet,...}` and `pd.DataFrame.to_{csv,parquet,.
Like SQL, it's also possible to embed the DataFrame API into a larger pipeline
by using
-[schemas](https://beam.apache.org/documentation/programming-guide/#what-is-a-schema).
+[schemas](/documentation/programming-guide/#what-is-a-schema).
A schema-aware PCollection can be converted to a DataFrame, processed, and the
result converted back to another schema-aware PCollection. For example, if you
wanted to use traditional Beam IOs rather than one of the DataFrame IOs you
diff --git a/website/www/site/content/en/blog/go-2.40.md b/website/www/site/content/en/blog/go-2.40.md
index 977790019d79..d2615b6b7c60 100644
--- a/website/www/site/content/en/blog/go-2.40.md
+++ b/website/www/site/content/en/blog/go-2.40.md
@@ -29,15 +29,15 @@ some of the biggest changes coming with this important release!
2.40 marks the release of one of our most anticipated feature sets yet:
native streaming Go pipelines. This includes adding support for:
-- [Self Checkpointing](https://beam.apache.org/documentation/programming-guide/#user-initiated-checkpoint)
-- [Watermark Estimation](https://beam.apache.org/documentation/programming-guide/#watermark-estimation)
-- [Pipeline Drain/Truncation](https://beam.apache.org/documentation/programming-guide/#truncating-during-drain)
-- [Bundle Finalization](https://beam.apache.org/documentation/programming-guide/#bundle-finalization) (added in 2.39)
+- [Self Checkpointing](/documentation/programming-guide/#user-initiated-checkpoint)
+- [Watermark Estimation](/documentation/programming-guide/#watermark-estimation)
+- [Pipeline Drain/Truncation](/documentation/programming-guide/#truncating-during-drain)
+- [Bundle Finalization](/documentation/programming-guide/#bundle-finalization) (added in 2.39)
With all of these features, it is now possible to write your own streaming
pipeline source DoFns in Go without relying on cross-language transforms
from Java or Python. We encourage you to try out all of these new features
-in your streaming pipelines! The [programming guide](https://beam.apache.org/documentation/programming-guide/#splittable-dofns)
+in your streaming pipelines! The [programming guide](/documentation/programming-guide/#splittable-dofns)
has additional information on getting started with native Go streaming DoFns.
# Generic Registration (Make Your Pipelines 3x Faster)
@@ -61,7 +61,7 @@ gains, check out the [registration doc page](https://pkg.go.dev/github.com/apach
Moving forward, we remain focused on improving the streaming experience and
leveraging generics to improve the SDK. Specific improvements we are considering
-include adding [State & Timers](https://beam.apache.org/documentation/programming-guide/#state-and-timers)
+include adding [State & Timers](/documentation/programming-guide/#state-and-timers)
support, introducing a Go expansion service so that Go DoFns can be used in other
languages, and wrapping more Java and Python IOs so that they can be easily used
in Go. As always, please let us know what changes you would like to see by
diff --git a/website/www/site/content/en/blog/gsoc-19.md b/website/www/site/content/en/blog/gsoc-19.md
index 4c6c4a431f2c..5ec7801dfd08 100644
--- a/website/www/site/content/en/blog/gsoc-19.md
+++ b/website/www/site/content/en/blog/gsoc-19.md
@@ -49,8 +49,8 @@ I wanted to explore Data Engineering, so for GSoC, I wanted to work on a project
I had already read the [Streaming Systems book](http://streamingsystems.net/). So, I had an idea of the concepts that Beam is built on, but had never actually used Beam.
Before actually submitting a proposal, I went through a bunch of resources to make sure I had a concrete understanding of Beam.
I read the [Streaming 101](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101) and [Streaming 102](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102) blogs by Tyler Akidau. They are the perfect introduction to Beam’s unified model for Batch and Streaming.
-In addition, I watched all Beam talks on YouTube. You can find them on the [Beam Website](https://beam.apache.org/get-started/resources/videos-and-podcasts/).
-Beam has really good documentation. The [Programming Guide](https://beam.apache.org/documentation/programming-guide/) lays out all of Beam’s concepts really well. [Beam’s execution model](https://beam.apache.org/documentation/runtime/model) is also documented well and is a must-read to understand how Beam processes data.
+In addition, I watched all Beam talks on YouTube. You can find them on the [Beam Website](/get-started/resources/videos-and-podcasts/).
+Beam has really good documentation. The [Programming Guide](/documentation/programming-guide/) lays out all of Beam’s concepts really well. [Beam’s execution model](/documentation/runtime/model) is also documented well and is a must-read to understand how Beam processes data.
[waitingforcode.com](https://www.waitingforcode.com/apache-beam) also has good blog posts about Beam concepts.
To get a better sense of the Beam codebase, I played around with it and worked on some PRs to understand Beam better and got familiar with the test suite and workflows.
diff --git a/website/www/site/content/en/blog/hop-web-cloud.md b/website/www/site/content/en/blog/hop-web-cloud.md
index 34e1aabff780..8da6fc4a6731 100644
--- a/website/www/site/content/en/blog/hop-web-cloud.md
+++ b/website/www/site/content/en/blog/hop-web-cloud.md
@@ -22,7 +22,7 @@ limitations under the License.
Hop is a codeless visual development environment for Apache Beam pipelines that
can run jobs in any Beam runner, such as Dataflow, Flink or Spark. [In a
-previous post](https://beam.apache.org/blog/apache-hop-with-dataflow/), we
+previous post](/blog/apache-hop-with-dataflow/), we
introduced the desktop version of Apache Hop. Hop also has a web environment,
Hop Web, that you can run from a container, so you don't have to install
anything on your computer to use it.
@@ -234,7 +234,7 @@ access your Apache Hop instance.
You are now ready to use Apache Hop in a web browser!
You can try to replicate the example that was given [in a previous
-post](https://beam.apache.org/blog/apache-hop-with-dataflow/) using Hop web, or
+post](/blog/apache-hop-with-dataflow/) using Hop web, or
just try to launch any other project from the samples included with Hop:
![Sample projects in Hop](/images/blog/hop-web-cloud/hop-web-cloud-image5.png)
@@ -300,5 +300,5 @@ nothing, just your favourite web browser.
If you followed the instructions in this post, head over to the post [Running
Apache Hop visual pipelines with Google Cloud
-Dataflow](https://beam.apache.org/blog/apache-hop-with-dataflow/) to run a
+Dataflow](/blog/apache-hop-with-dataflow/) to run a
Dataflow pipeline right from your web browser!
diff --git a/website/www/site/content/en/blog/kafka-to-pubsub-example.md b/website/www/site/content/en/blog/kafka-to-pubsub-example.md
index 31972d95ffd4..df2439e2062d 100644
--- a/website/www/site/content/en/blog/kafka-to-pubsub-example.md
+++ b/website/www/site/content/en/blog/kafka-to-pubsub-example.md
@@ -31,8 +31,8 @@ simple yet powerful pipelines and also provides an out-of-the-box solution that
plug'n'play"_.
This end-to-end example is included
-in [Apache Beam release 2.27](https://beam.apache.org/blog/beam-2.27.0/)
-and can be downloaded [here](https://beam.apache.org/get-started/downloads/#2270-2020-12-22).
+in [Apache Beam release 2.27](/blog/beam-2.27.0/)
+and can be downloaded [here](/get-started/downloads/#2270-2020-12-22).
We hope you will find this example useful for setting up data pipelines between Kafka and Pub/Sub.
@@ -85,5 +85,5 @@ you more understanding on how pipelines work and look like. If you are already u
some code samples in it will be useful for your use cases.
Please
-[let us know](https://beam.apache.org/community/contact-us/) if you encounter any issues.
+[let us know](/community/contact-us/) if you encounter any issues.
diff --git a/website/www/site/content/en/blog/ml-resources.md b/website/www/site/content/en/blog/ml-resources.md
index e4e14b7ba482..3048fff87f1d 100644
--- a/website/www/site/content/en/blog/ml-resources.md
+++ b/website/www/site/content/en/blog/ml-resources.md
@@ -34,10 +34,10 @@ documentation and notebooks to make it easier to use these new features
and to show how Beam can be used to solve common Machine Learning problems.
We're now happy to present this new and improved Beam ML experience!
-To get started, we encourage you to visit Beam's new [AI/ML landing page](https://beam.apache.org/documentation/ml/overview/).
-We've got plenty of content on things like [multi-model pipelines](https://beam.apache.org/documentation/ml/multi-model-pipelines/),
-[performing inference with metrics](https://beam.apache.org/documentation/ml/runinference-metrics/),
-[online training](https://beam.apache.org/documentation/ml/online-clustering/), and much more.
+To get started, we encourage you to visit Beam's new [AI/ML landing page](/documentation/ml/overview/).
+We've got plenty of content on things like [multi-model pipelines](/documentation/ml/multi-model-pipelines/),
+[performing inference with metrics](/documentation/ml/runinference-metrics/),
+[online training](/documentation/ml/online-clustering/), and much more.
Seznam started migrating their key workloads to Apache Beam.
-They decided to merge the [Euphoria API](https://beam.apache.org/documentation/sdks/java/euphoria/)
+They decided to merge the [Euphoria API](/documentation/sdks/java/euphoria/)
as a high-level DSL for Apache Beam Java SDK.
This significant contribution to Apache Beam was a starting point for Seznam’s active participation in the community,
later presenting their unique experience and findings at [Beam Summit Europe 2019](https://www.youtube.com/watch?v=ZIFtmx8nBow)
@@ -121,8 +121,8 @@ Apache Beam enabled Seznam to execute batch and stream jobs much faster without
thus maximizing scalability, performance, and efficiency.
Apache Beam offers a variety of ways to distribute skewed data evenly.
-[Windowing](https://beam.apache.org/documentation/programming-guide/#windowing)
-for processing unbounded and [Partition](https://beam.apache.org/documentation/transforms/java/elementwise/partition/)
+[Windowing](/documentation/programming-guide/#windowing)
+for processing unbounded and [Partition](/documentation/transforms/java/elementwise/partition/)
for bounded data sets transform input into finite
collections of elements that can be reshuffled. Apache Beam provides a byte-based shuffle that can be
executed by Spark runner or Flink runner, without requiring Apache Spark or Apache Flink to deserialize the full key.
@@ -197,7 +197,7 @@ Apache Beam offered a unified model for Seznam’s stream and batch processing t
Apache Beam supported multiple runners, language SDKs, and built-in and custom pluggable I/O transforms,
thus eliminating the need to invest into the development and support of proprietary runners and solutions.
After evaluation, Seznam transitioned their workloads to Apache Beam and integrated
-[Euphoria API](https://beam.apache.org/documentation/sdks/java/euphoria/)
+[Euphoria API](/documentation/sdks/java/euphoria/)
(a fast prototyping framework developed by Seznam), contributing to the Apache Beam open source community.
The Apache Beam abstraction and execution model allowed Seznam to robustly scale their data processing.
diff --git a/website/www/site/content/en/case-studies/snowflake.md b/website/www/site/content/en/case-studies/snowflake.md
index ebcfee1678c1..e9140b3e7356 100644
--- a/website/www/site/content/en/case-studies/snowflake.md
+++ b/website/www/site/content/en/case-studies/snowflake.md
@@ -2,7 +2,7 @@
title: "Snowflake"
icon: /images/logos/powered-by/snowflake.png
hasNav: true
-hasLink: "https://beam.apache.org/documentation/io/built-in/snowflake/"
+hasLink: "/documentation/io/built-in/snowflake/"
---
# RunInference
-In Apache Beam 2.40.0, Beam introduced the RunInference API, which lets you deploy a machine learning model in a Beam pipeline. A `RunInference` transform performs inference on a `PCollection` of examples using a machine learning (ML) model. The transform outputs a PCollection that contains the input examples and output predictions. For more information, see RunInference [here](https://beam.apache.org/documentation/transforms/python/elementwise/runinference/). You can also find [inference examples on GitHub](https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/inference).
+In Apache Beam 2.40.0, Beam introduced the RunInference API, which lets you deploy a machine learning model in a Beam pipeline. A `RunInference` transform performs inference on a `PCollection` of examples using a machine learning (ML) model. The transform outputs a PCollection that contains the input examples and output predictions. For more information, see RunInference [here](/documentation/transforms/python/elementwise/runinference/). You can also find [inference examples on GitHub](https://github.com/apache/beam/tree/master/sdks/python/apache_beam/examples/inference).
## Using RunInference with very large models
diff --git a/website/www/site/content/en/documentation/ml/multi-model-pipelines.md b/website/www/site/content/en/documentation/ml/multi-model-pipelines.md
index 8bc9cd0d416b..569a51b8db55 100644
--- a/website/www/site/content/en/documentation/ml/multi-model-pipelines.md
+++ b/website/www/site/content/en/documentation/ml/multi-model-pipelines.md
@@ -23,7 +23,7 @@ into a second model. This page explains how multi-model pipelines work and gives
you need to know to build one.
Before reading this section, it is recommended that you become familiar with the information in
-the [Pipeline development lifecycle](https://beam.apache.org/documentation/pipelines/design-your-pipeline/).
+the [Pipeline development lifecycle](/documentation/pipelines/design-your-pipeline/).
## How to build a Multi-model pipeline with Beam
@@ -33,7 +33,7 @@ all of those steps together by encapsulating them in a single Apache Beam Direct
resilient and scalable end-to-end machine learning systems.
To deploy your machine learning model in an Apache Beam pipeline, use
-the [`RunInferenceAPI`](https://beam.apache.org/documentation/sdks/python-machine-learning/), which
+the [`RunInference` API](/documentation/sdks/python-machine-learning/), which
facilitates the integration of your model as a `PTransform` step in your DAG. Composing
multiple `RunInference` transforms within a single DAG makes it possible to build a pipeline that consists
of multiple ML models. In this way, Apache Beam supports the development of complex ML systems.
@@ -72,7 +72,7 @@ model_b_predictions = userset_b_traffic | RunInference()
Where `beam.partition` is used to split the data source into 50/50 split partitions. For more
information about data partitioning,
-see [Partition](https://beam.apache.org/documentation/transforms/python/elementwise/partition/).
+see [Partition](/documentation/transforms/python/elementwise/partition/).
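The 50/50 split above can be sketched with a plain partition function of the kind passed to `beam.Partition`. This is a hedged sketch, not the pipeline from the page: hash-based routing and the `split_fn` name are assumptions made for illustration.

```python
# Partition function sketch: route each element to one of
# num_partitions buckets (hash-based routing is an assumption here).
def split_fn(element, num_partitions):
    return hash(element) % num_partitions

# Simulate splitting a small user set into two partitions.
users = ["user_a", "user_b", "user_c", "user_d"]
partitions = [[], []]
for u in users:
    partitions[split_fn(u, 2)].append(u)
```

In a real pipeline the same function would be applied per element by `beam.Partition`, and each resulting partition would feed its own `RunInference` transform.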
### Cascade Pattern
diff --git a/website/www/site/content/en/documentation/ml/online-clustering.md b/website/www/site/content/en/documentation/ml/online-clustering.md
index f4c67bfb0e9d..fa63664fb671 100644
--- a/website/www/site/content/en/documentation/ml/online-clustering.md
+++ b/website/www/site/content/en/documentation/ml/online-clustering.md
@@ -140,7 +140,7 @@ The next sections examine three important pipeline steps:
1. Tokenize the text.
2. Feed the tokenized text to get embedding from a transformer-based language model.
-3. Perform clustering using [stateful processing](https://beam.apache.org/blog/stateful-processing/).
+3. Perform clustering using [stateful processing](/blog/stateful-processing/).
### Get Embedding from a Language Model
@@ -173,7 +173,7 @@ To make better clusters, after getting the embedding for each piece of Twitter t
### StatefulOnlineClustering
-Because the data is streaming, you need to use an iterative clustering algorithm, like BIRCH. And because the algorithm is iterative, you need a mechanism to store the previous state so that when Twitter text arrives, it can be updated. **Stateful processing** enables a `DoFn` to have persistent state, which can be read and written during the processing of each element. For more information about stateful processing, see [Stateful processing with Apache Beam](https://beam.apache.org/blog/stateful-processing/).
+Because the data is streaming, you need to use an iterative clustering algorithm, like BIRCH. And because the algorithm is iterative, you need a mechanism to store the previous state so that when Twitter text arrives, it can be updated. **Stateful processing** enables a `DoFn` to have persistent state, which can be read and written during the processing of each element. For more information about stateful processing, see [Stateful processing with Apache Beam](/blog/stateful-processing/).
In this example, every time a new message is read from Pub/Sub, you retrieve the existing state of the clustering model, update it, and write it back to the state.
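The read-update-write cycle described above can be shown as a minimal stand-in, using a plain dict where a Beam `DoFn` would use per-key state. This is illustrative only and does not use the actual Beam state API.

```python
# Simulated per-key state store (a stateful DoFn would use Beam's
# state API instead of a module-level dict).
state = {}

def process_message(key, embedding):
    model = state.get(key, [])   # read the existing clustering state
    model.append(embedding)      # update the "model" with the new point
    state[key] = model           # write the updated state back
    return len(model)            # number of points clustered so far
```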
diff --git a/website/www/site/content/en/documentation/ml/orchestration.md b/website/www/site/content/en/documentation/ml/orchestration.md
index c1f47320d6ae..6411b0f72442 100644
--- a/website/www/site/content/en/documentation/ml/orchestration.md
+++ b/website/www/site/content/en/documentation/ml/orchestration.md
@@ -26,7 +26,7 @@ Apache Beam is an open source, unified model for defining both batch and streami
![A standalone beam pipeline](/images/standalone-beam-pipeline.svg)
-Defining a pipeline and the corresponding DAG does not mean that data starts flowing through the pipeline. To run the pipeline, you need to deploy it to one of the [supported Beam runners](https://beam.apache.org/documentation/runners/capability-matrix/). These distributed processing backends include Apache Flink, Apache Spark, and Google Cloud Dataflow. To run the pipeline locally on your machine for development and debugging purposes, a [Direct Runner](https://beam.apache.org/documentation/runners/direct/) is also provided. View the [runner capability matrix](https://beam.apache.org/documentation/runners/capability-matrix/) to verify that your chosen runner supports the data processing steps defined in your pipeline, especially when using the Direct Runner.
+Defining a pipeline and the corresponding DAG does not mean that data starts flowing through the pipeline. To run the pipeline, you need to deploy it to one of the [supported Beam runners](/documentation/runners/capability-matrix/). These distributed processing backends include Apache Flink, Apache Spark, and Google Cloud Dataflow. To run the pipeline locally on your machine for development and debugging purposes, a [Direct Runner](/documentation/runners/direct/) is also provided. View the [runner capability matrix](/documentation/runners/capability-matrix/) to verify that your chosen runner supports the data processing steps defined in your pipeline, especially when using the Direct Runner.
## Orchestrating frameworks
diff --git a/website/www/site/content/en/documentation/ml/overview.md b/website/www/site/content/en/documentation/ml/overview.md
index d2737f5fe383..ed77da115c2f 100644
--- a/website/www/site/content/en/documentation/ml/overview.md
+++ b/website/www/site/content/en/documentation/ml/overview.md
@@ -43,9 +43,9 @@ You can use Apache Beam for data validation, data preprocessing, and model deplo
## Data processing
-You can use Apache Beam for data validation and preprocessing by setting up data pipelines that transform your data and output metrics computed from your data. Beam has a rich set of [I/O connectors](https://beam.apache.org/documentation/io/built-in/) for ingesting and writing data, which allows you to integrate it with your existing file system, database, or messaging queue.
+You can use Apache Beam for data validation and preprocessing by setting up data pipelines that transform your data and output metrics computed from your data. Beam has a rich set of [I/O connectors](/documentation/io/built-in/) for ingesting and writing data, which allows you to integrate it with your existing file system, database, or messaging queue.
-When developing your ML model, you can also first explore your data with the [Beam DataFrame API](https://beam.apache.org/documentation/dsls/dataframes/overview/). The DataFrom API lets you identify and implement the required preprocessing steps, making it easier for you to move your pipeline to production.
+When developing your ML model, you can also first explore your data with the [Beam DataFrame API](/documentation/dsls/dataframes/overview/). The DataFrame API lets you identify and implement the required preprocessing steps, making it easier for you to move your pipeline to production.
Steps executed during preprocessing often also need to be applied before running inference, in which case you can use the same Beam implementation twice. Lastly, when you need to do postprocessing after running inference, Apache Beam allows you to incorporate the postprocessing into your model inference pipeline.
@@ -58,7 +58,7 @@ Beam provides different ways to implement inference as part of your pipeline. Yo
### RunInference
-The recommended way to implement inference is by using the [RunInference API](https://beam.apache.org/documentation/sdks/python-machine-learning/). RunInference takes advantage of existing Apache Beam concepts, such as the `BatchElements` transform and the `Shared` class, to enable you to use models in your pipelines to create transforms optimized for machine learning inferences. The ability to create arbitrarily complex workflow graphs also allows you to build multi-model pipelines.
+The recommended way to implement inference is by using the [RunInference API](/documentation/sdks/python-machine-learning/). RunInference takes advantage of existing Apache Beam concepts, such as the `BatchElements` transform and the `Shared` class, to enable you to use models in your pipelines to create transforms optimized for machine learning inferences. The ability to create arbitrarily complex workflow graphs also allows you to build multi-model pipelines.
You can integrate your model in your pipeline by using the corresponding model handlers. A `ModelHandler` is an object that wraps the underlying model and allows you to configure its parameters. Model handlers are available for PyTorch, scikit-learn, and TensorFlow. Examples of how to use RunInference for PyTorch, scikit-learn, and TensorFlow are shown in this [notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb).
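The `ModelHandler` idea can be sketched as a small wrapper object. The class and method names below are assumptions made for illustration; they are not Beam's real PyTorch, scikit-learn, or TensorFlow handlers.

```python
# A trivial "model" standing in for a real ML model.
class DoublingModel:
    def predict(self, x):
        return x * 2

# Illustrative ModelHandler-style wrapper: load the model once,
# then run inference on batches of elements.
class SimpleModelHandler:
    def load_model(self):
        return DoublingModel()

    def run_inference(self, batch, model):
        return [model.predict(x) for x in batch]

handler = SimpleModelHandler()
model = handler.load_model()
predictions = handler.run_inference([1, 2, 3], model)
```

Loading the model once and sharing it across batches is the key design point: it avoids re-reading large model weights for every element.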
diff --git a/website/www/site/content/en/documentation/ml/runinference-metrics.md b/website/www/site/content/en/documentation/ml/runinference-metrics.md
index b31d1bb25705..8bf4d713c3ad 100644
--- a/website/www/site/content/en/documentation/ml/runinference-metrics.md
+++ b/website/www/site/content/en/documentation/ml/runinference-metrics.md
@@ -17,7 +17,7 @@ limitations under the License.
# RunInference Metrics
-This example demonstrates and explains different metrics that are available when using the [RunInference](https://beam.apache.org/documentation/transforms/python/elementwise/runinference/) transform to perform inference using a machine learning model. The example uses a pipeline that reads a list of sentences, tokenizes the text, and uses the transformer-based model `distilbert-base-uncased-finetuned-sst-2-english` with `RunInference` to classify the pieces of text into two classes.
+This example demonstrates and explains different metrics that are available when using the [RunInference](/documentation/transforms/python/elementwise/runinference/) transform to perform inference using a machine learning model. The example uses a pipeline that reads a list of sentences, tokenizes the text, and uses the transformer-based model `distilbert-base-uncased-finetuned-sst-2-english` with `RunInference` to classify the pieces of text into two classes.
When you run the pipeline with the Dataflow runner, different RunInference metrics are available with CPU and with GPU. This example demonstrates both types of metrics.
diff --git a/website/www/site/content/en/documentation/patterns/bqml.md b/website/www/site/content/en/documentation/patterns/bqml.md
index e56802fb6400..1ec70ab23385 100644
--- a/website/www/site/content/en/documentation/patterns/bqml.md
+++ b/website/www/site/content/en/documentation/patterns/bqml.md
@@ -60,7 +60,7 @@ bq extract -m bqml_tutorial.sample_model gs://some/gcs/path
## Create an Apache Beam transform that uses your BigQuery ML model
-In this section we will construct an Apache Beam pipeline that will use the BigQuery ML model we just created and exported. The model can be served using Google Cloud AI Platform Prediction - for this please refer to the [AI Platform patterns](https://beam.apache.org/documentation/patterns/ai-platform/). In this case, we'll be illustrating how to use the tfx_bsl library to do local predictions (on your Apache Beam workers).
+In this section we will construct an Apache Beam pipeline that will use the BigQuery ML model we just created and exported. The model can be served using Google Cloud AI Platform Prediction - for this please refer to the [AI Platform patterns](/documentation/patterns/ai-platform/). In this case, we'll be illustrating how to use the tfx_bsl library to do local predictions (on your Apache Beam workers).
First, the model needs to be downloaded to a local directory where you will be developing the rest of your pipeline (e.g. to `serving_dir/sample_model/1`).
diff --git a/website/www/site/content/en/documentation/patterns/grouping-elements-for-efficient-external-service-calls.md b/website/www/site/content/en/documentation/patterns/grouping-elements-for-efficient-external-service-calls.md
index b0081ee62012..2c8a99b5cca7 100644
--- a/website/www/site/content/en/documentation/patterns/grouping-elements-for-efficient-external-service-calls.md
+++ b/website/www/site/content/en/documentation/patterns/grouping-elements-for-efficient-external-service-calls.md
@@ -26,7 +26,7 @@ State is kept on a per-key and per-windows basis, and as such, the input to your
Examples of use cases are: assigning a unique ID to each element, joining streams of data in 'more exotic' ways, or batching up API calls to external services. In this section we'll go over the last one in particular.
-Make sure to check the [docs](https://beam.apache.org/documentation/programming-guide/#state-and-timers) for deeper understanding on state and timers.
+Make sure to check the [docs](/documentation/programming-guide/#state-and-timers) for a deeper understanding of state and timers.
The `GroupIntoBatches` transform uses state and timers under the hood to allow the user to exercise tight control over the following parameters:
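The core batching behavior can be sketched in plain Python. This is a simplified stand-in for illustration only: the generator below collects elements into lists of at most `batch_size`, while the real transform additionally handles per-key/per-window grouping and timer-based flushing.

```python
# Collect elements into batches of at most batch_size, flushing any
# final partial batch (state/timer handling omitted for brevity).
def group_into_batches(elements, batch_size):
    batch = []
    for e in elements:
        batch.append(e)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

batches = list(group_into_batches(range(5), 2))
```

Batching like this is what makes external API calls efficient: one request per batch instead of one per element.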
diff --git a/website/www/site/content/en/documentation/programming-guide.md b/website/www/site/content/en/documentation/programming-guide.md
index f6585127a8a5..c67569c41221 100644
--- a/website/www/site/content/en/documentation/programming-guide.md
+++ b/website/www/site/content/en/documentation/programming-guide.md
@@ -2565,7 +2565,7 @@ Timers and States are explained in more detail in the
{{< paragraph class="language-typescript" >}}
**Timer and State:**
This feature isn't yet implemented in the Typescript SDK,
-but we welcome [contributions](https://beam.apache.org/contribute/).
+but we welcome [contributions](/contribute/).
In the meantime, Typescript pipelines wishing to use state and timers can do so
using [cross-language transforms](#use-x-lang-transforms).
{{< /paragraph >}}
@@ -7298,7 +7298,7 @@ You can use the Java class directly from your Python pipeline using a stub trans
Constructor and method parameter types are mapped between Python and Java using a Beam schema. The schema is auto-generated using the object types
provided on the Python side. If the Java class constructor method or builder method accepts any complex object types, make sure that the Beam schema
for these objects is registered and available for the Java expansion service. If a schema has not been registered, the Java expansion service will
-try to register a schema using [JavaFieldSchema](https://beam.apache.org/documentation/programming-guide/#creating-schemas). In Python, arbitrary objects
+try to register a schema using [JavaFieldSchema](/documentation/programming-guide/#creating-schemas). In Python, arbitrary objects
can be represented using `NamedTuple`s, which will be represented as Beam rows in the schema. Here is a Python stub transform that represents the above
mentioned Java transform:
@@ -7503,7 +7503,7 @@ An expansion service can be used with multiple transforms in the same pipeline.
Perform the following steps to start up the default Python expansion service directly:
-1. Create a virtual environment and [install the Apache Beam SDK](https://beam.apache.org/get-started/quickstart-py/).
+1. Create a virtual environment and [install the Apache Beam SDK](/get-started/quickstart-py/).
2. Start the Python SDK’s expansion service with a specified port.
{{< highlight >}}
diff --git a/website/www/site/content/en/documentation/runners/direct.md b/website/www/site/content/en/documentation/runners/direct.md
index ef5c43c42ec3..26e23b4bd09f 100644
--- a/website/www/site/content/en/documentation/runners/direct.md
+++ b/website/www/site/content/en/documentation/runners/direct.md
@@ -81,7 +81,7 @@ If your pipeline uses an unbounded data source or sink, you must set the `stream
### Parallel execution
{{< paragraph class="language-py" >}}
-Python [FnApiRunner](https://beam.apache.org/contribute/runner-guide/#the-fn-api) supports multi-threading and multi-processing mode.
+Python [FnApiRunner](/contribute/runner-guide/#the-fn-api) supports multi-threading and multi-processing mode.
{{< /paragraph >}}
#### Setting parallelism
diff --git a/website/www/site/content/en/documentation/runners/spark.md b/website/www/site/content/en/documentation/runners/spark.md
index b7283f0cbe1b..15cf6cf5ac7c 100644
--- a/website/www/site/content/en/documentation/runners/spark.md
+++ b/website/www/site/content/en/documentation/runners/spark.md
@@ -243,7 +243,7 @@ See [here](/roadmap/portability/#sdk-harness-config) for details.)
### Running on Dataproc cluster (YARN backed)
-To run Beam jobs written in Python, Go, and other supported languages, you can use the `SparkRunner` and `PortableRunner` as described on the Beam's [Spark Runner](https://beam.apache.org/documentation/runners/spark/) page (also see [Portability Framework Roadmap](https://beam.apache.org/roadmap/portability/)).
+To run Beam jobs written in Python, Go, and other supported languages, you can use the `SparkRunner` and `PortableRunner` as described on Beam's [Spark Runner](/documentation/runners/spark/) page (also see [Portability Framework Roadmap](/roadmap/portability/)).
The following example runs a portable Beam job in Python from the Dataproc cluster's master node, backed by YARN.
diff --git a/website/www/site/content/en/documentation/runtime/model.md b/website/www/site/content/en/documentation/runtime/model.md
index 5078b36ede73..6ed57b64cdc7 100644
--- a/website/www/site/content/en/documentation/runtime/model.md
+++ b/website/www/site/content/en/documentation/runtime/model.md
@@ -50,7 +50,7 @@ ways, such as:
This may allow the runner to avoid serializing elements; instead, the runner
can just pass the elements in memory. This is done as part of an
optimization that is known as
- [fusion](https://beam.apache.org/documentation/glossary/#fusion).
+ [fusion](/documentation/glossary/#fusion).
Some situations where the runner may serialize and persist elements are:
diff --git a/website/www/site/content/en/documentation/sdks/java-multi-language-pipelines.md b/website/www/site/content/en/documentation/sdks/java-multi-language-pipelines.md
index e84dcfdb849b..1ce3f60060bb 100644
--- a/website/www/site/content/en/documentation/sdks/java-multi-language-pipelines.md
+++ b/website/www/site/content/en/documentation/sdks/java-multi-language-pipelines.md
@@ -142,7 +142,7 @@ cases, [start the expansion service](#advanced-start-an-expansion-service)
before running your pipeline.
Before running the pipeline, make sure to perform the
-[runner specific setup](https://beam.apache.org/get-started/quickstart-java/#run-a-pipeline) for your selected Beam runner.
+[runner-specific setup](/get-started/quickstart-java/#run-a-pipeline) for your selected Beam runner.
### Run with Dataflow runner using a Maven Archetype (Beam 2.43.0 and later)
@@ -260,7 +260,7 @@ For example, to start the standard expansion service for a Python transform,
follow these steps:
1. Activate a new virtual environment following
-[these instructions](https://beam.apache.org/get-started/quickstart-py/#create-and-activate-a-virtual-environment).
+[these instructions](/get-started/quickstart-py/#create-and-activate-a-virtual-environment).
2. Install Apache Beam with `gcp` and `dataframe` packages.
diff --git a/website/www/site/content/en/documentation/sdks/python-machine-learning.md b/website/www/site/content/en/documentation/sdks/python-machine-learning.md
index 98dc0c6ca839..e24abdf7e0cc 100644
--- a/website/www/site/content/en/documentation/sdks/python-machine-learning.md
+++ b/website/www/site/content/en/documentation/sdks/python-machine-learning.md
@@ -157,7 +157,7 @@ with pipeline as p:
accelerator="type:nvidia-tesla-k80;count:1;install-nvidia-driver")
```
-For more information on resource hints, see [Resource hints](https://beam.apache.org/documentation/runtime/resource-hints/).
+For more information on resource hints, see [Resource hints](/documentation/runtime/resource-hints/).
### Use a keyed ModelHandler
@@ -219,7 +219,7 @@ For detailed instructions explaining how to build and run a pipeline that uses M
## Beam Java SDK support
-The RunInference API is available with the Beam Java SDK versions 2.41.0 and later through Apache Beam's [Multi-language Pipelines framework](https://beam.apache.org/documentation/programming-guide/#multi-language-pipelines). For information about the Java wrapper transform, see [RunInference.java](https://github.com/apache/beam/blob/master/sdks/java/extensions/python/src/main/java/org/apache/beam/sdk/extensions/python/transforms/RunInference.java). To try it out, see the [Java Sklearn Mnist Classification example](https://github.com/apache/beam/tree/master/examples/multi-language).
+The RunInference API is available with the Beam Java SDK versions 2.41.0 and later through Apache Beam's [Multi-language Pipelines framework](/documentation/programming-guide/#multi-language-pipelines). For information about the Java wrapper transform, see [RunInference.java](https://github.com/apache/beam/blob/master/sdks/java/extensions/python/src/main/java/org/apache/beam/sdk/extensions/python/transforms/RunInference.java). To try it out, see the [Java Sklearn Mnist Classification example](https://github.com/apache/beam/tree/master/examples/multi-language).
## Troubleshooting
diff --git a/website/www/site/content/en/documentation/sdks/python-pipeline-dependencies.md b/website/www/site/content/en/documentation/sdks/python-pipeline-dependencies.md
index 330a8af8e449..efef407981cf 100644
--- a/website/www/site/content/en/documentation/sdks/python-pipeline-dependencies.md
+++ b/website/www/site/content/en/documentation/sdks/python-pipeline-dependencies.md
@@ -48,7 +48,7 @@ If your pipeline uses public packages from the [Python Package Index](https://py
## Custom Containers {#custom-containers}
-You can pass a [container](https://hub.docker.com/search?q=apache%2Fbeam&type=image) image with all the dependencies that are needed for the pipeline instead of `requirements.txt`. [Follow the instructions on how to run pipeline with Custom Container images](https://beam.apache.org/documentation/runtime/environments/#running-pipelines).
+You can pass a [container](https://hub.docker.com/search?q=apache%2Fbeam&type=image) image with all the dependencies that are needed for the pipeline instead of `requirements.txt`. [Follow the instructions on how to run a pipeline with custom container images](/documentation/runtime/environments/#running-pipelines).
1. If you are using a custom container image, we recommend that you install the dependencies from the `--requirements_file` directly into your image at build time. In this case, you do not need to pass the `--requirements_file` option at runtime, which will reduce the pipeline startup time.
diff --git a/website/www/site/content/en/documentation/sdks/python-streaming.md b/website/www/site/content/en/documentation/sdks/python-streaming.md
index d2d3e13ca11b..2d0bdfa9500b 100644
--- a/website/www/site/content/en/documentation/sdks/python-streaming.md
+++ b/website/www/site/content/en/documentation/sdks/python-streaming.md
@@ -127,11 +127,11 @@ python -m apache_beam.examples.streaming_wordcount \
{{< /runner >}}
{{< runner flink >}}
-See https://beam.apache.org/documentation/runners/flink/ for more information.
+See /documentation/runners/flink/ for more information.
{{< /runner >}}
{{< runner spark >}}
-See https://beam.apache.org/documentation/runners/spark/ for more information.
+See /documentation/runners/spark/ for more information.
{{< /runner >}}
{{< runner dataflow >}}
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/approximatequantiles.md b/website/www/site/content/en/documentation/transforms/java/aggregation/approximatequantiles.md
index 6ab1d5beeccb..3f543a8dea09 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/approximatequantiles.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/approximatequantiles.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/approximateunique.md b/website/www/site/content/en/documentation/transforms/java/aggregation/approximateunique.md
index a5e9b59318ab..c0bf79aa3d61 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/approximateunique.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/approximateunique.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/cogroupbykey.md b/website/www/site/content/en/documentation/transforms/java/aggregation/cogroupbykey.md
index 4aded7986f4c..90c0984f3df2 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/cogroupbykey.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/cogroupbykey.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/combine.md b/website/www/site/content/en/documentation/transforms/java/aggregation/combine.md
index 6daf89a20c61..f40c694692bc 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/combine.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/combine.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/combinewithcontext.md b/website/www/site/content/en/documentation/transforms/java/aggregation/combinewithcontext.md
index 573a66e1f3a0..6e78770a3a47 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/combinewithcontext.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/combinewithcontext.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/count.md b/website/www/site/content/en/documentation/transforms/java/aggregation/count.md
index fdb855d92fdf..0b84ead8391a 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/count.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/count.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/distinct.md b/website/www/site/content/en/documentation/transforms/java/aggregation/distinct.md
index 3a7e6dbf0112..7c5cbd2e9316 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/distinct.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/distinct.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/groupbykey.md b/website/www/site/content/en/documentation/transforms/java/aggregation/groupbykey.md
index 6eb389586a39..c9986f72e99f 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/groupbykey.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/groupbykey.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/groupintobatches.md b/website/www/site/content/en/documentation/transforms/java/aggregation/groupintobatches.md
index e80682b48bc4..6d1963fd0760 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/groupintobatches.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/groupintobatches.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/hllcount.md b/website/www/site/content/en/documentation/transforms/java/aggregation/hllcount.md
index 1f1ec6793d81..89ed66415dce 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/hllcount.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/hllcount.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/latest.md b/website/www/site/content/en/documentation/transforms/java/aggregation/latest.md
index 7476c0c591d8..454d39bf14e4 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/latest.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/latest.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/max.md b/website/www/site/content/en/documentation/transforms/java/aggregation/max.md
index 9b5cff487042..edc07d2edb55 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/max.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/max.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/mean.md b/website/www/site/content/en/documentation/transforms/java/aggregation/mean.md
index d23aecc52c68..88f90d585d22 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/mean.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/mean.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/min.md b/website/www/site/content/en/documentation/transforms/java/aggregation/min.md
index 71490e42e73a..e5ecf67cd5d0 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/min.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/min.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/sample.md b/website/www/site/content/en/documentation/transforms/java/aggregation/sample.md
index 79eb73d0fd36..e3328af66f06 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/sample.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/sample.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/sum.md b/website/www/site/content/en/documentation/transforms/java/aggregation/sum.md
index 72d807165919..2c49cbe4c039 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/sum.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/sum.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/aggregation/top.md b/website/www/site/content/en/documentation/transforms/java/aggregation/top.md
index dbf8fe26a724..018154437681 100644
--- a/website/www/site/content/en/documentation/transforms/java/aggregation/top.md
+++ b/website/www/site/content/en/documentation/transforms/java/aggregation/top.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/filter.md b/website/www/site/content/en/documentation/transforms/java/elementwise/filter.md
index 9735c7b78a26..8bdce38b05ac 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/filter.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/filter.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/flatmapelements.md b/website/www/site/content/en/documentation/transforms/java/elementwise/flatmapelements.md
index 3b0e2fca7bb0..bfbc3e1f88b0 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/flatmapelements.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/flatmapelements.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/keys.md b/website/www/site/content/en/documentation/transforms/java/elementwise/keys.md
index f194c069c0bd..c62efd30abb7 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/keys.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/keys.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/kvswap.md b/website/www/site/content/en/documentation/transforms/java/elementwise/kvswap.md
index 5d028bc68ec5..b0f8b5eb4b57 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/kvswap.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/kvswap.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/mapelements.md b/website/www/site/content/en/documentation/transforms/java/elementwise/mapelements.md
index b0505e091dd5..5b900baf9690 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/mapelements.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/mapelements.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/pardo.md b/website/www/site/content/en/documentation/transforms/java/elementwise/pardo.md
index 905f17a7f522..05b1990ffdef 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/pardo.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/pardo.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/partition.md b/website/www/site/content/en/documentation/transforms/java/elementwise/partition.md
index 5234dc97781c..66c27019b5fd 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/partition.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/partition.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/regex.md b/website/www/site/content/en/documentation/transforms/java/elementwise/regex.md
index ff554db26446..60545f26e597 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/regex.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/regex.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/reify.md b/website/www/site/content/en/documentation/transforms/java/elementwise/reify.md
index 706dc7a1d7ef..4c708f8eebf8 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/reify.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/reify.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/tostring.md b/website/www/site/content/en/documentation/transforms/java/elementwise/tostring.md
index 33edf7d005d7..fd5329ff1c81 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/tostring.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/tostring.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/values.md b/website/www/site/content/en/documentation/transforms/java/elementwise/values.md
index 6dbd654c9d88..5e6f1cb0975f 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/values.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/values.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/withkeys.md b/website/www/site/content/en/documentation/transforms/java/elementwise/withkeys.md
index 1ecbf0fa6f32..c6281b6ddf93 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/withkeys.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/withkeys.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/elementwise/withtimestamps.md b/website/www/site/content/en/documentation/transforms/java/elementwise/withtimestamps.md
index 37606a72a2fc..b2595d8bc36a 100644
--- a/website/www/site/content/en/documentation/transforms/java/elementwise/withtimestamps.md
+++ b/website/www/site/content/en/documentation/transforms/java/elementwise/withtimestamps.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/other/create.md b/website/www/site/content/en/documentation/transforms/java/other/create.md
index c318ae127699..13bdd0789b36 100644
--- a/website/www/site/content/en/documentation/transforms/java/other/create.md
+++ b/website/www/site/content/en/documentation/transforms/java/other/create.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/other/flatten.md b/website/www/site/content/en/documentation/transforms/java/other/flatten.md
index d99e5b9cf61d..ffb2d0573d54 100644
--- a/website/www/site/content/en/documentation/transforms/java/other/flatten.md
+++ b/website/www/site/content/en/documentation/transforms/java/other/flatten.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/other/passert.md b/website/www/site/content/en/documentation/transforms/java/other/passert.md
index 0830657d54fd..95c62f213b20 100644
--- a/website/www/site/content/en/documentation/transforms/java/other/passert.md
+++ b/website/www/site/content/en/documentation/transforms/java/other/passert.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/other/view.md b/website/www/site/content/en/documentation/transforms/java/other/view.md
index fc70fba297d9..a4a31efb8f56 100644
--- a/website/www/site/content/en/documentation/transforms/java/other/view.md
+++ b/website/www/site/content/en/documentation/transforms/java/other/view.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/java/other/window.md b/website/www/site/content/en/documentation/transforms/java/other/window.md
index c96275c62263..439f484697f8 100644
--- a/website/www/site/content/en/documentation/transforms/java/other/window.md
+++ b/website/www/site/content/en/documentation/transforms/java/other/window.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Javadoc
diff --git a/website/www/site/content/en/documentation/transforms/python/elementwise/pardo.md b/website/www/site/content/en/documentation/transforms/python/elementwise/pardo.md
index 19157d2c70ad..9c54a83dd24e 100644
--- a/website/www/site/content/en/documentation/transforms/python/elementwise/pardo.md
+++ b/website/www/site/content/en/documentation/transforms/python/elementwise/pardo.md
@@ -86,7 +86,7 @@ A [`DoFn`](https://beam.apache.org/releases/pydoc/current/apache_beam.transforms
can be customized with a number of methods that can help create more complex behaviors.
You can customize what a worker does when it starts and shuts down with `setup` and `teardown`.
You can also customize what to do when a
-[*bundle of elements*](https://beam.apache.org/documentation/runtime/model/#bundling-and-persistence)
+[*bundle of elements*](/documentation/runtime/model/#bundling-and-persistence)
starts and finishes with `start_bundle` and `finish_bundle`.
* [`DoFn.setup()`](https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.DoFn.setup):
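The lifecycle this hunk documents (`setup`, then per-bundle `start_bundle`/`process`/`finish_bundle`, then `teardown`) can be sketched in plain Python. This is not Beam's implementation and does not import `apache_beam`; the class and driver below are hypothetical stand-ins that only mirror the documented call order:

```python
# Plain-Python sketch of the DoFn lifecycle described above.
# Call order mirrored: setup -> (start_bundle -> process* -> finish_bundle)* -> teardown.

class LoggingDoFn:
    """Toy DoFn-like class that records which lifecycle hook ran."""

    def __init__(self):
        self.calls = []

    def setup(self):             # once per DoFn instance (worker start)
        self.calls.append("setup")

    def start_bundle(self):      # once per bundle of elements
        self.calls.append("start_bundle")

    def process(self, element):  # once per element
        self.calls.append(f"process({element})")
        yield element * 2

    def finish_bundle(self):     # once per bundle
        self.calls.append("finish_bundle")

    def teardown(self):          # once per DoFn instance (worker shutdown)
        self.calls.append("teardown")


def run_bundles(fn, bundles):
    """Drive the DoFn through the documented lifecycle sequence."""
    fn.setup()
    results = []
    for bundle in bundles:
        fn.start_bundle()
        for element in bundle:
            results.extend(fn.process(element))
        fn.finish_bundle()
    fn.teardown()
    return results
```

For two bundles `[[1, 2], [3]]`, `run_bundles` returns `[2, 4, 6]` and the recorded calls begin with `setup`, contain one `start_bundle`/`finish_bundle` pair per bundle, and end with `teardown`.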
diff --git a/website/www/site/content/en/documentation/transforms/python/other/create.md b/website/www/site/content/en/documentation/transforms/python/other/create.md
index 53a6f91f839e..0ad28d022dc4 100644
--- a/website/www/site/content/en/documentation/transforms/python/other/create.md
+++ b/website/www/site/content/en/documentation/transforms/python/other/create.md
@@ -18,7 +18,7 @@ limitations under the License.
-
Pydoc
diff --git a/website/www/site/content/en/documentation/transforms/python/other/flatten.md b/website/www/site/content/en/documentation/transforms/python/other/flatten.md
index a150841c5005..d76b5b817ec9 100644
--- a/website/www/site/content/en/documentation/transforms/python/other/flatten.md
+++ b/website/www/site/content/en/documentation/transforms/python/other/flatten.md
@@ -19,7 +19,7 @@ limitations under the License.
-
Pydoc
diff --git a/website/www/site/content/en/documentation/transforms/python/other/reshuffle.md b/website/www/site/content/en/documentation/transforms/python/other/reshuffle.md
index d2264f039b45..dd8c1f311406 100644
--- a/website/www/site/content/en/documentation/transforms/python/other/reshuffle.md
+++ b/website/www/site/content/en/documentation/transforms/python/other/reshuffle.md
@@ -19,7 +19,7 @@ limitations under the License.
-
Pydoc
diff --git a/website/www/site/content/en/documentation/transforms/python/other/windowinto.md b/website/www/site/content/en/documentation/transforms/python/other/windowinto.md
index 035e34ad4384..121d5e4551ae 100644
--- a/website/www/site/content/en/documentation/transforms/python/other/windowinto.md
+++ b/website/www/site/content/en/documentation/transforms/python/other/windowinto.md
@@ -19,7 +19,7 @@ limitations under the License.
diff --git a/website/www/site/content/en/get-started/from-spark.md b/website/www/site/content/en/get-started/from-spark.md
index b1659b02cfca..26a615304b3c 100644
--- a/website/www/site/content/en/get-started/from-spark.md
+++ b/website/www/site/content/en/get-started/from-spark.md
@@ -87,7 +87,7 @@ closed.
> it implicitly calls `pipeline.run()`, which triggers the computation to happen.
The pipeline is then sent to your
-[runner of choice](https://beam.apache.org/documentation/runners/capability-matrix/)
+[runner of choice](/documentation/runners/capability-matrix/)
and it processes the data.
> ℹ️ The pipeline can run locally with the _DirectRunner_,
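The behavior this hunk describes — building a pipeline records work, and leaving the `with` block implicitly calls `run()`, which hands the pipeline to a runner — can be sketched without Beam. `SketchPipeline` below is a hypothetical stand-in, not `beam.Pipeline`:

```python
# Plain-Python sketch (no apache_beam dependency) of the documented
# behavior: nothing executes while the pipeline is being built; a clean
# exit from the `with` block implicitly calls run().

class SketchPipeline:
    """Toy stand-in for beam.Pipeline, for illustration only."""

    def __init__(self):
        self.transforms = []
        self.ran = False
        self.result = None

    def apply(self, fn):
        # Building the pipeline only records the transform.
        self.transforms.append(fn)
        return self

    def run(self):
        # In Beam, run() sends the pipeline to the chosen runner.
        self.ran = True
        data = [1, 2, 3]
        for fn in self.transforms:
            data = [fn(x) for x in data]
        return data

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.result = self.run()  # implicit run() on clean exit
```

Inside the `with` block, `p.apply(...)` only records transforms (`p.ran` stays `False`); only after the block exits is `p.ran` `True` and `p.result` populated.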
diff --git a/website/www/site/content/en/get-started/quickstart/java.md b/website/www/site/content/en/get-started/quickstart/java.md
index 101586d8ca5f..9fcf1fa6e7ae 100644
--- a/website/www/site/content/en/get-started/quickstart/java.md
+++ b/website/www/site/content/en/get-started/quickstart/java.md
@@ -168,13 +168,13 @@ process any data yet. To process data, you run the pipeline:
pipeline.run().waitUntilFinish();
```
-A Beam [runner](https://beam.apache.org/documentation/basics/#runner) runs a
+A Beam [runner](/documentation/basics/#runner) runs a
Beam pipeline on a specific platform. This example uses the
[Direct Runner](https://beam.apache.org/releases/javadoc/2.3.0/org/apache/beam/runners/direct/DirectRunner.html),
which is the default runner if you don't specify one. The Direct Runner runs
the pipeline locally on your machine. It is meant for testing and development,
rather than being optimized for efficiency. For more information, see
-[Using the Direct Runner](https://beam.apache.org/documentation/runners/direct/).
+[Using the Direct Runner](/documentation/runners/direct/).
For production workloads, you typically use a distributed runner that runs the
pipeline on a big data processing system such as Apache Flink, Apache Spark, or
diff --git a/website/www/site/content/en/get-started/resources/learning-resources.md b/website/www/site/content/en/get-started/resources/learning-resources.md
index 689da7d60ddf..e435a07b2874 100644
--- a/website/www/site/content/en/get-started/resources/learning-resources.md
+++ b/website/www/site/content/en/get-started/resources/learning-resources.md
@@ -30,23 +30,23 @@ If you have additional material that you would like to see here, please let us k
### Quickstart
-* **[Java Quickstart](https://beam.apache.org/get-started/quickstart-java/)** - How to set up and run a WordCount pipeline on the Java SDK.
-* **[Python Quickstart](https://beam.apache.org/get-started/quickstart-py/)** - How to set up and run a WordCount pipeline on the Python SDK.
-* **[Go Quickstart](https://beam.apache.org/get-started/quickstart-go/)** - How to set up and run a WordCount pipeline on the Go SDK.
+* **[Java Quickstart](/get-started/quickstart-java/)** - How to set up and run a WordCount pipeline on the Java SDK.
+* **[Python Quickstart](/get-started/quickstart-py/)** - How to set up and run a WordCount pipeline on the Python SDK.
+* **[Go Quickstart](/get-started/quickstart-go/)** - How to set up and run a WordCount pipeline on the Go SDK.
* **[Java Development Environment](https://medium.com/google-cloud/setting-up-a-java-development-environment-for-apache-beam-on-google-cloud-platform-ec0c6c9fbb39)** - Setting up a Java development environment for Apache Beam using IntelliJ and Maven.
* **[Python Development Environment](https://medium.com/google-cloud/python-development-environments-for-apache-beam-on-google-cloud-platform-b6f276b344df)** - Setting up a Python development environment for Apache Beam using PyCharm.
### Learning the Basics
-* **[WordCount](https://beam.apache.org/get-started/wordcount-example/)** - Walks you through the code of a simple WordCount pipeline. This is a very basic pipeline intended to show the most basic concepts of data processing. WordCount is the "Hello World" for data processing.
-* **[Mobile Gaming](https://beam.apache.org/get-started/mobile-gaming-example/)** - Introduces how to consider time while processing data, user defined transforms, windowing, filtering data, streaming pipelines, triggers, and session analysis. This is a great place to start once you get the hang of WordCount.
+* **[WordCount](/get-started/wordcount-example/)** - Walks you through the code of a simple WordCount pipeline. This is a very basic pipeline intended to show the most fundamental concepts of data processing. WordCount is the "Hello World" of data processing.
+* **[Mobile Gaming](/get-started/mobile-gaming-example/)** - Introduces how to reason about time while processing data, user-defined transforms, windowing, filtering data, streaming pipelines, triggers, and session analysis. This is a great place to continue once you get the hang of WordCount.
### Fundamentals
-* **[Programming Guide](https://beam.apache.org/documentation/programming-guide/)** - The Programming Guide contains more in-depth information on most topics in the Apache Beam SDK. These include descriptions on how everything works as well as code snippets to see how to use every part. This can be used as a reference guidebook.
+* **[Programming Guide](/documentation/programming-guide/)** - The Programming Guide contains more in-depth information on most topics in the Apache Beam SDK, including descriptions of how everything works as well as code snippets showing how to use each part. It can be used as a reference guidebook.
* **[The world beyond batch: Streaming 101](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101)** - Covers some basic background information, terminology, time domains, batch processing, and streaming.
* **[The world beyond batch: Streaming 102](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-102)** - Tour of the unified batch and streaming programming model in Beam, along with an example to explain many of the concepts.
-* **[Apache Beam Execution Model](https://beam.apache.org/documentation/runtime/model)** - Explanation on how runners execute an Apache Beam pipeline. This includes why serialization is important, and how a runner might distribute the work in parallel to multiple machines.
+* **[Apache Beam Execution Model](/documentation/runtime/model)** - Explanation on how runners execute an Apache Beam pipeline. This includes why serialization is important, and how a runner might distribute the work in parallel to multiple machines.
### Common Patterns
@@ -76,8 +76,8 @@ If you have additional material that you would like to see here, please let us k
### Advanced Concepts
* **[Running on AppEngine](https://amygdala.github.io/dataflow/app_engine/2017/10/24/gae_dataflow.html)** - Use a Dataflow template to launch a pipeline from Google AppEngine, and how to run the pipeline periodically via a cron job.
-* **[Stateful Processing](https://beam.apache.org/blog/2017/02/13/stateful-processing.html)** - Learn how to access a persistent mutable state while processing input elements, this allows for _side effects_ in a `DoFn`. This can be used for arbitrary-but-consistent index assignment, if you want to assign a unique incrementing index to each incoming element where order doesn't matter.
-* **[Timely and Stateful Processing](https://beam.apache.org/blog/2017/08/28/timely-processing.html)** - An example on how to do batched RPC calls. The call requests are stored in a mutable state as they are received. Once there are either enough requests or a certain time has passed, the batch of requests is triggered to be sent.
+* **[Stateful Processing](/blog/2017/02/13/stateful-processing.html)** - Learn how to access persistent mutable state while processing input elements; this allows for _side effects_ in a `DoFn`. This can be used for arbitrary-but-consistent index assignment, if you want to assign a unique incrementing index to each incoming element where order doesn't matter.
+* **[Timely and Stateful Processing](/blog/2017/08/28/timely-processing.html)** - An example of how to do batched RPC calls. The call requests are stored in mutable state as they are received. Once there are either enough requests or a certain amount of time has passed, the batch of requests is triggered to be sent.
* **[Running External Libraries](https://cloud.google.com/blog/products/gcp/running-external-libraries-with-cloud-dataflow-for-grid-computing-workloads)** - Call an external library written in a language that does not have a native SDK in Apache Beam such as C++.
## Books {#books}
@@ -148,20 +148,20 @@ complexity. Beam Katas are available for both Java and Python SDKs.
* [Beam Playground](https://play.beam.apache.org) is an interactive environment to try out Beam transforms and examples without having to install Apache Beam in your environment.
You can try the available Apache Beam examples at [Beam Playground](https://play.beam.apache.org).
-* Learn more about how to add an Apache Beam example/test/kata into Beam Playground catalog [here](https://beam.apache.org/get-started/try-beam-playground/#how-to-add-new-examples).
+* Learn more about how to add an Apache Beam example/test/kata to the Beam Playground catalog [here](/get-started/try-beam-playground/#how-to-add-new-examples).
## API Reference {#api-reference}
-* **[Java API Reference](https://beam.apache.org/documentation/sdks/javadoc/)** - Official API Reference for the Java SDK.
-* **[Python API Reference](https://beam.apache.org/documentation/sdks/pydoc/)** - Official API Reference for the Python SDK.
+* **[Java API Reference](/documentation/sdks/javadoc/)** - Official API Reference for the Java SDK.
+* **[Python API Reference](/documentation/sdks/pydoc/)** - Official API Reference for the Python SDK.
* **[Go API Reference](https://pkg.go.dev/github.com/apache/beam/sdks/v2/go/pkg/beam)** - Official API Reference for the Go SDK.
## Feedback and Suggestions {#feedback-and-suggestions}
-We are open for feedback and suggestions, you can find different ways to reach out to the community in the [Contact Us](https://beam.apache.org/community/contact-us/) page.
+We are open to feedback and suggestions; you can find different ways to reach out to the community on the [Contact Us](/community/contact-us/) page.
If you have a bug report or want to suggest a new feature, you can let us know by [submitting a new issue](https://github.com/apache/beam/issues/new/choose).
## How to Contribute {#how-to-contribute}
-We welcome contributions from everyone! To learn more on how to contribute, check our [Contribution Guide](https://beam.apache.org/contribute/).
+We welcome contributions from everyone! To learn more about how to contribute, check our [Contribution Guide](/contribute/).
diff --git a/website/www/site/content/en/get-started/tour-of-beam.md b/website/www/site/content/en/get-started/tour-of-beam.md
index b2f1484e0d5b..80dcb7eb21de 100644
--- a/website/www/site/content/en/get-started/tour-of-beam.md
+++ b/website/www/site/content/en/get-started/tour-of-beam.md
@@ -54,7 +54,7 @@ We introduce the `GlobalWindow`, `FixedWindows`, `SlidingWindows`, and `Sessions
Beam DataFrames provide a pandas-like [DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html)
API to declare Beam pipelines.
To learn more about Beam DataFrames, take a look at the
-[Beam DataFrames overview](https://beam.apache.org/documentation/dsls/dataframes/overview) page.
+[Beam DataFrames overview](/documentation/dsls/dataframes/overview) page.
{{< button-colab url="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/tour-of-beam/dataframes.ipynb" >}}
diff --git a/website/www/site/content/en/get-started/wordcount-example.md b/website/www/site/content/en/get-started/wordcount-example.md
index 8d2d5d5521aa..332473d367dd 100644
--- a/website/www/site/content/en/get-started/wordcount-example.md
+++ b/website/www/site/content/en/get-started/wordcount-example.md
@@ -400,7 +400,7 @@ python -m apache_beam.examples.wordcount --input /path/to/inputfile \
{{< runner flinkCluster >}}
# Running Beam Python on a distributed Flink cluster requires additional configuration.
-# See https://beam.apache.org/documentation/runners/flink/ for more information.
+# See https://beam.apache.org/documentation/runners/flink/ for more information.
{{< /runner >}}
{{< runner spark >}}
diff --git a/website/www/site/content/en/roadmap/connectors-multi-sdk.md b/website/www/site/content/en/roadmap/connectors-multi-sdk.md
index 69a00a02b015..3a404b22becf 100644
--- a/website/www/site/content/en/roadmap/connectors-multi-sdk.md
+++ b/website/www/site/content/en/roadmap/connectors-multi-sdk.md
@@ -21,7 +21,7 @@ Connector-related efforts that will benefit multiple SDKs.
Splittable DoFn is the next-generation sources framework for Beam that will
replace the current frameworks for developing bounded and unbounded sources.
Splittable DoFn is being developed alongside current Beam portability
-efforts. See [Beam portability framework roadmap](https://beam.apache.org/roadmap/portability/) for more details.
+efforts. See [Beam portability framework roadmap](/roadmap/portability/) for more details.
# Cross-language transforms
@@ -35,7 +35,7 @@ As an added benefit of Beam portability effort, we are able to utilize Beam tran
+ Go SDK, will be able to utilize connectors currently available for Java and Python SDKs.
* Ease of developing and maintaining Beam transforms - in general, with cross-language transforms, Beam transform authors will be able to implement new Beam transforms using a
language of their choice and utilize these transforms from other languages, reducing maintenance and support overheads.
-* [Beam SQL](https://beam.apache.org/documentation/dsls/sql/overview/), that is currently only available to Java SDK, will become available to Python and Go SDKs.
+* [Beam SQL](/documentation/dsls/sql/overview/), which is currently only available to the Java SDK, will become available to the Python and Go SDKs.
* [Beam TFX transforms](https://www.tensorflow.org/tfx/transform/get_started), which are currently only available to Beam Python SDK pipelines, will become available to the Java and Go SDKs.
## Completed and Ongoing Efforts