
[Python] Update tensorflow doc #25954

Merged 5 commits on Mar 25, 2023
2 changes: 1 addition & 1 deletion CHANGES.md
@@ -67,7 +67,7 @@
* Schema'd PTransforms can now be directly applied to Beam dataframes just like PCollections.
(Note that when doing multiple operations, it may be more efficient to explicitly chain the operations
like `df | (Transform1 | Transform2 | ...)` to avoid excessive conversions.)
* The Go SDK adds new transforms periodic.Impulse and periodic.Sequence that extend support
for slowly updating side input patterns. ([#23106](https://github.com/apache/beam/issues/23106))

## Breaking Changes
23 changes: 15 additions & 8 deletions website/www/site/content/en/documentation/ml/about-ml.md
@@ -112,14 +112,21 @@ that illustrates running Scikit-learn models with Apache Beam.

#### TensorFlow

To use TensorFlow with the RunInference API, you need to do the following:

* Use `tfx_bsl` version 1.10.0 or later.
* Create a model handler using `tfx_bsl.public.beam.run_inference.CreateModelHandler()`.
* Use the model handler with the [`apache_beam.ml.inference.base.RunInference`](/releases/pydoc/current/apache_beam.ml.inference.base.html) transform.

See [this notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb)
that illustrates running TensorFlow models with Apache Beam and tfx-bsl.
To use TensorFlow with the RunInference API, you have two options:

1. Use the built-in TensorFlow model handlers in the Apache Beam SDK: `TFModelHandlerNumpy` and `TFModelHandlerTensor`.
    * Choose the handler that matches your model's input type: `TFModelHandlerNumpy` for `numpy` input and `TFModelHandlerTensor` for `tf.Tensor` input.
    * Use TensorFlow 2.7 or later.
    * Pass the path of the model to the TensorFlow `ModelHandler` by using `model_uri=<path_to_trained_model>`.
    * Alternatively, you can pass the path to the saved weights of the trained model and a function that builds the model using `create_model_fn=<function>`, and set `model_type=ModelType.SAVED_WEIGHTS`.

   See [this notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow_with_tensorflowhub.ipynb) that illustrates running TensorFlow models with the built-in model handlers.
2. Use `tfx_bsl`.
    * Use this approach if your model input is of type `tf.Example`.
    * Use `tfx_bsl` version 1.10.0 or later.
    * Create a model handler using `tfx_bsl.public.beam.run_inference.CreateModelHandler()`.
    * Use the model handler with the [`apache_beam.ml.inference.base.RunInference`](/releases/pydoc/current/apache_beam.ml.inference.base.html) transform.

   See [this notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb)
   that illustrates running TensorFlow models with Apache Beam and tfx-bsl.

## Custom Inference

@@ -120,14 +120,21 @@ that illustrates running Scikit-learn models with Apache Beam.

### TensorFlow

To use TensorFlow with the RunInference API, you need to do the following:

* Use `tfx_bsl` version 1.10.0 or later.
* Create a model handler using `tfx_bsl.public.beam.run_inference.CreateModelHandler()`.
* Use the model handler with the [`apache_beam.ml.inference.base.RunInference`](/releases/pydoc/current/apache_beam.ml.inference.base.html) transform.

See [this notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb)
that illustrates running TensorFlow models with Apache Beam and tfx-bsl.
To use TensorFlow with the RunInference API, you have two options:

1. Use the built-in TensorFlow model handlers in the Apache Beam SDK: `TFModelHandlerNumpy` and `TFModelHandlerTensor`.
    * Choose the handler that matches your model's input type: `TFModelHandlerNumpy` for `numpy` input and `TFModelHandlerTensor` for `tf.Tensor` input.
    * Use TensorFlow 2.7 or later.
    * Pass the path of the model to the TensorFlow `ModelHandler` by using `model_uri=<path_to_trained_model>`.
    * Alternatively, you can pass the path to the saved weights of the trained model and a function that builds the model using `create_model_fn=<function>`, and set `model_type=ModelType.SAVED_WEIGHTS`.

   See [this notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow_with_tensorflowhub.ipynb) that illustrates running TensorFlow models with the built-in model handlers.
2. Use `tfx_bsl`.
    * Use this approach if your model input is of type `tf.Example`.
    * Use `tfx_bsl` version 1.10.0 or later.
    * Create a model handler using `tfx_bsl.public.beam.run_inference.CreateModelHandler()`.
    * Use the model handler with the [`apache_beam.ml.inference.base.RunInference`](/releases/pydoc/current/apache_beam.ml.inference.base.html) transform.

   See [this notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb)
   that illustrates running TensorFlow models with Apache Beam and tfx-bsl.

## Use custom models
