[Python] Added Tensorflow Model Handler #25368

Merged
merged 53 commits on Feb 15, 2023
Commits
758de33
go lints
riteshghorse Dec 7, 2022
7368f4c
Merge branch 'master' of https://github.com/apache/beam
riteshghorse Dec 12, 2022
a2c36ae
Merge branch 'master' of github.com:riteshghorse/beam
riteshghorse Dec 14, 2022
f397a04
Merge branch 'apache:master' into master
riteshghorse Dec 14, 2022
531867a
Merge branch 'master' of github.com:riteshghorse/beam
riteshghorse Feb 7, 2023
cb284ec
added tf model handler and tests
riteshghorse Feb 7, 2023
8193283
lint and formatting changes
riteshghorse Feb 7, 2023
d1eb67c
correct lints
riteshghorse Feb 7, 2023
5fb5cbb
more lints and formats
riteshghorse Feb 7, 2023
e1ec168
auto formatted with yapf
riteshghorse Feb 7, 2023
e7b5cf0
rm spare lines
riteshghorse Feb 7, 2023
70edea4
add readme file
riteshghorse Feb 7, 2023
3ed3160
test requirement file
riteshghorse Feb 8, 2023
800cc3a
add test to gradle
riteshghorse Feb 8, 2023
1bc4adf
add test tasks for tf
riteshghorse Feb 8, 2023
70b5a2b
unit test
riteshghorse Feb 8, 2023
f62c366
lints
riteshghorse Feb 8, 2023
eef7a25
updated inferenceFn type
riteshghorse Feb 8, 2023
1169246
add tox info for py38
riteshghorse Feb 8, 2023
520e192
pylint
riteshghorse Feb 8, 2023
4c43cc1
lints
riteshghorse Feb 8, 2023
8017a4d
using tfhub
riteshghorse Feb 10, 2023
1d98cdb
added tf model handler and tests
riteshghorse Feb 7, 2023
5b56a2f
lint and formatting changes
riteshghorse Feb 7, 2023
3ada016
correct lints
riteshghorse Feb 7, 2023
e8cee7b
more lints and formats
riteshghorse Feb 7, 2023
7a2c1a1
auto formatted with yapf
riteshghorse Feb 7, 2023
ee905ee
rm spare lines
riteshghorse Feb 7, 2023
b54436f
merge master
riteshghorse Feb 10, 2023
dd7c49d
test requirement file
riteshghorse Feb 8, 2023
86d7329
add test to gradle
riteshghorse Feb 8, 2023
8ca2a1d
add test tasks for tf
riteshghorse Feb 8, 2023
613068f
unit test
riteshghorse Feb 8, 2023
0fd2b30
lints
riteshghorse Feb 8, 2023
1e80e70
updated inferenceFn type
riteshghorse Feb 8, 2023
38210fc
add tox info for py38
riteshghorse Feb 8, 2023
521bd78
pylint
riteshghorse Feb 8, 2023
029cc95
lints
riteshghorse Feb 8, 2023
efec494
using tfhub
riteshghorse Feb 10, 2023
40568d4
tfhub example
riteshghorse Feb 13, 2023
4fe8a1d
update doc
riteshghorse Feb 13, 2023
ccf0422
Merge branch 'master' of https://github.com/apache/beam into tf-model…
riteshghorse Feb 13, 2023
df61e8c
Merge branch 'master' into tf-model-handler
riteshghorse Feb 13, 2023
0958576
merge master
riteshghorse Feb 13, 2023
368d87d
sort imports
riteshghorse Feb 13, 2023
a557fad
resolve pydoc,precommit
riteshghorse Feb 14, 2023
1b21874
resolve conflict
riteshghorse Feb 14, 2023
46fbde9
fix import
riteshghorse Feb 14, 2023
34e4505
fix lint
riteshghorse Feb 14, 2023
0fbb3d9
address comments
riteshghorse Feb 14, 2023
d298e42
fix optional inference args
riteshghorse Feb 15, 2023
2556534
change to ml bucket
riteshghorse Feb 15, 2023
627fdd9
fix doc
riteshghorse Feb 15, 2023
76 changes: 75 additions & 1 deletion sdks/python/apache_beam/examples/inference/README.md
@@ -401,6 +401,34 @@ True Price 31000000.0, Predicted Price 25654277.256461
...
```

---
## Sentiment classification using ONNX version of RoBERTa
[`onnx_sentiment_classification.py`](./onnx_sentiment_classification.py) contains an implementation for a RunInference pipeline that performs sentiment classification on movie reviews.

The pipeline reads lines from text files containing movie reviews, performs basic preprocessing, passes the review text to the ONNX version of RoBERTa via RunInference, and then writes the predictions (0 for negative, 1 for positive) to a text file.

### Dataset and model for sentiment classification
We assume you already have a trained model in ONNX format. In our example, we use RoBERTa from https://github.com/SeldonIO/seldon-models/blob/master/pytorch/moviesentiment_roberta/pytorch-roberta-onnx.ipynb.

For input data, you can generate your own movie reviews (separated by line breaks) or use IMDB reviews online (https://ai.stanford.edu/~amaas/data/sentiment/).

The output will be a text file, with a binary label (0 for negative, 1 for positive) appended to the review, separated by a semicolon.

### Running the pipeline
To run locally, you can use the following command:
```sh
python -m apache_beam.examples.inference.onnx_sentiment_classification \
--input_file [input file path] \
--output [output file path] \
--model_uri [path to onnx model]
```

This writes the output to the output file path with contents like:
```
A comedy-drama of nearly epic proportions rooted in a sincere performance by the title character undergoing midlife crisis .;1
```
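The label can be split back off the review text in a few lines; a minimal sketch (the `parse_sentiment_line` helper is hypothetical, not part of the example pipeline), assuming the trailing `;<label>` format shown above:

```python
# Hypothetical helper to parse a line of the sentiment output:
# "<review text>;<label>". rsplit on the last semicolon so that
# any semicolons inside the review text itself are preserved.
def parse_sentiment_line(line):
    review, label = line.rsplit(";", 1)
    return review, int(label)

review, label = parse_sentiment_line("A sincere performance .;1")
print(label)  # 1, i.e. a positive review
```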

---
## MNIST digit classification with Tensorflow
[`tensorflow_mnist_classification.py`](./tensorflow_mnist_classification.py) contains an implementation for a RunInference pipeline that performs image classification on handwritten digits from the [MNIST](https://en.wikipedia.org/wiki/MNIST_database) database.

@@ -410,7 +438,7 @@ The pipeline reads rows of pixels corresponding to a digit, performs basic prepr

To use this transform, you need a dataset and model for language modeling.

1. Create a file named `INPUT.csv` that contains labels and pixels to feed into the model. Each row should have comma-separated elements. The first element is the label. All other elements are pixel values. The csv should not have column headers. The content of the file should be similar to the following example:
1. Create a file named [`INPUT.csv`](gs://apache-beam-ml/testing/inputs/it_mnist_data.csv) that contains labels and pixels to feed into the model. Each row should have comma-separated elements. The first element is the label. All other elements are pixel values. The csv should not have column headers. The content of the file should be similar to the following example:
```
1,0,0,0...
0,0,0,0...
@@ -449,3 +477,49 @@ This writes the output to the `predictions.txt` with contents like:
...
```
Each line has data separated by a comma ",". The first item is the actual label of the digit. The second item is the predicted label of the digit.
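Given that line format, the pipeline's accuracy over the output file can be computed in a few lines; a minimal sketch (the `accuracy` helper is illustrative, not part of the example pipeline):

```python
# Each output line is "<actual_label>,<predicted_label>".
# Illustrative helper: fraction of lines where the two labels agree.
def accuracy(lines):
    pairs = [tuple(int(v) for v in line.split(",")) for line in lines]
    correct = sum(1 for actual, predicted in pairs if actual == predicted)
    return correct / len(pairs)

print(accuracy(["1,1", "0,0", "7,1"]))  # 2 of 3 predictions match
```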

---
## Image segmentation with Tensorflow and TensorflowHub

[`tensorflow_imagenet_segmentation.py`](./tensorflow_imagenet_segmentation.py) contains an implementation for a RunInference pipeline that performs image segmentation using the [`mobilenet_v2`](https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4) architecture from the Tensorflow Hub.

The pipeline reads images, performs basic preprocessing, passes the images to the Tensorflow implementation of RunInference, and then writes predictions to a text file.

### Dataset and model for image segmentation

To use this transform, you need a dataset and model for image segmentation.

1. Create a directory named `IMAGE_DIR`. Create or download images and put them in this directory. We
will use the [example image](https://storage.googleapis.com/download.tensorflow.org/example_images/) from TensorFlow.
2. Create a file named `IMAGE_FILE_NAMES.txt` that contains the names of the images in `IMAGE_DIR` that you want to use for image segmentation. For example:
```
grace_hopper.jpg
```
3. Note the Tensorflow model path `MODEL_PATH`. We will use the [mobilenet_v2](https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4) model from the Tensorflow Hub.
4. Note the path to the `OUTPUT` file. This file is used by the pipeline to write the predictions.

### Running `tensorflow_imagenet_segmentation.py`

To run the image segmentation pipeline locally, use the following command:
```sh
python -m apache_beam.examples.inference.tensorflow_imagenet_segmentation \
--input IMAGE_FILE_NAMES \
--image_dir IMAGES_DIR \
--output OUTPUT \
--model_path MODEL_PATH
```

For example, if you've followed the naming conventions recommended above:
```sh
python -m apache_beam.examples.inference.tensorflow_imagenet_segmentation \
--input IMAGE_FILE_NAMES.txt \
--image_dir "https://storage.googleapis.com/download.tensorflow.org/example_images/" \
--output predictions.txt \
--model_path "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4"
```
This writes the output to the `predictions.txt` with contents like:
```
background
...
```
Each line contains a predicted label.
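To summarize such a predictions file, the labels can be tallied with the standard library; a minimal sketch (the `label_counts` helper is illustrative, not part of the example pipeline):

```python
from collections import Counter

# Illustrative helper: count how often each predicted label appears,
# skipping blank lines.
def label_counts(lines):
    return Counter(line.strip() for line in lines if line.strip())

counts = label_counts(["background", "military uniform", "background"])
print(counts.most_common(1))  # [('background', 2)]
```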
@@ -0,0 +1,128 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import argparse
import logging
from typing import Iterable
from typing import Iterator

import numpy
from PIL import Image

import apache_beam as beam
import tensorflow as tf
from apache_beam.ml.inference.base import PredictionResult
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.runners.runner import PipelineResult


class PostProcessor(beam.DoFn):
"""Process the PredictionResult to get the predicted label.
Returns predicted label.
"""
def process(self, element: PredictionResult) -> Iterable[str]:
predicted_class = numpy.argmax(element.inference[0], axis=-1)
labels_path = tf.keras.utils.get_file(
'ImageNetLabels.txt',
'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt' # pylint: disable=line-too-long
)
imagenet_labels = numpy.array(open(labels_path).read().splitlines())
predicted_class_name = imagenet_labels[predicted_class]
yield predicted_class_name.title()


def parse_known_args(argv):
"""Parses args for the workflow."""
parser = argparse.ArgumentParser()
parser.add_argument(
'--input',
dest='input',
required=True,
help='Path to the text file containing image names.')
parser.add_argument(
'--output',
dest='output',
required=True,
help='Path to save output predictions.')
parser.add_argument(
'--model_path',
dest='model_path',
required=True,
help='Path to load the Tensorflow model for Inference.')
parser.add_argument(
'--image_dir', help='Path to the directory where images are stored.')
return parser.parse_known_args(argv)


def filter_empty_lines(text: str) -> Iterator[str]:
if len(text.strip()) > 0:
yield text


def read_image(image_name, image_dir):
img = tf.keras.utils.get_file(image_name, image_dir + image_name)
img = Image.open(img).resize((224, 224))
img = numpy.array(img) / 255.0
img_tensor = tf.cast(tf.convert_to_tensor(img[...]), dtype=tf.float32)
return img_tensor


def run(
argv=None, save_main_session=True, test_pipeline=None) -> PipelineResult:
"""
Args:
argv: Command line arguments defined for this example.
save_main_session: Used for internal testing.
test_pipeline: Used for internal testing.
"""
known_args, pipeline_args = parse_known_args(argv)
pipeline_options = PipelineOptions(pipeline_args)
pipeline_options.view_as(SetupOptions).save_main_session = save_main_session

# In this example we will use the TensorflowHub model URL.
model_loader = TFModelHandlerTensor(model_uri=known_args.model_path)

pipeline = test_pipeline
if not test_pipeline:
pipeline = beam.Pipeline(options=pipeline_options)

image = (
pipeline
| 'ReadImageNames' >> beam.io.ReadFromText(known_args.input)
| 'FilterEmptyLines' >> beam.ParDo(filter_empty_lines)
| "PreProcessInputs" >>
beam.Map(lambda image_name: read_image(image_name, known_args.image_dir)))

predictions = (
image
| "RunInference" >> RunInference(model_loader)
| "PostProcessOutputs" >> beam.ParDo(PostProcessor()))

_ = predictions | "WriteOutput" >> beam.io.WriteToText(
known_args.output, shard_name_template='', append_trailing_newlines=False)

result = pipeline.run()
result.wait_until_finish()
return result


if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
run()
@@ -16,14 +16,20 @@
#

import argparse
from typing import Iterable, Tuple
import logging
from typing import Iterable
from typing import Tuple

import numpy

import apache_beam as beam
from apache_beam.ml.inference.base import KeyedModelHandler, PredictionResult, RunInference
from apache_beam.ml.inference.base import KeyedModelHandler
from apache_beam.ml.inference.base import PredictionResult
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.tensorflow_inference import ModelType
from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerNumpy
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.runners.runner import PipelineResult


@@ -82,7 +88,8 @@ def run(
# In this example we pass keyed inputs to RunInference transform.
# Therefore, we use KeyedModelHandler wrapper over TFModelHandlerNumpy.
model_loader = KeyedModelHandler(
TFModelHandlerNumpy(model_uri=known_args.model_path))
TFModelHandlerNumpy(
model_uri=known_args.model_path, model_type=ModelType.SAVED_MODEL))

pipeline = test_pipeline
if not test_pipeline:
30 changes: 19 additions & 11 deletions sdks/python/apache_beam/ml/inference/tensorflow_inference.py
@@ -18,17 +18,19 @@
# pytype: skip-file

import enum
from typing import Any, Union
import sys
from typing import Any
from typing import Callable
from typing import Dict
from typing import Iterable
from typing import Optional
from typing import Sequence
from typing import Union

import numpy
import sys
import tensorflow as tf

import tensorflow as tf
import tensorflow_hub as hub
from apache_beam.ml.inference import utils
from apache_beam.ml.inference.base import ModelHandler
from apache_beam.ml.inference.base import PredictionResult
@@ -54,7 +56,7 @@ class ModelType(enum.Enum):

def _load_model(model_uri, model_type):
if model_type == ModelType.SAVED_MODEL:
return tf.keras.models.load_model(model_uri)
return tf.keras.models.load_model(hub.resolve(model_uri))
else:
raise AssertionError('Unsupported model type for loading.')

@@ -65,8 +67,11 @@ def default_numpy_inference_fn(
inference_args: Optional[Dict[str, Any]] = None,
model_id: Optional[str] = None) -> Iterable[PredictionResult]:
vectorized_batch = numpy.stack(batch, axis=0)
return utils._convert_to_result(
batch, model.predict(vectorized_batch, **inference_args), model_id)
if inference_args:
predictions = model(vectorized_batch, **inference_args)
else:
predictions = model(vectorized_batch)
return utils._convert_to_result(batch, predictions, model_id)


def default_tensor_inference_fn(
@@ -75,8 +80,11 @@ def default_tensor_inference_fn(
inference_args: Optional[Dict[str, Any]] = None,
model_id: Optional[str] = None) -> Iterable[PredictionResult]:
vectorized_batch = tf.stack(batch, axis=0)
return utils._convert_to_result(
batch, model.predict(vectorized_batch, **inference_args), model_id)
if inference_args:
predictions = model(vectorized_batch, **inference_args)
else:
predictions = model(vectorized_batch)
return utils._convert_to_result(batch, predictions, model_id)


class TFModelHandlerNumpy(ModelHandler[numpy.ndarray,
@@ -100,11 +108,11 @@ def __init__(
model_uri (str): path to the trained model.
model_type (ModelType): type of model to be loaded.
Defaults to SAVED_MODEL.
inference_fn (TensorInferenceFn, optional): inference function to use
inference_fn (TensorInferenceFn, Optional): inference function to use
during RunInference. Defaults to default_numpy_inference_fn.

**Supported Versions:** RunInference APIs in Apache Beam have been tested
with Tensorflow 2.11.
with Tensorflow 2.9, 2.10, 2.11.
"""
self._model_uri = model_uri
self._model_type = model_type
@@ -183,7 +191,7 @@ def __init__(
model_uri (str): path to the trained model.
model_type (ModelType): type of model to be loaded.
Defaults to SAVED_MODEL.
inference_fn (TensorInferenceFn, optional): inference function to use
inference_fn (TensorInferenceFn, Optional): inference function to use
during RunInference. Defaults to default_tensor_inference_fn.

**Supported Versions:** RunInference APIs in Apache Beam have been tested