add output_saved_model flag #717

Open · kwikwag wants to merge 3 commits into main
Conversation

kwikwag
Contributor

@kwikwag kwikwag commented Oct 25, 2024

1. Content and background

Added an output_saved_model flag that lets one optionally skip the saved_model output and save only the TFLite models.
The motivation is that for certain models with grouped convolution (namely YOLOv6-Lite), the saved_model output fails in a way that prevents conversion from proceeding to the TFLite models. The flag is a simple way to circumvent this.

2. Summary of corrections

A simple if/else.
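To illustrate the shape of the change, here is a minimal sketch of how such a gate could look (hypothetical function and parameter names, not the actual onnx2tf diff):

import tensorflow as tf

def export_models(model: tf.keras.Model, output_folder_path: str, output_saved_model: bool = False) -> None:
    # Sketch: always emit TFLite; emit saved_model only on request.
    if output_saved_model:
        # saved_model export can fail for some grouped-convolution models,
        # which is why this PR makes it optional.
        tf.saved_model.save(model, output_folder_path)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    with open(f'{output_folder_path}/model_float32.tflite', 'wb') as f:
        f.write(converter.convert())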

3. Before/After (If there is an operating log that can be used as a reference)

TBD

Owner

@PINTO0309 PINTO0309 left a comment


Thanks.
Please also fix the main() function and all relevant parts of the README.

@kwikwag
Contributor Author

kwikwag commented Oct 25, 2024

Done. It was a good call, too, because I noticed that all flags default to False, so I adjusted mine to match.

BTW, interestingly, when running via the CLI the SavedModel is saved successfully. I suspect this is because in the CLI --output_signaturedefs has a default value of True (which kind of defeats the purpose of action='store_true'), whereas the default for the convert() function is output_signaturedefs=False...
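For context, a minimal repro of that argparse behaviour (a hypothetical parser, not the actual onnx2tf CLI code):

import argparse

parser = argparse.ArgumentParser()
# With default=True the option can never be switched off from the CLI:
# omitting -osd yields True, and passing -osd also yields True.
parser.add_argument('-osd', '--output_signaturedefs', action='store_true', default=True)
print(parser.parse_args([]).output_signaturedefs)        # True
print(parser.parse_args(['-osd']).output_signaturedefs)  # True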

Also, I'm adding a diff of the export log with and without the flag:

> WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
> W0000 00:00:1729826158.909244   10695 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
> W0000 00:00:1729826158.909296   10695 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
> WARNING: If you want tflite input OP name and output OP name to match ONNX input and output names, convert them after installing "flatc".  Also, do not use symbols such as slashes in input/output OP names. To install flatc, run the following command:
> wget https://github.com/PINTO0309/onnx2tf/releases/download/1.16.31/flatc.tar.gz && tar -zxvf flatc.tar.gz && sudo chmod +x flatc && sudo mv flatc /usr/bin/
> Float32 tflite output complete!
> W0000 00:00:1729826161.558175   10695 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
> W0000 00:00:1729826161.558215   10695 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
> WARNING: If you want tflite input OP name and output OP name to match ONNX input and output names, convert them after installing "flatc".  Also, do not use symbols such as slashes in input/output OP names. To install flatc, run the following command:
> wget https://github.com/PINTO0309/onnx2tf/releases/download/1.16.31/flatc.tar.gz && tar -zxvf flatc.tar.gz && sudo chmod +x flatc && sudo mv flatc /usr/bin/
> Float16 tflite output complete!
---
< saved_model output started ==========================================================
< ERROR: Generation of saved_model failed because the OP name does not match the following pattern. ^[A-Za-z0-9.][A-Za-z0-9_.\\/>-]*$       
< ERROR: /backbone/lite_effiblock_4/lite_effiblock_4.0/conv_dw_1/block/conv/Conv/kernel
< ERROR: Please convert again with the `-osd` or `--output_signaturedefs` option.

@kwikwag kwikwag requested a review from PINTO0309 October 25, 2024 10:45
@PINTO0309
Owner

PINTO0309 commented Oct 26, 2024

There doesn't seem to be a problem with the fix itself.

Since grouped convolution can be exported to saved_model from TensorFlow v2.17.0 onward, I had not added such a flag.

First off, we need to be clear: what version of TensorFlow are you using? The TensorFlow API has undergone significant changes since v2.15.0 to support Keras V3, so I don't want to support older versions.

@kwikwag
Contributor Author

kwikwag commented Oct 26, 2024

I was using v2.16.2. When I tried exporting with v2.17 I had problems using the result in the TFLite Flutter plugin, which is based on v2.11. The main idea behind this flag is not to fix TF version support, but rather to work around it in cases where TFLite output is possible. Since the saved_model and TFLite outputs are constructed independently, it seems reasonable to have a flag to control this behaviour.

Owner

@PINTO0309 PINTO0309 left a comment


When I checked the operation, I found one problem, so I would like it to be fixed.

except ValueError as e:
    msg_list = [s for s in e.args if isinstance(s, str)]
    if len(msg_list) > 0:
        if not not_output_saved_model:
Owner

@PINTO0309 PINTO0309 Oct 27, 2024


Due to a bug in the Keras and TFLite Converter APIs, a saved_model still needs to be generated when INT8 quantization is used. Therefore, please fix it as follows. I understood from the beginning that saved_model is not strictly necessary, but I had no choice but to output saved_model uniformly to avoid a quantization bug in TensorFlow.

        if not not_output_saved_model or output_integer_quantized_tflite:
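Restated as a self-contained predicate (a sketch assuming only the two booleans from the snippet above, not the actual patch):

def should_output_saved_model(not_output_saved_model: bool, output_integer_quantized_tflite: bool) -> bool:
    # saved_model must still be produced when INT8 quantization is requested,
    # because the quantization path depends on it (per the review comment above).
    return not not_output_saved_model or output_integer_quantized_tflite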

Repository owner deleted a comment from github-actions bot Nov 1, 2024
@PINTO0309 PINTO0309 added the TODO label Nov 5, 2024