add output_saved_model flag #717
base: main
Conversation
Thanks.

Please also fix the `main()` function and all relevant parts of the README.
Done. It was a good call too, because I noticed that all flags are […]. BTW, interestingly, when running via the CLI the SavedModel is saved successfully. I suspect this is because in the CLI […]. I'm also adding the diff for the export log, with/without the flag:

> WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
> W0000 00:00:1729826158.909244 10695 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
> W0000 00:00:1729826158.909296 10695 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
> WARNING: If you want tflite input OP name and output OP name to match ONNX input and output names, convert them after installing "flatc". Also, do not use symbols such as slashes in input/output OP names. To install flatc, run the following command:
> wget https://github.com/PINTO0309/onnx2tf/releases/download/1.16.31/flatc.tar.gz && tar -zxvf flatc.tar.gz && sudo chmod +x flatc && sudo mv flatc /usr/bin/
> Float32 tflite output complete!
> W0000 00:00:1729826161.558175 10695 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
> W0000 00:00:1729826161.558215 10695 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
> WARNING: If you want tflite input OP name and output OP name to match ONNX input and output names, convert them after installing "flatc". Also, do not use symbols such as slashes in input/output OP names. To install flatc, run the following command:
> wget https://github.com/PINTO0309/onnx2tf/releases/download/1.16.31/flatc.tar.gz && tar -zxvf flatc.tar.gz && sudo chmod +x flatc && sudo mv flatc /usr/bin/
> Float16 tflite output complete!
---
< saved_model output started ==========================================================
< ERROR: Generation of saved_model failed because the OP name does not match the following pattern. ^[A-Za-z0-9.][A-Za-z0-9_.\\/>-]*$
< ERROR: /backbone/lite_effiblock_4/lite_effiblock_4.0/conv_dw_1/block/conv/Conv/kernel
< ERROR: Please convert again with the `-osd` or `--output_signaturedefs` option.
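For reference, here is how a conversion could be invoked through the Python API once the flag lands. This is a minimal sketch: `onnx2tf.convert` is the existing entry point, but the keyword name `not_output_saved_model` and the input file name are assumptions about how the PR exposes the option, not confirmed API.

```python
import onnx2tf

# Hypothetical keyword for the new flag; the final name depends on the PR.
onnx2tf.convert(
    input_onnx_file_path='yolov6_lite.onnx',  # assumed input model path
    output_folder_path='saved_model',
    not_output_saved_model=True,  # skip saved_model, emit only the *.tflite files
)
```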
There doesn't seem to be a problem with the fix itself. First off, we need to be clear: what version of TensorFlow are you using? The TensorFlow API has undergone significant changes since […].
I was using v2.16.2. When I tried exporting with v2.17, I had problems using it in the TFLite Flutter plugin, which is based on v2.11. The main idea behind this flag is not to fix TF version support, but rather to work around it in cases where TFLite output is possible. Since the two outputs are constructed independently, it seems reasonable to have a flag to control this behavior.
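To illustrate why the two outputs are independent, here is a minimal sketch, assuming `model` is the Keras model reconstructed from the ONNX graph; this is not onnx2tf's actual code. The TFLite flatbuffers are produced straight from the in-memory model, while the saved_model is a separate export step that can fail on its own, e.g. with the grouped-convolution OP-name error above.

```python
import tensorflow as tf

def export_outputs(model: tf.keras.Model, output_dir: str) -> None:
    # Float32 TFLite: converted directly from the in-memory Keras model.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    with open(f'{output_dir}/model_float32.tflite', 'wb') as f:
        f.write(converter.convert())

    # Float16 TFLite: same in-memory source, different converter settings.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    with open(f'{output_dir}/model_float16.tflite', 'wb') as f:
        f.write(converter.convert())

    # saved_model: an independent export step that can raise ValueError
    # on OP names that do not match the allowed pattern.
    tf.saved_model.save(model, output_dir)
```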
When I checked the operation, I found one problem, so I would like it to be fixed.
```python
except ValueError as e:
    msg_list = [s for s in e.args if isinstance(s, str)]
    if len(msg_list) > 0:
        if not not_output_saved_model:
```
Due to a bug in the Keras and TFLite_Converter API, you need to generate a `saved_model` only when you use INT8 quantization. Therefore, please fix it as follows. I understood from the beginning that `saved_model` is not necessary, but I had no choice but to output `saved_model` uniformly to avoid a quantization bug in TensorFlow.

```python
if not not_output_saved_model or output_integer_quantized_tflite:
```
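A minimal sketch of the corrected guard in context, using the variable names from the snippet above and assuming the export call is `tf.saved_model.save`; the real code sits inline in onnx2tf's conversion flow, not in a standalone function.

```python
import tensorflow as tf

def maybe_export_saved_model(
    model: tf.keras.Model,
    output_folder_path: str,
    not_output_saved_model: bool,
    output_integer_quantized_tflite: bool,
) -> None:
    # INT8 quantization still needs the saved_model on disk because of the
    # Keras/TFLite_Converter bug described above, so the skip flag must be
    # overridden whenever INT8 output is requested.
    if not not_output_saved_model or output_integer_quantized_tflite:
        tf.saved_model.save(model, output_folder_path)
```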
1. Content and background

Added an `output_saved_model` flag to allow one to optionally save only the TFLite models. The motivation is that on certain models with grouped convolution (namely YOLOv6-Lite), the saved_model output fails in a way that prevents conversion from proceeding to the TFLite models. The flag is a simple way to circumvent this.
2. Summary of corrections
A simple if/else guarding the saved_model export; see the sketch below for how the option could be wired through the CLI.
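As a rough sketch of that wiring, the option could be added to the argument parser and threaded through to the guard reviewed above; the option strings and destination name here are assumptions, not necessarily what the PR uses.

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical option strings; onnx2tf's actual naming may differ.
parser.add_argument(
    '-nosm',
    '--not_output_saved_model',
    action='store_true',
    help='Do not output a saved_model; only output the TFLite models.',
)
args = parser.parse_args()
# args.not_output_saved_model is then passed down to the saved_model
# export guard shown in the review above.
```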
3. Before/After (If there is an operating log that can be used as a reference)
TBD