This repository has been archived by the owner on Nov 11, 2023. It is now read-only.

Refactoring for use as importable package #130

Closed
lillekemiker opened this issue Sep 26, 2022 · 3 comments
Labels
enhancement (New feature or request), freature_request

Comments

@lillekemiker

Issue Type

Feature Request

OS

Other

OS architecture

Other

Programming Language

Python

Framework

PyTorch

Download URL for ONNX / OpenVINO IR

N/A

Convert Script

openvino2tensorflow

Description

First of all, thank you for your fantastic work on this. It blows my mind that yours is the only solution out there for converting NCHW to NHWC during conversion to TensorFlow. Or at least, I have not been able to find any other solution.
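
(For context: the layout change itself is just an axis permutation; a minimal NumPy illustration follows, not the tool's actual conversion logic.)

```python
import numpy as np

# A batch of 8 RGB 224x224 images in NCHW (the PyTorch/OpenVINO layout).
x_nchw = np.random.rand(8, 3, 224, 224).astype(np.float32)

# Permute the axes to NHWC (the TensorFlow layout): N, C, H, W -> N, H, W, C.
x_nhwc = x_nchw.transpose(0, 2, 3, 1)

assert x_nhwc.shape == (8, 224, 224, 3)
```

The hard part this tool solves is applying that permutation to the weights of an entire graph, rather than leaving stray Transpose ops scattered through the converted model.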

Considering that your code is regular Python, a point of frustration for me is that I cannot simply import it into my own code, run the parts I need, and swap out the parts that need to work differently.

Would you be interested in refactoring your code and making it an importable package? I would happily volunteer my time to help out with this if you want. An added bonus would be that you could add unit testing and the like much more easily.
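
For illustration, one hypothetical shape such an importable API could take; none of the names below exist in the current code, they are invented to show the idea:

```python
# Hypothetical sketch only: openvino2tensorflow does not currently expose
# this API, and the class/method names here are invented.
from openvino2tensorflow import Converter  # hypothetical import

converter = Converter("model.xml")  # OpenVINO IR (placeholder path)
converter.convert()                 # run only the stages you need
converter.save("saved_model/")      # emit a TensorFlow SavedModel
```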

Relevant Log Output

ModuleNotFoundError: No module named 'openvino2tensorflow'

Source code for simple inference testing code

import openvino2tensorflow  # fails with ModuleNotFoundError: the tool is not packaged as an importable module

@PINTO0309
Owner

PINTO0309 commented Sep 26, 2022

Thanks for the suggestion.

I would like to refactor the code. For example, as you suggest, I would like to add an interface that can be called from a script, and to clean up the redundant and buggy code. However, I maintain a lot of other OSS and cannot allocate enough time to rework this tool right now.

My main focus and maintenance effort right now is this tool, which processes ONNX directly:
https://github.com/PINTO0309/simple-onnx-processing-tools

I understand that using onnx-tf significantly degrades the performance of the TensorFlow model because of the large number of useless Transpose OPs it inserts. However, since I see no real advantage other than running TensorFlow Lite on an Android phone, I am considering onnxruntime as my mainstay runtime, noting that it offers significantly greater operational flexibility by switching between various backends, e.g. (a provider-selection sketch follows this list):

  • TensorRT
  • TensorFlow Lite
  • OpenVINO (CPU/GPU)
  • MNN
  • TVM
  • CUDA
  • Barracuda
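
As a sketch of that flexibility, switching backends with onnxruntime amounts to reordering the provider list; the model path and input shape below are placeholders:

```python
import numpy as np
import onnxruntime as ort

# onnxruntime tries the listed providers in order and skips any that
# the installed wheel was not built with.
session = ort.InferenceSession(
    "model.onnx",
    providers=[
        "TensorrtExecutionProvider",
        "OpenVINOExecutionProvider",
        "CPUExecutionProvider",  # always-available fallback
    ],
)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: np.zeros((1, 3, 224, 224), np.float32)})
```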

Therefore, it is very hard for me to rework this tool by myself; it has become bloated after two years of continually enhancing its functions little by little.

Pull requests that improve the usefulness of the tool are very welcome, but reviews are likely to take a very long time.
At this point, I feel it would be much more efficient to create my own onnx-tf, distinct from the onnx-tf maintained by Microsoft.

@PINTO0309 PINTO0309 added the enhancement (New feature or request) and freature_request labels Sep 26, 2022
@PINTO0309 PINTO0309 pinned this issue Sep 26, 2022
@lillekemiker
Author

My use case is actually running TFLite on an Android (and iOS) phone.
Is there currently a different workflow you would recommend for this?

@PINTO0309
Owner
