Squashed commit of the following:

commit eaee851
Author: ddavis-2015 <[email protected]>
Date:   Fri Oct 18 17:48:48 2024 -0700

    Squashed commit of the following:

    commit 4894265
    Author: ddavis-2015 <[email protected]>
    Date:   Fri Oct 18 17:48:05 2024 -0700

        pre-merge empty commit

    commit a110e41
    Author: ddavis-2015 <[email protected]>
    Date:   Fri Oct 18 16:17:13 2024 -0700

        fix C++ bitwidth 6 & 7 decompression

    commit efedcc2
    Author: ddavis-2015 <[email protected]>
    Date:   Fri Oct 18 10:18:50 2024 -0700

        working decompression unit test

    commit 81ecf2e
    Author: ddavis-2015 <[email protected]>
    Date:   Thu Oct 17 18:17:06 2024 -0700

        decompression unit test improvements

    commit b318421
    Author: ddavis-2015 <[email protected]>
    Date:   Wed Oct 16 17:34:09 2024 -0700

        add decompression unit test

    commit 9bb2b63
    Author: ddavis-2015 <[email protected]>
    Date:   Sun Oct 13 18:34:01 2024 -0700

        cleanup

    commit 77bb05d
    Author: ddavis-2015 <[email protected]>
    Date:   Sun Oct 13 18:29:33 2024 -0700

        align compressed tensor data as per schema

    commit ad2b1c3
    Author: ddavis-2015 <[email protected]>
    Date:   Sat Oct 12 22:35:54 2024 -0700

        reduce HIFI5 decompression code size

    commit 99c6e35
    Author: ddavis-2015 <[email protected]>
    Date:   Fri Oct 11 14:02:58 2024 -0700

        revert to original Cadence bit width 4 code

    commit 2388549
    Author: ddavis-2015 <[email protected]>
    Date:   Thu Oct 10 17:50:29 2024 -0700

        refactor decompression code into reference and platform-specific implementations
        Apply some Xtensa acceleration code changes

    commit b84853c
    Author: ddavis-2015 <[email protected]>
    Date:   Tue Oct 8 16:08:55 2024 -0700

        testing

commit c107f42
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 17 14:31:03 2024 -0500

    refactor: move misplaced TF_LITE_REMOVE_VIRTUAL_DELETEs to private:

    Move several TF_LITE_REMOVE_VIRTUAL_DELETE declarations that are
    wrongly in a public section of their classes. To have the intended
    effect, as documented in t/l/m/compatibility.h, these must be in a
    private section.

commit 7b3a2bd
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 17 12:36:46 2024 -0500

    build(bazel): always build with TF_LITE_STATIC_MEMORY

    Add TF_LITE_STATIC_MEMORY to the defines set globally for TFLM builds in
    Bazel. TFLM always builds with this set in Make, and it appears to have
    been an oversight that it wasn't set during Bazel builds. Not having it
    set in Bazel caused some unit tests to pass under Bazel that failed
    under Make.

    At the same time, add -fno-exceptions. This flag is also always set in
    Make builds. Without it, setting TF_LITE_STATIC_MEMORY breaks the build.
    TF_LITE_STATIC_MEMORY triggers TF_LITE_REMOVE_VIRTUAL_DELETE in
    t/l/m/compatibility.h, which makes operator delete private in certain
    classes. When exceptions are enabled, a placement new with those classes
    is allowed to throw an exception, and operator delete is implicitly
    called during the unwind. The build breaks because operator delete can't
    be called if it's private. Disabling exceptions eliminates the unwind
    code that calls operator delete implicitly, and thus the build succeeds.

    In any case, -fno-exceptions should have been used in Bazel builds,
    matching the flags used in Make and the no-exceptions design requirement
    of the TFLM project.
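
    As a rough illustration, the flags described above can be centralized
    in a common copts helper. The following Starlark sketch only mirrors
    what this commit describes and is hypothetical, not the actual
    contents of //tensorflow/lite/micro:build_def.bzl:

        # Sketch of a common-copts helper for TFLM Bazel builds.
        def tflm_copts():
            """Compiler options applied to every TFLM C++ target."""
            return [
                # Match the Make builds, which always define this.
                "-DTF_LITE_STATIC_MEMORY",
                # Required alongside TF_LITE_STATIC_MEMORY; see the
                # operator delete discussion above.
                "-fno-exceptions",
            ]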

commit 1eb4e0d
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 17 11:05:45 2024 -0500

    feat(python): don't check .sparsity in interpreter

    Remove the check for sparse tensors in the Python interpreter wrapper.
    This fixes a broken build when TF_LITE_STATIC_MEMORY is set, which
    should always be the case in TFLM. TfLiteTensor objects don't have a
    .sparsity member when TF_LITE_STATIC_MEMORY is set.

    This prepares for an upcoming commit setting TF_LITE_STATIC_MEMORY
    during Bazel builds. This hasn't caused build failures in Make builds,
    which have always set TF_LITE_STATIC_MEMORY, because Make builds don't
    build the Python interpreter wrapper.

commit 7217095
Author: Ryan Kuester <[email protected]>
Date:   Wed Oct 16 14:03:25 2024 -0500

    fix(memory_arena_threshold): with TF_LITE_STATIC_MEMORY

    Fix the broken build due to redefinition of the threshold when
    TF_LITE_STATIC_MEMORY is set. Apparently this case isn't triggered in
    any Bazel test, only in Make.

    Simplify the threshold specification by making it depend only on
    whether compression is enabled, not also on whether TF_LITE_STATIC_MEMORY is
    in use.

commit 8e4e55e
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 10 12:38:03 2024 -0500

    build(bazel): disable codegen when building --//:with_compression

    The codegen prototype code is not compatible with the changes made to
    the core TFLM components to implement model compression. For now,
    disable codegen targets when building with compression enabled.

commit 884a234
Author: Ryan Kuester <[email protected]>
Date:   Tue Oct 15 18:31:01 2024 -0500

    build(bazel): compile in compression when --//:with_compression

    Conditionally compile in support for compressed tensors when the option
    --//:with_compression is given.
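
    A minimal sketch of how that conditional compilation can be expressed,
    using the with_compression_enabled config setting shown in the BUILD
    diff further down this page; the helper name and the
    USE_TFLM_COMPRESSION define are illustrative assumptions, not taken
    from this diff:

        # Sketch: copts fragment toggled by --//:with_compression.
        def tflm_compression_copts():
            return select({
                # Assumed guard macro for the compressed-tensor code paths.
                "//:with_compression_enabled": ["-DUSE_TFLM_COMPRESSION=1"],
                "//conditions:default": [],
            })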

commit a1d459b
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 10 12:28:39 2024 -0500

    build(bazel): add --//:with_compression build setting

    Add a --//:with_compression user-defined build setting and a
    corresponding configuration setting.

commit 4edc564
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 10 12:24:53 2024 -0500

    build(bazel): fix compression-related dependencies of micro_allocator

commit a52f97f
Author: Ryan Kuester <[email protected]>
Date:   Tue Oct 15 17:28:09 2024 -0500

    build(bazel): replace cc_* with tflm_cc_* in remaining TFLM code

    Replace cc_* targets remaining in TFLM code with tflm_cc_* targets.
    These are targets which did not formerly use the common copts. Avoid
    changing imported TFLite code, if for no other reason than to avoid
    merge conflicts during the automatic sync with upstream TFLite.

commit a6368f4
Author: Ryan Kuester <[email protected]>
Date:   Fri Oct 11 16:08:34 2024 -0500

    build(bazel): introduce tflm_cc_* macros, refactoring away micro_copts

    Remove micro_copts() by replacing every cc_* target that used it
    with a tflm_cc_* equivalent, and by setting those common copts in one
    place, inside the tflm_cc_* macros.

    This is the first of several commits introducing tflm_cc_* macros in
    place of cc_binary, cc_library, and cc_test. Motivated by the upcoming
    need to support conditional compilation, the objective is to centralize
    build configuration rather than requiring each cc_* target in the
    project to add, and its authors to remember, the same common
    attributes such as compiler options and select()ed #defines.

    Alternatives such as setting global options on the command line or in
    .bazelrc, even if simplified with a --config option, fail to preserve
    flags and hooks for configuration when TFLM is used as an external
    repository by an application project. Nor is it easy in that
    case for individual targets to override an otherwise global setting.
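
    For concreteness, a macro of the kind described here could look roughly
    like the following sketch (hypothetical; the real tflm_cc_* definitions
    live in //tensorflow/lite/micro:build_def.bzl and are not part of this
    diff):

        # Sketch: wrap native.cc_library so every TFLM target picks up the
        # common copts in one place.
        def tflm_cc_library(copts = [], **kwargs):
            native.cc_library(
                copts = tflm_copts() + copts,
                **kwargs
            )

    The same wrapper shape applies to tflm_cc_binary and tflm_cc_test, and
    it provides a single place to hang select()ed options such as a
    compression define.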

commit 1518422
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 10 23:56:49 2024 -0500

    chore: remove obsolete ci/temp_patches

    Remove ci/temp_patches, which was obsoleted in 23f608f once it
    was no longer used by the sync script. It should have been
    deleted then.

    Remove it not only to clean up dead code, but because it contains
    a reference to `micro_copts`, which is about to be refactored
    away, and we don't want to leave stray references to it in the
    tree.

commit 18ef080
Author: Ryan Kuester <[email protected]>
Date:   Tue Oct 8 17:58:12 2024 -0500

    refactor: use metadata_saved.h instead of metadata_generated.h

    Use the generated file metadata_saved.h instead of metadata_generated.h
    for the reasons explained in t/l/m/compression/BUILD:metadata_saved.
    Delete metadata_generated.h from the source tree as it is not
    maintained.

commit 5a02e30
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 10 13:46:46 2024 -0500

    test(memory_arena_threshold): adjust expected value with compression

    Fix a test failure by setting a different expected value for the
    persistent buffer allocation when compression is configured in. The
    allocation was allowed to vary by 3%; however, compression adds ~10%.
    Set the expected value to the measured value when compression is
    configured in.

commit 01bc582
Author: Ryan Kuester <[email protected]>
Date:   Thu Oct 10 13:35:10 2024 -0500

    test(memory_arena_threshold): don't expect exact allocation values

    Remove the check that allocation sizes exactly match expected values.
    This check immediately followed, and thus rendered pointless, a check
    that sizes are within a certain percentage, which seems to be the true
    intent of the test.

commit e0aae77
Merge: e328029 e86d97b
Author: Ryan Kuester <[email protected]>
Date:   Wed Oct 16 13:39:56 2024 -0500

    Merge branch 'main' into compress-testing

commit e328029
Author: Ryan Kuester <[email protected]>
Date:   Mon Oct 7 12:52:23 2024 -0500

    build(bazel): fix dependencies in work-in-progress compression code

    In the Bazel build, add dependencies needed by the code added to
    t/l/m:micro_context for decompression. Without this, the Bazel build
    was broken both with and without compression.

commit e86d97b
Author: RJ Ascani <[email protected]>
Date:   Mon Oct 7 10:36:26 2024 -0700

    Replace rascani with suleshahid on OWNERS (tensorflow#2715)

    BUG=none

commit b773428
Author: Ryan Kuester <[email protected]>
Date:   Fri Oct 4 09:59:10 2024 -0500

    feat(compression): add work-in-progress compression and viewer tools

commit f6bd486
Merge: 487c17a e3f6dc1
Author: Ryan Kuester <[email protected]>
Date:   Fri Oct 4 09:36:24 2024 -0500

    Merge branch 'main' into compress-prerelease

commit e3f6dc1
Author: David Davis <[email protected]>
Date:   Thu Oct 3 10:45:00 2024 -0700

    Compression documentation (tensorflow#2711)

    @tensorflow/micro

    Add documentation describing some compression/decompression internals and makefile build procedures.

    bug=tensorflow#2710

commit b3967a9
Author: Ryan Kuester <[email protected]>
Date:   Wed Oct 2 13:36:01 2024 -0500

    style: add .style.yapf to control yapf styling of Python code (tensorflow#2709)

    Add a .style.yapf file so yapf can be used to style Python code without
    passing the project's style via command line option. Remove the
    corresponding patch to pigweed's call to yapf, used by CI, and instead
    let it too rely on .style.yapf. Remove the developer documentation's
    instruction to use the command line option.

    BUG=description

commit d249577
Author: Ryan Kuester <[email protected]>
Date:   Tue Oct 1 16:16:45 2024 -0500

    build(codegen): suppress noise in console output (tensorflow#2708)

    Add a --quiet option to the code_generator binary so that when it's used
    within the build system, it doesn't print unexpected, distracting noise
    to the console. Generally, compiler or generator commands don't print
    output unless there's an error.

    BUG=description

ddavis-2015 committed Oct 19, 2024
1 parent 4894265 commit 3d765e6
Showing 65 changed files with 952 additions and 677 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/sync.yml
@@ -62,5 +62,5 @@ jobs:
author: TFLM-bot <[email protected]>
body: "BUG=automated sync from upstream\nNO_CHECK_TFLITE_FILES=automated sync from upstream"
labels: bot:sync-tf, ci:run
reviewers: rascani
reviewers: suleshahid

3 changes: 3 additions & 0 deletions .style.yapf
@@ -0,0 +1,3 @@
[style]
based_on_style = pep8
indent_width = 2
14 changes: 14 additions & 0 deletions BUILD
@@ -7,3 +7,17 @@ refresh_compile_commands(
name = "refresh_compile_commands",
targets = ["//..."],
)

load("@bazel_skylib//rules:common_settings.bzl", "bool_flag")

bool_flag(
name = "with_compression",
build_setting_default = False,
)

config_setting(
name = "with_compression_enabled",
flag_values = {
":with_compression": "True",
},
)
4 changes: 2 additions & 2 deletions CODEOWNERS
@@ -1,4 +1,4 @@
* @tensorflow/micro

/.github/ @advaitjain @rockyrhodes @rascani
/ci/ @advaitjain @rockyrhodes @rascani
/.github/ @advaitjain @rockyrhodes @suleshahid
/ci/ @advaitjain @rockyrhodes @suleshahid
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -190,7 +190,7 @@ Below are some tips that might be useful and improve the development experience.

```
pip install yapf
yapf log_parser.py -i --style='{based_on_style: pep8, indent_width: 2}'
yapf log_parser.py -i
```

* Add a git hook to check for code style etc. prior to creating a pull request:
34 changes: 0 additions & 34 deletions ci/temp_patches/tf_update_visibility.patch

This file was deleted.

11 changes: 7 additions & 4 deletions codegen/build_def.bzl
@@ -1,6 +1,6 @@
""" Build rule for generating ML inference code from TFLite model. """

load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl", "tflm_cc_library")

def tflm_inference_library(
name,
@@ -20,12 +20,12 @@ def tflm_inference_library(
srcs = [tflite_model],
outs = [name + ".h", name + ".cc"],
tools = ["//codegen:code_generator"],
cmd = "$(location //codegen:code_generator) " +
cmd = "$(location //codegen:code_generator) --quiet " +
"--model=$< --output_dir=$(RULEDIR) --output_name=%s" % name,
visibility = ["//visibility:private"],
)

native.cc_library(
tflm_cc_library(
name = name,
hdrs = [name + ".h"],
srcs = [name + ".cc"],
@@ -39,6 +39,9 @@ def tflm_inference_library(
"//tensorflow/lite/micro:micro_common",
"//tensorflow/lite/micro:micro_context",
],
copts = micro_copts(),
target_compatible_with = select({
"//conditions:default": [],
"//:with_compression_enabled": ["@platforms//:incompatible"],
}),
visibility = visibility,
)
22 changes: 20 additions & 2 deletions codegen/code_generator.py
@@ -15,14 +15,14 @@
""" Generates C/C++ source code capable of performing inference for a model. """

import os
import pathlib

from absl import app
from absl import flags
from collections.abc import Sequence

from tflite_micro.codegen import inference_generator
from tflite_micro.codegen import graph
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils

# Usage information:
# Default:
@@ -48,15 +48,33 @@
"'model' basename."),
required=False)

_QUIET = flags.DEFINE_bool(
name="quiet",
default=False,
help="Suppress informational output (e.g., for use in for build system)",
required=False)


def main(argv: Sequence[str]) -> None:
if _QUIET.value:
restore = os.environ.get("TF_CPP_MIN_LOG_LEVEL", "0")
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils
os.environ["TF_CPP_MIN_LOG_LEVEL"] = restore
else:
from tflite_micro.tensorflow.lite.tools import flatbuffer_utils

output_dir = _OUTPUT_DIR.value or os.path.dirname(_MODEL_PATH.value)
output_name = _OUTPUT_NAME.value or os.path.splitext(
os.path.basename(_MODEL_PATH.value))[0]

model = flatbuffer_utils.read_model(_MODEL_PATH.value)

print("Generating inference code for model: {}".format(_MODEL_PATH.value))
if not _QUIET.value:
print("Generating inference code for model: {}".format(_MODEL_PATH.value))
output_path = pathlib.Path(output_dir) / output_name
print(f"Generating {output_path}.h")
print(f"Generating {output_path}.cc")

inference_generator.generate(output_dir, output_name,
graph.OpCodeTable([model]), graph.Graph(model))
1 change: 0 additions & 1 deletion codegen/inference_generator.py
@@ -35,7 +35,6 @@ class ModelData(TypedDict):

def _render(output_file: pathlib.Path, template_file: pathlib.Path,
model_data: ModelData) -> None:
print("Generating {}".format(output_file))
t = template.Template(filename=str(template_file))
with output_file.open('w+') as file:
file.write(t.render(**model_data))
5 changes: 2 additions & 3 deletions codegen/runtime/BUILD
@@ -1,12 +1,11 @@
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl", "tflm_cc_library")

package(default_visibility = ["//visibility:public"])

cc_library(
tflm_cc_library(
name = "micro_codegen_context",
srcs = ["micro_codegen_context.cc"],
hdrs = ["micro_codegen_context.h"],
copts = micro_copts(),
deps = [
"//tensorflow/lite/c:common",
"//tensorflow/lite/kernels:op_macros",
5 changes: 2 additions & 3 deletions python/tflite_micro/BUILD
@@ -7,7 +7,7 @@ load("@rules_python//python:packaging.bzl", "py_package", "py_wheel")
load("@tflm_pip_deps//:requirements.bzl", "requirement")
load(
"//tensorflow/lite/micro:build_def.bzl",
"micro_copts",
"tflm_cc_library",
)
load(
"//tensorflow:extra_rules.bzl",
@@ -24,15 +24,14 @@ package_group(
packages = tflm_python_op_resolver_friends(),
)

cc_library(
tflm_cc_library(
name = "python_ops_resolver",
srcs = [
"python_ops_resolver.cc",
],
hdrs = [
"python_ops_resolver.h",
],
copts = micro_copts(),
visibility = [
":op_resolver_friends",
"//tensorflow/lite/micro/integration_tests:__subpackages__",
5 changes: 0 additions & 5 deletions python/tflite_micro/interpreter_wrapper.cc
@@ -104,11 +104,6 @@ bool CheckTensor(const TfLiteTensor* tensor) {
return false;
}

if (tensor->sparsity != nullptr) {
PyErr_SetString(PyExc_ValueError, "TFLM doesn't support sparse tensors");
return false;
}

int py_type_num = TfLiteTypeToPyArrayType(tensor->type);
if (py_type_num == NPY_NOTYPE) {
PyErr_SetString(PyExc_ValueError, "Unknown tensor type.");
5 changes: 2 additions & 3 deletions signal/micro/kernels/BUILD
@@ -1,11 +1,11 @@
load(
"//tensorflow/lite/micro:build_def.bzl",
"micro_copts",
"tflm_cc_library",
)

package(licenses = ["notice"])

cc_library(
tflm_cc_library(
name = "register_signal_ops",
srcs = [
"delay.cc",
@@ -31,7 +31,6 @@ cc_library(
"irfft.h",
"rfft.h",
],
copts = micro_copts(),
visibility = [
"//tensorflow/lite/micro",
],
9 changes: 6 additions & 3 deletions tensorflow/compiler/mlir/lite/core/api/BUILD
@@ -1,15 +1,18 @@
load("//tensorflow/lite:build_def.bzl", "tflite_copts")
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl",
"tflm_cc_library",
"tflm_copts",
)

package(
default_visibility = ["//visibility:public"],
licenses = ["notice"],
)

cc_library(
tflm_cc_library(
name = "error_reporter",
srcs = ["error_reporter.cc"],
hdrs = ["error_reporter.h"],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
deps = [],
)
13 changes: 8 additions & 5 deletions tensorflow/lite/core/api/BUILD
@@ -1,12 +1,15 @@
load("//tensorflow/lite:build_def.bzl", "tflite_copts")
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl",
"tflm_cc_library",
"tflm_copts",
)

package(
default_visibility = ["//visibility:private"],
licenses = ["notice"],
)

cc_library(
tflm_cc_library(
name = "api",
srcs = [
"flatbuffer_conversions.cc",
@@ -17,7 +20,7 @@ cc_library(
"flatbuffer_conversions.h",
"tensor_utils.h",
],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
visibility = ["//visibility:public"],
deps = [
":error_reporter",
@@ -33,13 +36,13 @@ cc_library(
# also exported by the "api" target, so that targets which only want to depend
# on these small abstract base class modules can express more fine-grained
# dependencies without pulling in tensor_utils and flatbuffer_conversions.
cc_library(
tflm_cc_library(
name = "error_reporter",
hdrs = [
"error_reporter.h",
"//tensorflow/compiler/mlir/lite/core/api:error_reporter.h",
],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
visibility = [
"//visibility:public",
],
12 changes: 6 additions & 6 deletions tensorflow/lite/experimental/microfrontend/lib/BUILD
@@ -144,7 +144,7 @@ cc_test(
name = "filterbank_test",
srcs = ["filterbank_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":filterbank",
@@ -156,7 +156,7 @@ cc_test(
name = "frontend_test",
srcs = ["frontend_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":frontend",
@@ -168,7 +168,7 @@ cc_test(
name = "log_scale_test",
srcs = ["log_scale_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":log_scale",
@@ -180,7 +180,7 @@ cc_test(
name = "noise_reduction_test",
srcs = ["noise_reduction_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":noise_reduction",
@@ -192,7 +192,7 @@ cc_test(
name = "pcan_gain_control_test",
srcs = ["pcan_gain_control_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":pcan_gain_control",
@@ -204,7 +204,7 @@ cc_test(
name = "window_test",
srcs = ["window_test.cc"],
# Setting copts for experimental code to [], but this code should be fixed
# to build with the default copts (micro_copts())
# to build with the default copts
copts = [],
deps = [
":window",
9 changes: 6 additions & 3 deletions tensorflow/lite/kernels/BUILD
@@ -1,5 +1,8 @@
load("//tensorflow/lite:build_def.bzl", "tflite_copts")
load("//tensorflow/lite/micro:build_def.bzl", "micro_copts")
load("//tensorflow/lite/micro:build_def.bzl",
"tflm_cc_library",
"tflm_copts",
)

package(
default_visibility = [
@@ -17,15 +20,15 @@ cc_library(
deps = ["//tensorflow/lite/micro:micro_log"],
)

cc_library(
tflm_cc_library(
name = "kernel_util",
srcs = [
"kernel_util.cc",
],
hdrs = [
"kernel_util.h",
],
copts = tflite_copts() + micro_copts(),
copts = tflm_copts() + tflite_copts(),
deps = [
"//tensorflow/lite:array",
"//tensorflow/lite:kernel_api",