
feat(compression): implement tensor decompression in op concatenation #3014

Merged: 2 commits into tensorflow:main from feat-compression-b on Dec 16, 2024

Conversation

rkuester (Contributor):

Implement tensor decompression in op concatenation. Extend
tests to validate operation on compressed tensors.

BUG=part of #2636

@rkuester rkuester requested a review from a team as a code owner December 15, 2024 01:09
@rkuester rkuester requested a review from suleshahid December 15, 2024 01:10
output_tensor->params.zero_point);
} else if (input_type == kTfLiteInt16) {
// Make sure that all Int16 inputs have a null zero-point.
TF_LITE_ENSURE_EQ(context, input->params.zero_point, 0);
Collaborator:

Is there a reason why the scale is not checked in the int16 case? AFAICS, the concatenation kernel (at least the reference implementation) does not take scaling/quantization into account at all.

inline void Concatenation(const ConcatenationParams& params,

Member:

I agree with your conclusion, however this is a copy-paste of the TfLite code.


Yes. Also, is there a reason why it has to be zero? It seems that concatenation in general shouldn't care about quantization parameters, as long as they match between inputs and output.

Member:

@tinskip This same check is performed by the LiteRT (TfLite) reference implementation.

@mergify mergify bot merged commit 50e7e5d into tensorflow:main Dec 16, 2024
98 of 99 checks passed
@rkuester rkuester deleted the feat-compression-b branch December 16, 2024 17:58
6 participants