Fixes for DORY #11

Status: Open. Wants to merge 4 commits into base branch devel.
Conversation

@lukamac commented on Dec 1, 2023

No description provided.

da-gazzi and others added 3 commits on July 20, 2023:
* remove 'graphs' editing package as it's not used and buggy

* remove remaining references to old 'graphs' package
@lukamac requested review from Scheremo and da-gazzi on December 1, 2023
@@ -112,8 +112,8 @@ def forward(ctx, x, mul, add, div, signed, n_levels_out, cmsis_requant):
    # division. Division is with flooring.
    else:
        y = x * mul + add
        # Avoid round to even behaviour, friggin pytorch
        y = torch.floor((y / div) + 0.5)
        # LMACAN: Dory doesn't like the `+ 0.5` fix
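For context on the comment in the diff: `torch.round` follows round-half-to-even, while `floor(y + 0.5)` rounds halfway cases upward, which is what the `+ 0.5` was added to obtain. A minimal sketch of the difference (illustration only, not part of the PR):

```python
import torch

y = torch.tensor([0.5, 1.5, 2.5, -0.5])

# torch.round follows round-half-to-even ("banker's rounding")
print(torch.round(y))        # tensor([0., 2., 2., -0.])

# floor(y + 0.5) rounds halfway cases toward +infinity instead
print(torch.floor(y + 0.5))  # tensor([1., 2., 3., 0.])
```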
Collaborator
Please motivate this more; are you sure you are not making a mistake at some other point? This is critical code for many applications.

Author
It is hard for me to motivate this more with my limited knowledge of quantlib. Do you have an alternative way to handle this?

Author
This creates an extra addition node in the produced ONNX graph, and DORY expects a precise pattern of layers in order to recognize requantization.

Contributor
This will definitely break compatibility with the RQS strategy in Deeploy, where we do rounding by default. I suggest that you make arithmetic rounding in the RequantShift layer configurable and disable it in flows targeting DORY as the backend.
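A minimal sketch of how such a flag could look (illustrative only; the function name and signature below are not the actual quantlib API):

```python
import torch

def requant(x, mul, add, div, rounding=True):
    # Hypothetical helper, not the actual quantlib code: rounding behaviour is
    # exposed as a flag so Deeploy-targeting flows keep arithmetic rounding
    # while DORY-targeting flows can disable it.
    y = x * mul + add
    if rounding:
        # arithmetic rounding via + 0.5 before the flooring division
        return torch.floor(y / div + 0.5)
    # plain flooring division, matching the layer pattern DORY expects
    return torch.floor(y / div)
```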

Contributor
I am unfortunately not very familiar with DORY, but for Deeploy we (or at least I) export fused RQS nodes directly.

Collaborator
We should probably talk with @da-gazzi as well. I know that Georg implemented rounding by adding "half the shift" to the bias; it seems to me that adding 0.5 here does pretty much the same thing. We should disentangle this a bit before merging, but if there are multiple places where rounding biases are added, we should fold that into one spot.

Member
I concur with @Scheremo: the only "good" solution is to fuse the rounding into the bias value and not expose this +0.5 here. I do not know how that is handled in Deeploy, but since this is an addition of 0.5 happening after requantization, it cannot really represent an integer op in a bit-true fashion.
Fusing this inside QuantLib avoids any confusion.
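As an illustration of the proposed fusion (a sketch assuming integer `mul`, `add`, and `div`, not the actual QuantLib change): folding half of `div` into the bias gives the same result as the `+ 0.5`, while keeping every exported node an integer op.

```python
import torch

x, mul, add, div = torch.tensor([17.]), 3, 5, 8

# rounding applied after the division (the + 0.5 under discussion)
a = torch.floor((x * mul + add) / div + 0.5)

# the same rounding folded into the bias: only integer mul/add/div remain
b = torch.floor((x * mul + add + div // 2) / div)

print(a, b)  # both tensor([7.])
```

For integer operands the two agree whether `div` is even or odd, since adding 0.5 to an integer numerator can never push it past the next multiple of `div`.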

Collaborator
This seems related to this issue as well.

Contributor
> I concur with @Scheremo: the only "good" solution is to fuse the rounding into the bias value and not expose this +0.5 here. I do not know how that is handled in Deeploy, but since this is an addition of 0.5 happening after requantization, it cannot really represent an integer op in a bit-true fashion. Fusing this inside QuantLib avoids any confusion.

Agree - the idea of the "RequantShift" layer is that it represents the integer operations performed on the device 1:1. The activation rounding is handled by statically adding half an eps to the bias value; adding 0.5 here would achieve the same thing, but it breaks the exported net if you don't use custom nodes. Is there anything keeping us from just using the "integer rounding" approach in all cases? It is already configurable, i.e. you can turn it on or off as desired with the rounding flag of the PACTActivation classes.

@lukamac changed the title from "Small fixes for DORY" to "Fixes for DORY" on Dec 1, 2023
@Scheremo changed the base branch from main to devel on February 1, 2024