Fixes for DORY #11
base: devel
Conversation
@@ -112,8 +112,8 @@ def forward(ctx, x, mul, add, div, signed, n_levels_out, cmsis_requant):
    # division. Division is with flooring.
    else:
        y = x * mul + add
        # Avoid round to even behaviour, friggin pytorch
        y = torch.floor((y / div) + 0.5)
        # LMACAN: Dory doesn't like the `+ 0.5` fix
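For context, a minimal standalone sketch (not part of the diff) of the rounding behaviours under discussion: `torch.round` rounds half to even, `floor(x + 0.5)` rounds half up (which is what the `+ 0.5` in the line above emulates), and plain `floor` truncates toward negative infinity.

```python
import torch

y = torch.tensor([0.5, 1.5, 2.5, -0.5])

# torch.round follows round-half-to-even ("banker's rounding")
print(torch.round(y))        # tensor([0., 2., 2., -0.])

# floor(x + 0.5) is round-half-up, which the `+ 0.5` above emulates
print(torch.floor(y + 0.5))  # tensor([1., 2., 3., 0.])

# plain floor, i.e. division with flooring only
print(torch.floor(y))        # tensor([0., 1., 2., -1.])
```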
Please motivate this more; are you sure you are not making a mistake at some other point? This is critical code for many applications.
It is hard for me to motivate this more with my limited knowledge of quantlib. Do you have an alternative way to handle this?
This creates an extra addition layer in the produced ONNX graph, and DORY expects a precise pattern of layers to recognize requantization.
This will definitely break compatibility with the RQS strategy in Deeploy, where we do rounding by default. I suggest that you make arithmetic rounding in the RequantShift layer configurable and disable it in flows targeting DORY as the backend.
I am unfortunately not very familiar with DORY, but for Deeploy we (or at least I) export fused RQS nodes directly.
We should probably talk with @da-gazzi as well - I know that Georg implemented rounding by adding "half the shift" to the bias; it seems to me like adding 0.5 here does pretty much the same. We should disentangle this a bit before merging, but if there are multiple places where rounding biases are added, we should fold that into one spot.
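To make the equivalence concrete, here is a small numerical check (a sketch, not code from the PR or QuantLib; the `mul`, `add`, `div` values are made up) showing that folding half the divisor into the bias gives the same result as the explicit `+ 0.5` after the division, for a positive divisor:

```python
import torch

x = torch.randint(-128, 128, (1000,)).float()
mul, add, div = 3.0, 7.0, 16.0   # example requantization parameters (made up)

# Variant under discussion: round by adding 0.5 after the division
y_half = torch.floor((x * mul + add) / div + 0.5)

# Variant with the rounding folded into the bias: add div/2 before the
# division, so the graph stays a plain mul -> add -> floor-div pattern
y_fused = torch.floor((x * mul + (add + div / 2)) / div)

assert torch.equal(y_half, y_fused)
```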
I concur with @Scheremo: the only "good" solution is to fuse the rounding with the bias value and not expose this `+0.5` here. I do not know how that is handled in Deeploy, but as this is anyway an addition of 0.5 happening after requantization, it cannot really represent an integer op in a bit-true fashion. Fusing this inside QuantLib avoids any confusion.
Seems also related to this issue
I concur with @Scheremo: the only "good" solution is to fuse the rounding with the bias value and not expose this `+0.5` here. I do not know how that is handled in Deeploy, but as this is anyway an addition of 0.5 happening after requantization, it cannot really represent an integer op in a bit-true fashion. Fusing this inside QuantLib avoids any confusion.
Agree - the idea of the "RequantShift" layer is that it represents the integer operations performed on the device 1:1. The activation rounding is handled by statically adding half an eps to the bias value; adding 0.5 here would achieve the same thing, but it breaks the exported net if you don't use custom nodes. Is there something keeping us from just using the "integer rounding" approach in all cases? It is already configurable, i.e., you can turn it on/off as desired with the `rounding` flag to the PACTActivation classes.
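For illustration, a rough sketch of the fused, configurable variant being proposed (hypothetical code, not the actual quantlib implementation; only the `rounding` flag name is taken from the comment above):

```python
import torch

def requant_shift(x, mul, add, div, rounding=True):
    """Integer requantization: y = floor((x * mul + bias) / div).

    With rounding=True, half the divisor is folded statically into the
    bias, so the exported graph keeps the plain mul -> add -> div
    pattern that DORY matches, while arithmetic rounding is preserved.
    With rounding=False, the division simply floors.
    """
    bias = add + div // 2 if rounding else add
    return torch.floor((x * mul + bias) / div)
```

A flow targeting DORY would then never see a separate `+ 0.5` addition node, since the rounding offset lives inside the exported bias value.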