I have used PTQ for int8 export from a PyTorch model, and despite attempts at calibration there is a significant drop in detection accuracy.
I am moving to quantization-aware training (QAT) to improve the quantized int8 model. Is pytorch_quantization the best tool for that?
The end goal is a .trt/engine file inferencing at int8 precision with the best possible detection metrics.
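For reference, my understanding of where the PTQ accuracy drop comes from: the calibration step picks a clipping range (amax), and a poorly chosen range wastes the int8 dynamic range and inflates rounding error. A minimal stdlib-only sketch of symmetric int8 quantization (illustrative only, not the pytorch_quantization or TensorRT API):

```python
# Sketch of symmetric int8 quantization with a calibration-derived amax.
# Illustrative only -- not the pytorch_quantization or TensorRT API.

def quantize_int8(values, amax):
    """Quantize floats to int8 with a symmetric scale derived from amax
    (the absolute-maximum clipping range chosen during calibration)."""
    scale = amax / 127.0
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(q, amax):
    scale = amax / 127.0
    return [x * scale for x in q]

weights = [0.02, -0.5, 0.37, 0.9, -0.11]

# Tight calibration range: amax close to the true data range.
recovered = dequantize(quantize_int8(weights, amax=1.0), amax=1.0)
err_good = max(abs(a - b) for a, b in zip(weights, recovered))

# Outlier-inflated amax: most of the int8 range goes unused, so the
# bulk of the values quantize much more coarsely.
recovered_bad = dequantize(quantize_int8(weights, amax=10.0), amax=10.0)
err_bad = max(abs(a - b) for a, b in zip(weights, recovered_bad))

print(err_good < err_bad)  # tighter calibration range -> smaller error
```

This is why calibration quality matters so much for PTQ, and why QAT (which lets the network adapt to the quantization error during training) usually recovers accuracy that PTQ loses.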
TIA
> I am moving to quantization-aware training (QAT) to improve the quantized int8 model. Is pytorch_quantization the best tool for that?
pytorch_quantization will be deprecated; please use AMMO now.