support align_corners for Resize operator #418
Conversation
Hi @allenling, if coordinate_transformation_mode is "align_corners", we should give the size we want to resize the input to, rather than the scale. You can use the following code to generate resize_align_corners.onnx, then use the modified onnx2trt to generate a TensorRT engine and compare their results.
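To see why align_corners needs an explicit output size rather than a scale, here is a small sketch of the ONNX "align_corners" coordinate transform (the helper name is hypothetical, and this is written from the operator definition, not code from this repo):

```python
def align_corners_coord(x_out: int, len_in: int, len_out: int) -> float:
    """Input coordinate sampled for output index x_out under align_corners:
    x_in = x_out * (len_in - 1) / (len_out - 1)."""
    if len_out == 1:
        return 0.0
    return x_out * (len_in - 1) / (len_out - 1)

# Two output lengths that both roughly correspond to "scale = 2" for
# len_in = 4 (7 vs 8) sample different input coordinates, so the exact
# output size, not the scale, defines the mapping:
print([align_corners_coord(x, 4, 7) for x in range(7)])
# [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
print([align_corners_coord(x, 4, 8) for x in range(8)])
```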
Sorry, I posted the wrong issue; it should be NVIDIA/TensorRT#273. If you would check out that issue, I would really appreciate it.
@Bohrhh I have a problem converting a model from PyTorch to TRT with the Interpolate op too, and even with the latest code from onnx-tensorrt I still get an error like "Assertion failed: ctx->tensors().count(inputName)". So I used your Docker image (bohrhh/tensorrt:7.0) and it worked, but the outputs of the ONNX model and the TRT model differ significantly (the ONNX result is the same as PyTorch's). Strangely, when I convert my model to ONNX without loading weights and then convert it to a TRT engine with the onnx2trt command in your Docker image, all the model outputs agree. What is the onnx-tensorrt version in your Docker image? It seems there is no other way to convert my model except using the TRT network API?
Inside the Docker image, onnx2trt matches the TensorRT version, i.e. 7.0. Your situation is very strange; I have never seen a case where the ONNX model's weights affect the converted TensorRT model so noticeably.
Haha, forgive my broken English. I tested again this morning, and it seems the problem has nothing to do with your environment; your environment is correct. I removed the last Interpolate(mode='bilinear', align_corners=True) from the forward pass; since its output size equals its input size, that layer is effectively a no-op there. With it gone, the remaining Interpolate layers (mode='nearest') can be exported to ONNX with opset=10 (with the size argument given as a constant value), and after conversion the ONNX output matches the original model. I could then convert the model with the official TensorRT image; this time the resulting TRT model matches the one produced by onnx2trt in your environment, but both differ from the original model. It is really bizarre that the mismatch only appears after loading the weights; I have no idea where to start looking for the problem. As for redefining the network with the TRT API, the original structure is too complex and there are not many demos, so I am afraid it would take a lot of time. Also, for some reason, the TensorRT 7.1 image I built following the official steps, with the latest onnx2trt installed, reports "Assertion failed: ctx->tensors().count(inputName)" during conversion.
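The nearest-mode Interpolate layers mentioned above export cleanly at opset 10 because that opset's Resize uses the simple "asymmetric" transform with floor rounding, which depends only on the scale. A minimal pure-Python sketch of that behaviour, written from the ONNX spec as an illustration (not code from this repo):

```python
def resize_nearest_floor(data, scale):
    """Opset-10-style nearest resize: x_in = floor(x_out / scale)."""
    out_len = int(len(data) * scale)
    return [data[int(x_out / scale)] for x_out in range(out_len)]

print(resize_nearest_floor([10, 20, 30], 2.0))
# [10, 10, 20, 20, 30, 30]
```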
I have tested this code on a DeepLabV3 model, PyTorch ==> ONNX ==> TensorRT, and it worked. The TensorRT model's output is the same as PyTorch's.
Closing in favor of #538. If anyone on this thread still has issues with Resize, feel free to open an issue.
I found that after Interpolate, the resulting TRT engine becomes a fixed-batch-size model, which makes it impossible to create a TRT plan file that supports dynamic batching. A very simple example could be:
then create a plan file with trtexec:
when loading the TRT model with trtserver, it outputs:
Facing NVIDIA/TensorRT#996 |
How to handle the ONNX node below?
input: "358"
input: "360"
input: "360"
input: "367"
output: "368"
op_type: "Resize"
attribute {
name: "coordinate_transformation_mode"
s: "align_corners"
type: STRING
}
attribute {
name: "cubic_coeff_a"
f: -0.75
type: FLOAT
}
attribute {
name: "mode"
s: "linear"
type: STRING
}
attribute {
name: "nearest_mode"
s: "floor"
type: STRING
}
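For reference, a minimal 1-D sketch of what this node computes, i.e. Resize with mode "linear" and coordinate_transformation_mode "align_corners", written from the ONNX operator definition (not code from onnx-tensorrt; the cubic_coeff_a and nearest_mode attributes above are unused in linear mode):

```python
def resize_linear_align_corners(data, out_len):
    """1-D linear Resize with the align_corners coordinate transform."""
    if out_len == 1:
        return [data[0]]
    n = len(data)
    out = []
    for x_out in range(out_len):
        x_in = x_out * (n - 1) / (out_len - 1)  # align_corners mapping
        lo = int(x_in)                          # left neighbour index
        hi = min(lo + 1, n - 1)                 # right neighbour index
        frac = x_in - lo
        out.append(data[lo] * (1 - frac) + data[hi] * frac)
    return out

print(resize_linear_align_corners([1.0, 2.0, 3.0, 4.0], 7))
# [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0] -- corner values are preserved
```

Note that the first and last output values exactly equal the first and last input values, which is the defining property of align_corners.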