- https://github.com/shouxieai/tensorRT_cpp
- The new version supports the latest TensorRT and YOLOv5, uses a new parser, and produces fewer model compilation errors:
http://zifuture.com:1556/fs/16.std/release_tensorRT_yolov5.zip
- Supports PyTorch ONNX plugins (DCN, HSwish, etc.)
- Simpler inference and plugin APIs
Install protobuf == 3.11.4 (or >= 3.8.x, but that is more troublesome), then build and run:
bash scripts/getALL.sh
make run -j32
#include <opencv2/opencv.hpp>   // cv::Mat, cv::imread; also include the repository's TRTInfer header

// load a serialized TensorRT engine built from efficientnet-b0
auto engine = TRTInfer::loadEngine("models/efficientnet-b0.fp32.trtmodel");

// ImageNet normalization constants
float mean[3] = {0.485, 0.456, 0.406};
float std[3]  = {0.229, 0.224, 0.225};
cv::Mat image = cv::imread("img.jpg");

// multi-batch sample: resize the input tensor to batch size 2 and
// upload the normalized image into each batch slot on the GPU
auto input = engine->input();
input->resize(2);
input->setNormMatGPU(0, image, mean, std);
input->setNormMatGPU(1, image, mean, std);

engine->forward();

// fetch results and copy them back to the CPU,
// either by output index or by tensor name
auto prob = engine->output(0)->cpu<float>();
auto hm   = engine->tensor("hm")->cpu<float>();
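For a classification model such as efficientnet-b0, the pointer returned by `cpu<float>()` can be post-processed directly on the CPU. The sketch below picks the top-1 class from the batch-0 scores; the function name `print_top1` and the 1000-class count are illustrative assumptions, not part of the repository's API.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical post-processing: `prob` points at the batch-0 scores;
// num_classes = 1000 is assumed for an ImageNet-trained efficientnet-b0.
void print_top1(const float* prob, int num_classes = 1000) {
    const float* best = std::max_element(prob, prob + num_classes);
    printf("top-1 class = %d, score = %.4f\n", int(best - prob), *best);
}
```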
- TensorRT 7.0 or TensorRT 6.0
- OpenCV 3.4.6
- cuDNN 7.6.3
- CUDA 10.0
- protobuf v3.8.x
- Visual Studio 2017
- lean-windows.zip (includes TensorRT, OpenCV, cuDNN, CUDA, protobuf)
- PyTorch ONNX export: plugin_onnx_export.py
- Plugin implementations: MReLU.cu, HSwish.cu, DCNv2.cu (see the HSwish sketch below)
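As a reference for what the HSwish plugin computes, here is a minimal sketch of the hard-swish activation in plain C++; the standalone function is illustrative only, while HSwish.cu applies the same formula element-wise in a CUDA kernel.

```cpp
#include <algorithm>

// Reference implementation of hard-swish: x * relu6(x + 3) / 6.
// Illustrative only; not the plugin's actual TensorRT interface.
inline float hswish(float x) {
    float relu6 = std::min(std::max(x + 3.0f, 0.0f), 6.0f);
    return x * relu6 / 6.0f;
}
```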