Issues: NVIDIA/TensorRT
Open issues
Can I implement block quantization through tensorflow-quantization?
#4283 · opened Dec 15, 2024 by lqq-feel
[Feature request] Allow uint8 output without a preceding ICastLayer
#4282 · opened Dec 12, 2024 by QMassoz
TensorRT 10.5.0: CPU memory leak when using nvinfer1::createInferBuilder on RTX 4060
#4281 · opened Dec 12, 2024 by Iridium771110
TRT conversion failure with TensorRT 8.6.1.6 when converting a CO-DETR model on RTX 4090
#4280 · opened Dec 12, 2024 by edwardnguyen1705
INT8 quantization of a DINOv2 TensorRT model is not faster than FP16 quantization
#4273 · opened Dec 6, 2024 by mr-lz
How to fuse a QuantizeLinear node with my custom op when converting ONNX to a TRT engine
#4270 · opened Dec 5, 2024 by AnnaTrainingG
Code inside "quickstart/common" is outdated
Labels: triaged
#4265 · opened Nov 30, 2024 by hamrah-cluster
How to export a 4-bit pytorch_quantization model to a .engine model?
Labels: triaged
#4262 · opened Nov 26, 2024 by StarryAzure
Self-compiled EfficientNMS_TRT plugin does not work
Labels: triaged
#4261 · opened Nov 26, 2024 by pango99
Error in ms_deformable_im2col_cuda: invalid configuration argument
Labels: triaged
#4260 · opened Nov 25, 2024 by nainaigetuide
Using a TRT model on different graphics cards
Labels: triaged
#4259 · opened Nov 23, 2024 by wahaha
CUDA Runtime (out of memory) failure of TensorRT 10.3.0 when running trtexec on RTX 4060 / Jetson / etc.
#4258 · opened Nov 22, 2024 by zargooshifar
[ONNXParser] TensorRT fails to load ONNX checkpoints with separated weight and bias files
#4257 · opened Nov 22, 2024 by theanh-ktmt
Polygraphy: how to compare precision layer by layer with TensorRT when my ONNX model has a custom operator (and a corresponding TensorRT plugin)?
Labels: Plugins, Tools: Polygraphy, triaged
#4256 · opened Nov 22, 2024 by MyraYu2022
Error Code 10: Internal Error (Could not find any implementation for node) failure of TensorRT 8.5 when running on Jetson Xavier NX
Labels: question, triaged
#4255 · opened Nov 20, 2024 by fettahyildizz
User allocator error allocating 86114304000-byte buffer failure of TensorRT 10.6 when running demo_img2vid.py on RTX 4090
Labels: Demo: Diffusion, triaged
#4254 · opened Nov 19, 2024 by kolyh
!config.getFlag(BuilderFlag::kFP16) failure of TensorRT 10.6 when running demo_txt2img_flux.py on H20
Labels: Demo: Diffusion, triaged
#4253 · opened Nov 19, 2024 by yja1
Linux env var TRT_LIBPATH in the wrong format when building TensorRT 10.6 OSS
Labels: OSS Build, triaged
#4251 · opened Nov 18, 2024 by Synapsess
Different output for the same input between dynamic-shape and static-shape models
Labels: Accuracy, internal-bug-tracked, triaged
#4250 · opened Nov 15, 2024 by smarttowel