Triton Performance Analyzer is a CLI tool that can help you optimize the inference performance of models running on Triton Inference Server by measuring changes in performance as you experiment with different optimization strategies.

Features:
- Concurrency Mode simulates load by maintaining a specific concurrency of outgoing requests to the server (example invocations follow this list)
- Request Rate Mode simulates load by sending consecutive requests at a specific rate to the server
- Custom Interval Mode simulates load by sending consecutive requests at specific intervals to the server
- Time Windows Mode measures model performance repeatedly over a specific time interval until performance has stabilized
- Count Windows Mode measures model performance repeatedly over a specific number of requests until performance has stabilized
- Sequence Models, Ensemble Models, and Decoupled Models can be profiled in addition to standard/stateless/coupled models
- Input data for model inferences can be auto-generated or user-specified, and model outputs can be verified
- TensorFlow Serving and TorchServe can be used as the inference server in addition to the default Triton server
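As a sketch of how the load modes above are selected in practice, recent Perf Analyzer releases expose them through command-line options such as `--concurrency-range`, `--request-rate-range`, and `--request-intervals`, shown here against the `simple` example model from the quick start below. Exact flags and defaults can vary by release, so check `perf_analyzer --help` for yours; `intervals.txt` is a hypothetical file name.

```bash
# Concurrency Mode: sweep the number of outstanding requests from 1 to 4
perf_analyzer -m simple --concurrency-range 1:4

# Request Rate Mode: issue requests at 100, then 200, inferences per second
perf_analyzer -m simple --request-rate-range 100:200:100

# Custom Interval Mode: read the gap between consecutive requests from a file
# (one time interval, in microseconds, per line)
perf_analyzer -m simple --request-intervals intervals.txt
```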
The steps below will guide you on how to start using Perf Analyzer.
```bash
export RELEASE=<yy.mm> # e.g. to use the release from the end of February of 2023, do `export RELEASE=23.02`

docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3

docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3
```
```bash
# inside triton container
git clone --depth 1 https://github.com/triton-inference-server/server

mkdir model_repository ; cp -r server/docs/examples/model_repository/simple model_repository
```
```bash
# inside triton container
tritonserver --model-repository $(pwd)/model_repository &> server.log &

# confirm server is ready, look for 'HTTP/1.1 200 OK'
curl -v localhost:8000/v2/health/ready

# detach (CTRL-p CTRL-q)
```
```bash
docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk

docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk
```
```bash
# inside sdk container
perf_analyzer -m simple
```
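Once the basic run works, the measurement windows described in the feature list can be selected the same way. The flags below exist on recent releases; treat the values as illustrative rather than recommended settings.

```bash
# Time Windows Mode (the default): repeat 10-second measurement windows
# (--measurement-interval is in milliseconds) until results stabilize
perf_analyzer -m simple --measurement-mode time_windows --measurement-interval 10000

# Count Windows Mode: measure over windows of at least 100 requests instead
perf_analyzer -m simple --measurement-mode count_windows --measurement-request-count 100
```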
See the full quick start guide for additional tips on how to analyze output.
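Input data can likewise be supplied from a JSON file instead of being auto-generated. The sketch below assumes the `simple` example model, whose two INT32 inputs each take 16 values; `input_data.json` is a hypothetical file name, and the layout follows the documented `--input-data` JSON format.

```bash
# Hypothetical input file for the "simple" model: each entry in "data" maps
# an input tensor name to a flat list of values for one inference
cat > input_data.json <<'EOF'
{
  "data": [
    {
      "INPUT0": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
      "INPUT1": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
    }
  ]
}
EOF

perf_analyzer -m simple --input-data input_data.json
```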
Contributions to Triton Perf Analyzer are more than welcome. To contribute, please review the contribution guidelines, then fork and create a pull request.
We appreciate any feedback, questions, or bug reports regarding this project. When you need help with code, follow the process outlined in the Stack Overflow guide on minimal reproducible examples (https://stackoverflow.com/help/mcve). Ensure posted examples are:

- minimal - use as little code as possible that still produces the same problem
- complete - provide all parts needed to reproduce the problem. Check whether you can strip external dependencies and still show the problem. The less time we spend on reproducing problems, the more time we have to fix them
- verifiable - test the code you are about to provide to make sure it reproduces the problem. Remove all other problems that are not related to your request/question