
Add LLaMA end-to-end benchmarking #19985

Merged 18 commits on Mar 22, 2024

Conversation

kunal-vaishnavi
Contributor

Description

This PR adds a benchmarking script that measures end-to-end performance and saves the results in a CSV file.

Motivation and Context

With this PR, end-to-end performance can be easily measured for many large language models such as LLaMA-2. The performance numbers for LLaMA-2 are located [here](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/python/models/llama).
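For readers skimming this thread, the core pattern of such a script is simple: time each full generation run (prompt in, completion out) and append the measurements to a CSV. Below is a minimal, stdlib-only sketch of that pattern; the `run_model` stub, prompt list, run count, and column names are hypothetical placeholders, not the actual code added in this PR.

```python
# Minimal sketch of end-to-end latency benchmarking with CSV output.
# Hypothetical illustration only -- not the script added in this PR.
import csv
import statistics
import time

def run_model(prompt: str) -> str:
    """Stand-in for a real model call (e.g. an ONNX Runtime session
    generating tokens for LLaMA-2). Replace with an actual generate() call."""
    time.sleep(0.01)  # simulate inference latency
    return "output"

prompts = ["What is ONNX Runtime?", "Summarize attention in one sentence."]
num_runs = 5
rows = []

for prompt in prompts:
    run_model(prompt)  # warm-up run, excluded from the measurements
    latencies = []
    for _ in range(num_runs):
        start = time.perf_counter()
        run_model(prompt)  # end-to-end: prompt in, completion out
        latencies.append(time.perf_counter() - start)
    rows.append({
        "prompt": prompt,
        "avg_latency_s": round(statistics.mean(latencies), 4),
        "min_latency_s": round(min(latencies), 4),
    })

# Persist the results so runs can be compared across models and configs.
with open("benchmark_e2e.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["prompt", "avg_latency_s", "min_latency_s"])
    writer.writeheader()
    writer.writerows(rows)
```

The warm-up run is excluded so that one-time costs (session creation, kernel compilation, cache allocation) do not skew the averaged latency.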

kunal-vaishnavi added a commit to microsoft/onnxruntime-inference-examples that referenced this pull request Mar 20, 2024
### Description

This PR updates the end-to-end benchmarking numbers for LLaMA-2.

### Motivation and Context

The numbers were gathered with the end-to-end benchmarking script in [this PR](microsoft/onnxruntime#19985).
kunal-vaishnavi merged commit 6238e9c into microsoft:main Mar 22, 2024
90 of 94 checks passed
YUNQIUGUO pushed a commit that referenced this pull request Mar 25, 2024
TedThemistokleous pushed a commit to TedThemistokleous/onnxruntime that referenced this pull request May 7, 2024