
Commit ca30560
Github: update issue templates [no ci]
JohannesGaessler committed Nov 25, 2024
1 parent 5a89877 commit ca30560
Showing 3 changed files with 30 additions and 20 deletions.
12 changes: 8 additions & 4 deletions .github/ISSUE_TEMPLATE/010-bug-compilation.yml
@@ -24,7 +24,8 @@ body:
   - type: dropdown
     id: operating-system
     attributes:
-      label: Which operating systems do you know to be affected?
+      label: Operating systems
+      description: Which operating systems do you know to be affected?
       multiple: true
       options:
         - Linux
@@ -41,14 +42,17 @@
       description: Which GGML backends do you know to be affected?
       options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
       multiple: true
+    validations:
+      required: true
   - type: textarea
-    id: steps_to_reproduce
+    id: info
     attributes:
-      label: Steps to Reproduce
+      label: Problem description & steps to reproduce
       description: >
-        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        Please give us a summary of the problem and tell us how to reproduce it.
         If you can narrow down the bug to specific compile flags, that information would be very much appreciated by us.
       placeholder: >
+        I'm trying to compile llama.cpp with CUDA support on a fresh install of Ubuntu and get error XY.
         Here are the exact commands that I used: ...
     validations:
       required: true
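To make the net effect of this file's hunks easier to read, here is the updated problem-description field of 010-bug-compilation.yml reassembled into plain YAML. The rendered diff strips indentation, so the nesting below is reconstructed from GitHub's issue-form schema rather than copied verbatim:

  - type: textarea
    id: info                      # renamed from steps_to_reproduce
    attributes:
      label: Problem description & steps to reproduce
      description: >
        Please give us a summary of the problem and tell us how to reproduce it.
        If you can narrow down the bug to specific compile flags, that information would be very much appreciated by us.
      placeholder: >
        I'm trying to compile llama.cpp with CUDA support on a fresh install of Ubuntu and get error XY.
        Here are the exact commands that I used: ...
    validations:
      required: true              # the field stays mandatory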
15 changes: 9 additions & 6 deletions .github/ISSUE_TEMPLATE/011-bug-results.yml
@@ -26,7 +26,8 @@ body:
   - type: dropdown
     id: operating-system
     attributes:
-      label: Which operating systems do you know to be affected?
+      label: Operating systems
+      description: Which operating systems do you know to be affected?
       multiple: true
       options:
         - Linux
@@ -43,6 +44,8 @@
       description: Which GGML backends do you know to be affected?
       options: [AMX, BLAS, CPU, CUDA, HIP, Kompute, Metal, Musa, RPC, SYCL, Vulkan]
       multiple: true
+    validations:
+      required: true
   - type: textarea
     id: hardware
     attributes:
@@ -55,20 +58,20 @@
   - type: textarea
     id: model
     attributes:
-      label: Model
+      label: Models
       description: >
-        Which model at which quantization were you using when encountering the bug?
+        Which model(s) at which quantization were you using when encountering the bug?
         If you downloaded a GGUF file off of Huggingface, please provide a link.
       placeholder: >
         e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
     validations:
       required: false
   - type: textarea
-    id: steps_to_reproduce
+    id: info
     attributes:
-      label: Steps to Reproduce
+      label: Problem description & steps to reproduce
       description: >
-        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        Please give us a summary of the problem and tell us how to reproduce it.
         If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
         that information would be very much appreciated by us.
       placeholder: >
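Likewise, the renamed model field of 011-bug-results.yml, reassembled from the third hunk above with reconstructed indentation, should read roughly as follows:

  - type: textarea
    id: model
    attributes:
      label: Models               # was "Model"; the new wording covers several models
      description: >
        Which model(s) at which quantization were you using when encountering the bug?
        If you downloaded a GGUF file off of Huggingface, please provide a link.
      placeholder: >
        e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
    validations:
      required: false             # optional, unchanged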
23 changes: 13 additions & 10 deletions .github/ISSUE_TEMPLATE/019-bug-misc.yml
@@ -14,7 +14,7 @@ body:
     id: version
     attributes:
       label: Name and Version
-      description: Which version of our software are you running? (use `--version` to get a version string)
+      description: Which version of our software is affected? (You can use `--version` to get a version string.)
       placeholder: |
         $./llama-cli --version
         version: 2999 (42b4109e)
@@ -24,7 +24,8 @@
   - type: dropdown
     id: operating-system
     attributes:
-      label: Which operating systems do you know to be affected?
+      label: Operating systems
+      description: Which operating systems do you know to be affected?
       multiple: true
       options:
         - Linux
@@ -33,36 +34,38 @@
         - BSD
         - Other? (Please let us know in description)
     validations:
-      required: true
+      required: false
   - type: dropdown
     id: module
     attributes:
       label: Which llama.cpp modules do you know to be affected?
       multiple: true
       options:
+        - Documentation/Github
         - libllama (core library)
         - llama-cli
         - llama-server
         - llama-bench
         - llama-quantize
         - Python/Bash scripts
+        - Test code
         - Other (Please specify in the next section)
     validations:
-      required: true
+      required: false
   - type: textarea
-    id: steps_to_reproduce
+    id: info
     attributes:
-      label: Steps to Reproduce
+      label: Problem description & steps to reproduce
       description: >
-        Please tell us how to reproduce the bug and any additional information that you think could be useful for fixing it.
+        Please give us a summary of the problem and tell us how to reproduce it (if applicable).
     validations:
       required: true
   - type: textarea
     id: first_bad_commit
     attributes:
       label: First Bad Commit
       description: >
-        If the bug was not present on an earlier version: when did it start appearing?
+        If the bug was not present on an earlier version and it's not trivial to track down: when did it start appearing?
         If possible, please do a git bisect and identify the exact commit that introduced the bug.
     validations:
       required: false
@@ -71,8 +74,8 @@
     attributes:
       label: Relevant log output
       description: >
-        Please copy and paste any relevant log output, including the command that you entered and any generated text.
+        If applicable, please copy and paste any relevant log output, including the command that you entered and any generated text.
         This will be automatically formatted into code, so no need for backticks.
       render: shell
     validations:
-      required: true
+      required: false
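All three templates now share the same reworked operating-system question: the long question moves from the label into a description, leaving a short "Operating systems" label. Reassembled from the visible hunks (indentation reconstructed; the options between "Linux" and "BSD" fall outside the shown context), the dropdown should look roughly like:

  - type: dropdown
    id: operating-system
    attributes:
      label: Operating systems
      description: Which operating systems do you know to be affected?
      multiple: true
      options:
        - Linux
        # ... further options not visible in the hunks ...
        - BSD
        - Other? (Please let us know in description)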
