Issues: qwopqwop200/GPTQ-for-LLaMa

Add support for MiniCPM
#289 opened Jul 4, 2024 by LDLINGLINGLING
GPTQ vs bitsandbytes
#288 opened Apr 5, 2024 by iaoxuesheng
Error when loading GPTQ model
#287 opened Feb 12, 2024 by KyrieCui
Support Mistral
#284 opened Oct 14, 2023 by nbollman
neox.py needs to add "import math"
#282 opened Aug 14, 2023 by StudyingShao (see the sketch of this fix after the list)
LoRA and differences from bitsandbytes
#281 opened Aug 3, 2023 by RonanKMcGovern
Issue with GPTQ
#274 opened Jul 5, 2023 by d0lphin
llama_inference 4-bit error
#270 opened Jun 26, 2023 by gjm441
SqueezeLLM support?
#264 opened Jun 15, 2023 by nikshepsvn
What is the right perplexity number?
#263 opened Jun 15, 2023 by JianbangZ (see the sketch of the computation after the list)
Finetuning Quantized LLaMA
#259 opened Jun 10, 2023 by Qifeng-Wu99
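
Issue #282 names its own fix: neox.py uses the standard-library math module without importing it. A minimal sketch of what the report describes, assuming a call site of this kind (the exact location in neox.py is an assumption):

    # Fix described in #282: neox.py reportedly calls math.* without
    # importing the module, which raises NameError at runtime. Adding
    # the standard-library import at the top of the file resolves it.
    import math

    # Hypothetical call site of the kind that fails without the import;
    # the real one in neox.py may differ.
    scale = 1.0 / math.sqrt(4096)
    print(scale)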
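For #263, "the right perplexity number" depends on computing it the same way across runs. A minimal sketch of the standard definition, exp of the mean per-token negative log-likelihood; the sample values below are placeholders, not results from this repo:

    import math

    def perplexity(token_nlls: list[float]) -> float:
        """Perplexity = exp(mean negative log-likelihood per token), in nats."""
        return math.exp(sum(token_nlls) / len(token_nlls))

    # Placeholder per-token losses; real values come from an evaluation run.
    # Numbers are comparable only with the same dataset, tokenizer, and
    # context length.
    print(perplexity([2.1, 1.8, 2.4, 2.0]))  # ~7.96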