Revamp llama.cpp docs #1214
Conversation
Nice! Could we mention here that they can run any model from the Hub the same way, as long as they pass the `hf-repo` and `hf-file` arguments? I would also add it at the beginning of the main README.
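For context, a minimal sketch of what that could look like (the repo and file names below are just example values, and the binary is `llama-server` in recent llama.cpp builds, `server` in older ones):

```sh
# Hypothetical example: serve a GGUF model pulled straight from the Hub.
# --hf-repo / --hf-file fetch the file on first run and cache it locally.
./llama-server \
    --hf-repo TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
    --hf-file mistral-7b-instruct-v0.2.Q4_K_M.gguf \
    --port 8080
```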
8. [Deploying to a HF Space](#deploying-to-a-hf-space)
9. [Building](#building)

## Quickstart
Maybe add a link to the llama.cpp server docs somewhere (it's quite nice).
I have a link to the HF docs here:
Line 36 in 50edca7
You can quickly start a locally running chat-ui & LLM text-generation server thanks to chat-ui's [llama.cpp server support](https://huggingface.co/docs/chat-ui/configuration/models/providers/llamacpp).
chat-ui/docs/source/configuration/models/providers/llamacpp.md
Lines 21 to 22 in 50edca7
A local LLaMA.cpp HTTP Server will start on `http://localhost:8080` (to change the port or any other default options, see the [LLaMA.cpp HTTP Server readme](https://github.com/ggerganov/llama.cpp/tree/master/examples/server)).
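As a quick sanity check once the server is up, something like this should work against the default port (the prompt text and token count here are arbitrary):

```sh
# Sketch: hit the llama.cpp server's /completion endpoint on the default port.
curl http://localhost:8080/completion \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 64}'
```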
Love it! Thanks!
Let's make the changes in a follow-up PR if needed.
* Revamp llama.cpp docs
* format
* update readme
* update index page
* update readme
* better formatting
* Update README.md (Co-authored-by: Victor Muštar <[email protected]>)
* Update README.md (Co-authored-by: Victor Muštar <[email protected]>)
* fix hashlink
* document llama hf args
* format

Co-authored-by: Victor Muštar <[email protected]>
Revamp llama.cpp server docs