-
I have llama.cpp working locally on a Windows machine, running ggml-vic7b-uncensored-q5_0.bin and ggml-vic13b-uncensored-q5_0.bin. Sometimes I want to give it a file path as input so it can read the file. Is this even possible? How can I do this? It simply says it cannot read from the path, even when I run it as Administrator.
Replies: 3 comments
-
On its own, LLaMA is a text model that can do nothing but generate text. What you're looking for is probably something like Auto-GPT. Be warned that these types of applications don't work very well even with the GPT-4 model, so using LLaMA with them probably won't accomplish much. If you really want to try it, some UIs such as text-generation-webui can emulate the OpenAI API, so that would be a good place to start.
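To illustrate the suggestion above: once a UI like text-generation-webui exposes an OpenAI-compatible endpoint, any OpenAI-style client can talk to the local model. The sketch below uses only the Python standard library; the base URL `http://localhost:5000/v1` and the model name `local-model` are assumptions that depend on how your server is configured, not values from this thread.

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model"):
    # Build an OpenAI-style chat completion payload.
    # "local-model" is a placeholder name; many local servers ignore it.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_local_llm(prompt, base_url="http://localhost:5000/v1"):
    # POST the payload to an OpenAI-compatible /chat/completions endpoint.
    # Assumes a local server (e.g. text-generation-webui's API mode) is running.
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With a server actually running, `ask_local_llm("Hello")` would return the model's reply; without one, the request simply fails, which is why tools built on this API can't do anything llama.cpp itself doesn't serve.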
-
Llama.cpp can only run language models. Its sole purpose is to generate text; it does not have access to your computer's file system. There are projects, such as LangChain and Auto-GPT, that use the language model's reasoning skills to do what you're asking and expose the user's file system to the program.
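The way those projects expose the file system is a tool loop: the program, not the model, reads the file, then feeds the contents back into the prompt. A minimal sketch of that pattern (the `READ <path>` convention and `model_fn` callable are hypothetical, for illustration only):

```python
def run_with_file_tool(model_fn, user_prompt):
    # model_fn is any callable mapping a prompt string to a response string
    # (in practice, a call into llama.cpp or an API; stubbed in tests).
    reply = model_fn(user_prompt)
    # Hypothetical convention: the model asks for a file with "READ <path>".
    if reply.startswith("READ "):
        path = reply[len("READ "):].strip()
        try:
            # The host program does the actual file access, not the model.
            with open(path, encoding="utf-8") as f:
                contents = f.read()
        except OSError as e:
            contents = f"<error: {e}>"
        # Feed the file contents back so the model can produce a final answer.
        reply = model_fn(f"{user_prompt}\nFile contents:\n{contents}")
    return reply
```

The key point for the original question: the model itself never touches the disk, so running llama.cpp as Administrator changes nothing; the surrounding program has to do the reading.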
-
Thank you for your suggestions. I will research more in that direction.