Support for local / self-hosted LLMs such as LLaMA.
There should be a configuration option where you provide the API endpoint for your LLM.
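As a sketch of what that configuration could look like (variable names are illustrative — the official OpenAI SDKs read `OPENAI_BASE_URL` to override the endpoint, and the URL below is a placeholder for whatever local server you run):

```
# Sketch of a .env for a local, OpenAI-compatible server
OPENAI_API_KEY=sk-local-placeholder
OPENAI_BASE_URL=http://localhost:8000/v1
```

Most local servers ignore the API key entirely, but the SDKs still require one to be set.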
This could be an OpenAI-style API, and if so I would highly recommend using LiteLLM for this, as it's a quick and easy solution that is being widely adopted.
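For example, a minimal LiteLLM proxy config (model names and the Ollama backend here are just one possible setup) maps a local model to an OpenAI-compatible endpoint:

```yaml
# Sketch of a LiteLLM proxy config.yaml
# "local-llama" is a made-up alias; the backend could be any provider LiteLLM supports.
model_list:
  - model_name: local-llama
    litellm_params:
      model: ollama/llama2
      api_base: http://localhost:11434
```

Running `litellm --config config.yaml` then exposes an OpenAI-compatible `/v1` endpoint that this project could point at.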
FYI, there are ways you can actually use open-source models by leveraging the OpenAI npm library / API spec. See for example this and this. I hope you find this useful!
Is your feature request related to a problem? Please describe.
I went to use this project and found that, despite the name, it doesn't seem to actually use or support LLaMA.
It appears to be locked into only using OpenAI's proprietary SaaS product.
e.g. https://github.com/run-llama/chat-llamaindex/blob/main/.env.template#L1
Describe the solution you'd like
Describe alternatives you've considered
Maybe rename the project to chat-openai-index or similar if it hasn't got anything to do with LLaMA, as the current name may confuse folks.
Additional context
N/A