A bot plugin for LLM chat services with multi-model integration, extensibility, and various output formats.
Project Status: Steadily iterating towards the official 1.0 release (currently in Release Candidate stage)
Demo screenshots (see repository): preset system, plugin mode with streaming output, and image rendering.
- 🚀 Highly extensible (LangChain & Koishi APIs)
- 🎭 Custom conversation presets
- 🛡️ Rate limiting & blacklist system
- 🎨 Multi-format output (text, voice, image, mixed)
- 🧠 Context-aware with long-term memory
- 🔀 Three modes: chat, browsing, plugin
- 🔒 Content moderation via Koishi censor
- Room-based conversation system
- Content moderation
- TTS support (via vits service)
- Image rendering for replies
- Multi-model integration
- Preset system
- Conversation import/export (abandoned; pending the v1 refactoring)
- Streaming responses
- i18n support
Install the plugin in Koishi for basic functionality. For detailed setup, see our docs.
Model/Platform | Integration | Features | Notes |
---|---|---|---|
OpenAI | Official API | Customizable, plugin/browsing modes | Paid API |
Azure OpenAI | Official API | Similar to OpenAI | Paid API |
Google Gemini | Official API | Fast, outperforms GPT-3.5 | Requires account, may be charged |
Claude API | Official API | Large context, often beats GPT-3.5 | Paid, no Function Call |
Tongyi Qianwen | Official API | Free quota available | Slightly better than Zhipu |
Zhipu | Official API | Free tokens for new users | Better than Xunfei Spark |
Xunfei Spark | Official API | Free quota for new users | Similar to GPT-3.5 |
Wenxin Yiyan | Official API | Baidu's model | Slightly worse than Xunfei Spark |
Hunyuan | Official API | Tencent's model | Better than Wenxin Yiyan |
Ollama | Self-hosted | Open-source, CPU/GPU support | Requires backend setup |
GPT Free | Unofficial | Uses other websites' GPT models | Unstable, may fail |
ChatGLM | Self-hosted | Can be self-hosted | Requires backend, suboptimal performance |
RWKV | Self-hosted | Open-source model | Requires backend setup |
- Google (API)
- Bing (API & Web)
- DuckDuckGo (Lite)
- Tavily (API)
Since 1.0.0-alpha.10, presets are written in YAML for easier customization. The default preset is catgirl.yml.
Preset folder: <koishi_dir>/data/chathub/presets
For more info, see preset system docs.
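As an illustration, a minimal preset file might look like the sketch below. The key names (`keywords`, `prompts`, `role`, `content`) are an assumption based on the shipped catgirl.yml style; treat the preset system docs as the authoritative schema.

```yaml
# Hypothetical minimal preset sketch — verify key names against
# the preset system docs before relying on this.
keywords:
  - assistant          # trigger names used to select this preset
prompts:
  - role: system
    content: You are a helpful assistant. Answer concisely.
```

A file like this would be dropped into the preset folder above so the plugin can pick it up.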
Clone the repo:
# yarn
yarn clone ChatLunaLab/chatluna
# npm
npm run clone ChatLunaLab/chatluna
Update tsconfig.json:
{
"extends": "./tsconfig.base",
"compilerOptions": {
"baseUrl": ".",
"paths": {
"koishi-plugin-chatluna-*": ["external/chatluna/packages/*/src"]
}
}
}
Build the project:
# yarn
yarn workspace @root/chatluna-koishi build
# npm
npm run build -w @root/chatluna-koishi
Start development with `yarn dev` or `npm run dev`.
Note: HMR may not be fully compatible. If issues occur, rebuild and restart.
We need help with:
- Web UI
- HTTP Server
- Project Documentation
PRs and discussions are welcome!
Developed by ChatLunaLab.
ChatLuna is an LLM-based chatbot framework developed in collaboration with the open-source community to advance LLM technology. Users must comply with the relevant open-source licenses and must not use this project for potentially harmful purposes or for services that have not been evaluated for safety.
This project doesn't provide AI services directly. Users must obtain API access from AI service providers.
Users are responsible for complying with local laws and using locally available AI services.
The project isn't responsible for algorithm-generated results. All results and operations are the user's responsibility.
Users configure their own data storage. The project doesn't provide direct data storage.
The project isn't liable for user-caused data security issues, public opinion risks, or model misuse.
Thanks to these projects: