ai-assistant

LLM-powered virtual assistants

🔧 Getting Started

Set up your development environment using conda, which you can install via the Miniconda or Anaconda distribution.

conda create --name assistant python=3.11
conda activate assistant
pip install -r requirements.txt

Tracing

We shall use Phoenix for LLM tracing. Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. Before running the app, start a Phoenix server:

python3 -m phoenix.server.main serve
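
Alternatively, you can launch Phoenix from within Python; a minimal sketch, assuming the arize-phoenix package (which provides the phoenix module used above) is installed:

import phoenix as px

# Start a local Phoenix server in the background and print the UI URL.
session = px.launch_app()
print(session.url)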

💻 Vision Assistant App

Download ggml-model and mmproj-model from mys/ggml_llava-v1.5-7b and save them in models/llava-7b/. Update CLIP_MODEL_PATH and LLAVA_MODEL_PATH in config.yaml accordingly.
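
For reference, the relevant entries in config.yaml might look like the following; the file names are assumptions based on the quantized weights published in mys/ggml_llava-v1.5-7b:

CLIP_MODEL_PATH: models/llava-7b/mmproj-model-f16.gguf
LLAVA_MODEL_PATH: models/llava-7b/ggml-model-q4_k.gguf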

Deploy the LLaVA model as an endpoint.

python -m serve_llava
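
Once the endpoint is up, you can smoke-test it before launching the UI. The host, port, route, and payload below are illustrative assumptions; check serve_llava for the actual interface:

import base64

import requests

# Hypothetical request shape; adjust to match serve_llava.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = requests.post(
    "http://localhost:8000/generate",  # assumed host, port, and route
    json={"prompt": "Describe this image.", "image": image_b64},
)
print(response.json())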

Run the Streamlit app and select Vision Assistant.

streamlit run app.py

💻 ReAct Agent App

This app demonstrates using an agent to implement the ReAct logic. We shall use tools such as Tavily, Wikipedia, News API, and Wolfram Alpha. The LLM is Gemini-Pro. The following API keys are required:

  • Google: GOOGLE_API_KEY
  • Tavily: TAVILY_API_KEY
  • News API: NEWSAPI_API_KEY
  • Wolfram Alpha: WOLFRAM_ALPHA_APPID

Save these keys in .env.
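
A minimal .env template with the required keys (replace the placeholder values with your own):

GOOGLE_API_KEY=your-google-api-key
TAVILY_API_KEY=your-tavily-api-key
NEWSAPI_API_KEY=your-newsapi-api-key
WOLFRAM_ALPHA_APPID=your-wolfram-alpha-appid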

Run the Streamlit app and select ReAct Agent.

streamlit run app.py
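
For a sense of how such an agent can be wired together, here is a minimal LangChain-style sketch. This is not the app's actual implementation; it assumes the langchain, langchain-google-genai, and langchain-community packages and shows only the Tavily tool:

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_google_genai import ChatGoogleGenerativeAI

# Gemini-Pro as the reasoning LLM; reads GOOGLE_API_KEY from the environment.
llm = ChatGoogleGenerativeAI(model="gemini-pro")

# One of the tools listed above; reads TAVILY_API_KEY from the environment.
tools = [TavilySearchResults(max_results=3)]

# Standard ReAct prompt (thought / action / observation loop) from the LangChain hub.
prompt = hub.pull("hwchase17/react")

agent = create_react_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
print(executor.invoke({"input": "What is in the news today?"}))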
