
OwnLlama

A local LLM helper using Ollama for simple prompts, web searches, and asking questions about your own PDFs.

Welcome to your OwnLlama! To get started with your llama, check out the Getting started section.

Getting started

Prerequisites

You must have Docker installed.
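
OwnLlama also talks to a local Ollama instance on the host (that is what the --add-host flag in the run command below is for), so you will likely want Ollama installed and running as well. A quick sanity check, assuming Ollama's default port of 11434:

curl http://localhost:11434
# Prints "Ollama is running" when the server is up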

  1. Create the containers
docker run --rm -d --name rag-qdrant -p 6333:6333 -p 6334:6334 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant

docker run -d -p 3000:3000 \
    --add-host=host.docker.internal:host-gateway \
    --name OwnLlama \
    --restart always \
    ghcr.io/eliottwantz/ownllama:latest
  2. Open your browser and navigate to http://localhost:3000
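
If you prefer managing both containers with one command, the docker run invocations above translate to a minimal Docker Compose sketch (the file name docker-compose.yml and the service names are illustrative, not part of this repo):

services:
  qdrant:
    image: qdrant/qdrant
    container_name: rag-qdrant
    ports:
      - "6333:6333"   # REST API
      - "6334:6334"   # gRPC
    volumes:
      - ./qdrant_storage:/qdrant/storage
  ownllama:
    image: ghcr.io/eliottwantz/ownllama:latest
    container_name: OwnLlama
    restart: always
    ports:
      - "3000:3000"
    extra_hosts:
      - "host.docker.internal:host-gateway"

Start both with docker compose up -d. Note that the --rm flag from the original Qdrant command has no Compose equivalent; remove the containers with docker compose down instead.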

Development

Prerequisites

You must have Bun installed.

  1. Clone this repo
git clone https://github.com/eliottwantz/OwnLlama
  2. Create the Qdrant container
docker run --rm -d --name rag-qdrant -p 6333:6333 -p 6334:6334 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant
  3. Run the development server
bun run dev
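
If the dev server cannot reach Qdrant, confirm the container is up by querying its REST API (assuming the default port mapping from the command above):

curl http://localhost:6333/collections
# Returns JSON such as {"result":{"collections":[]},"status":"ok",...} on a fresh instance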
