# Ollama Copilot

Ollama Copilot is a Windows Forms UI for Ollama on Windows. Copilot responses can be automatically forwarded to other applications, just like other paid copilots. Ollama Copilot also includes speech to text, text to speech, and OCR, all built on free open-source software.

Check out Releases for the latest installer.
## Overview of Ollama Copilot

- Ollama Copilot v1.0.0
- YouTube Transcripts v1.0.1
- Speech to Text v1.0.2
- Text to Speech v1.0.3
- Optical Character Recognition v1.0.4

## Installer

- Ollama Copilot Installer (Windows) - Ollama Copilot v1.0.4
## Dependencies

- Note: you can skip the Visual Studio build dependencies if you used the Ollama Copilot Installer.
- Open `WinForm_Ollama_Copilot.sln` in Visual Studio 2022.
- The project uses Newtonsoft JSON, so right-click the solution in Solution Explorer and select `Restore NuGet Packages`.
- Build and run the application.
## Install Ollama

- Install the Ollama Windows preview.
- Install a model to enable the Chat API:
  - Install the `llama3` model: `ollama run llama3`
  - Install the `llama2` model: `ollama run llama2`
  - Install the `qwen` model: `ollama run qwen:4b`
  - Install the `llava` model: `ollama run llava`
  - Install the Phi 3 model: `ollama run phi3`
  - Install the `gemma` model (7B default): `ollama run gemma`
  - You can remove the `gemma` 7B model: `ollama rm gemma`
  - To install the smaller `gemma` 2B model: `ollama run gemma:2b`
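Once a model is installed, Ollama serves its Chat API on port 11434, which is what the Copilot UI talks to. As a quick sanity check outside the UI, a minimal stdlib-only Python sketch can call the documented `/api/chat` route directly (the model name is whichever one you pulled above):

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one JSON reply instead of a token stream
    }

def chat(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """POST a prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_chat_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]

# chat("llama3", "Why is the sky blue?")  # requires the Ollama server running
```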
## Install Ollama with Docker

- Install Docker Desktop and start the Ollama container:
  `docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`
- Install the `llama2` model to enable the Chat API: `docker exec -it ollama ollama run llama2`
- Install the `llava` model: `docker exec -it ollama ollama run llava`
- Install the `gemma` model: `docker exec -it ollama ollama run gemma`
- Install the `mixtral` model (requires 48GB of VRAM): `docker exec -it ollama ollama run mixtral`
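Whether Ollama runs natively or in the container above, it listens on port 11434, and its `/api/tags` route lists the installed models. A small stdlib-only sketch to confirm the models you pulled with `docker exec` are visible (the sample response below is abbreviated for illustration):

```python
import json
import urllib.request

def parse_model_names(tags_json: str) -> list:
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def list_models(host: str = "http://localhost:11434") -> list:
    """Query the running Ollama server for its installed models."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return parse_model_names(resp.read().decode("utf-8"))

# Abbreviated example of an /api/tags body and the names it yields:
sample = '{"models": [{"name": "llama2:latest"}, {"name": "llava:latest"}]}'
print(parse_model_names(sample))  # ['llama2:latest', 'llava:latest']
```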
## Speech to Text

- Install Ubuntu 22.04.3 LTS with WSL2.
- Set up Ubuntu to host the local Whisper server:

```
sudo apt-get update
sudo apt install python3-pip
sudo apt install uvicorn
pip3 install FastAPI[all]
pip3 install uvloop
pip3 install numpy
sudo apt-get install curl
sudo apt-get install ffmpeg
pip3 install ffmpeg
pip3 install scipy
pip3 install git+https://github.com/openai/whisper.git
```

- Run the server:
  `python3 -m uvicorn WhisperServer:app --reload --port 11437`
Source: Whisper

There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and inference speed relative to the large model; actual speed may vary depending on many factors including the available hardware.

| Size   | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed |
|--------|------------|--------------------|--------------------|---------------|----------------|
| tiny   | 39 M       | `tiny.en`          | `tiny`             | ~1 GB         | ~32x           |
| base   | 74 M       | `base.en`          | `base`             | ~1 GB         | ~16x           |
| small  | 244 M      | `small.en`         | `small`            | ~2 GB         | ~6x            |
| medium | 769 M      | `medium.en`        | `medium`           | ~5 GB         | ~2x            |
| large  | 1550 M     | N/A                | `large`            | ~10 GB        | 1x             |

- Run a test transcription: `python3 WhisperTest.py audio.mp3`
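The VRAM column above determines which Whisper model your machine can host. As a worked example of reading the table, a small helper (hypothetical, not part of the repository) that picks the largest model fitting a given VRAM budget:

```python
# Approximate VRAM requirements in GB, taken from the table above,
# ordered smallest to largest.
WHISPER_VRAM_GB = [
    ("tiny", 1),
    ("base", 1),
    ("small", 2),
    ("medium", 5),
    ("large", 10),
]

def largest_model_for(vram_gb: float) -> str:
    """Return the largest Whisper model whose approximate VRAM need fits."""
    fitting = [name for name, need in WHISPER_VRAM_GB if need <= vram_gb]
    if not fitting:
        raise ValueError("Not enough VRAM for even the tiny model (~1 GB)")
    return fitting[-1]  # list is ordered smallest to largest

print(largest_model_for(6))   # medium
print(largest_model_for(12))  # large
```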
## Text to Speech

- Install Python from the Microsoft Store app on the Windows host machine, which has access to the sound card.
- Open the Windows command prompt and install the dependencies:

```
pip3 install uvicorn
pip3 install FastAPI[all]
pip3 install pyttsx3
```

- Launch the Pyttsx3 server in the Windows command prompt:
  `python3 -m uvicorn Pyttsx3Server:app --reload --port 11438`
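The Pyttsx3 server wraps offline Windows speech synthesis, which is why it must run on the host with the sound card. The core of such a server is only a few pyttsx3 calls; a minimal sketch of that core (the repository's actual `Pyttsx3Server.py` may differ):

```python
def speak(text: str, rate: int = 200) -> None:
    """Speak text through the default Windows voice using pyttsx3."""
    import pyttsx3  # installed above; uses the SAPI5 driver on Windows
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)  # speaking rate in words per minute
    engine.say(text)
    engine.runAndWait()  # blocks until playback finishes

# speak("Hello from Ollama Copilot")  # requires a sound device
```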
## Speech Commands

- "Prompt clear" - Clears the prompt text area
- "Prompt submit" - Submits the prompt
- "Response play" - Speaks the response
- "Response clear" - Clears the response text area
## Optical Character Recognition

- Install `pytesseract` and the server dependencies:

```
pip3 install uvicorn
pip3 install FastAPI[all]
pip install pytesseract
```

- Install Tesseract-OCR - Installation
- Windows Installer - Tesseract at UB Mannheim
- Add Tesseract to your path: `C:\Program Files\Tesseract-OCR`
- Run the server:
  `python3 -m uvicorn TesseractOCRServer:app --reload --port 11439 --log-level error`
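The OCR server is built on pytesseract, which shells out to the `tesseract.exe` you added to your path above. A minimal sketch of the underlying call (the repository's `TesseractOCRServer.py` may differ; Pillow is assumed available, as pytesseract depends on it for opening images):

```python
import shutil

def tesseract_available() -> bool:
    """Check that the tesseract binary is reachable on PATH."""
    return shutil.which("tesseract") is not None

def ocr_image(path: str) -> str:
    """Run Tesseract OCR on an image file and return the extracted text."""
    import pytesseract     # installed above
    from PIL import Image  # Pillow, a pytesseract dependency
    # If tesseract is not on PATH, point pytesseract at it explicitly:
    # pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
    return pytesseract.image_to_string(Image.open(path))

# ocr_image("screenshot.png")  # requires Tesseract-OCR installed
```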