This guide documents how to run Ollama with Open WebUI (https://github.com/open-webui/open-webui).

Tested Environment: macOS Sequoia (Apple M4/M2)

Install Open WebUI using Docker

This is the easier route for me, since it doesn't touch my local Python environment.

If Ollama is already installed on your computer:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
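Once the container is running, the UI should be reachable at http://localhost:3000 (the -p 3000:8080 flag maps it there). As a quick sanity check, assuming Ollama listens on its default port 11434 and the container is named open-webui as above:

curl http://localhost:11434/api/version
docker logs open-webui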

If Ollama is on a different server (set OLLAMA_BASE_URL to that server's URL):

docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
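Before starting the container, it's worth confirming the remote Ollama endpoint is reachable from your machine. Here example.com is just the placeholder from the command above; Ollama's /api/tags endpoint lists the models available on that server:

curl https://example.com/api/tags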

With Nvidia GPU support:

docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
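GPU passthrough requires the NVIDIA Container Toolkit on the host. One common way to verify it, sketched here with an arbitrary CUDA base image tag, is to run nvidia-smi in a throwaway container:

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi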

Make sure to include -v open-webui:/app/backend/data in the command so your database is persisted in a named Docker volume rather than lost when the container is removed.
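To see where Docker stores that volume, or to take a backup of it, something like the following works (the alpine image and the archive filename are just illustrative choices):

docker volume inspect open-webui
docker run --rm -v open-webui:/data -v "$PWD":/backup alpine tar czf /backup/open-webui-backup.tar.gz -C /data .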

Install Open WebUI with Bundled Ollama

With Nvidia GPU support:

docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
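With the bundled image, Ollama runs inside the same container, so models have to be pulled there. A minimal sketch, assuming the ollama binary is on the container's PATH (llama3.2 is just an example model name):

docker exec -it open-webui ollama pull llama3.2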

For CPU only:

docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
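Either way, once the container is up you can log in at http://localhost:3000. To confirm it is running and watch the startup logs:

docker ps --filter name=open-webui
docker logs -f open-webui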

References

Open WebUI: https://github.com/open-webui/open-webui