Ollama: install LLMs locally, with the Open WebUI web interface.

Ollama is a piece of software that lets you install and use large language models directly from the CLI. On top of it, a web interface can be installed to get a ChatGPT-like user experience, with saved prompt history and more.
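For example, on Linux Ollama can be installed with its official script, and a model can then be pulled and run straight from the terminal. The model names below are just examples; check the Ollama library for what is available:

```shell
# Install Ollama (Linux; on macOS use the installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Download a model and start an interactive chat in the terminal
ollama run llama3

# Other useful commands:
ollama pull phi3      # download a model without starting a chat
ollama list           # show locally installed models
ollama rm phi3        # delete a model to free disk space
```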

This is the list of supported models.

Llama 3             8B      4.7GB
Llama 3             70B     40GB
Phi-3               3.8B    2.3GB
Mistral             7B      4.1GB
Neural Chat         7B      4.1GB
Starling            7B      4.1GB
Code Llama          7B      3.8GB
Llama 2 Uncensored  7B      3.8GB
LLaVA               7B
Gemma               2B
Gemma               7B
Solar               10.7B   6.1GB

At first I tried to install it on my Orange Pi 3B with a 4-core CPU and 8 GB of RAM. It works, but even Llama 3 8B took far too long to answer a simple question.

Installation and usage on my MacBook Pro M1 with Llama 3 8B was pretty quick. Deciding to go bigger and downloading Llama 3 70B unfortunately resulted in an unusable experience. But anyway, now I have something to explore and a local personal assistant, which looks like this.

Getting a web interface takes just one more simple step. Once you have Docker installed, run this:

docker run -d -p 3080:8080 --add-host=host.docker.internal:host-gateway -v ./open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

And it will serve the interface locally at http://127.0.0.1:3080. Voila!
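Under the hood, Open WebUI talks to Ollama's REST API on port 11434, which you can also query directly. A quick sanity check that the backend is up, assuming the default port and a pulled llama3 model:

```shell
# List locally installed models via the Ollama API
curl http://localhost:11434/api/tags

# One-off generation request (stream disabled to get a single JSON reply)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```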

A couple of simple steps to get a personal assistant. Good luck.

UPDATE. A solution for the laziest users: wrap everything into one Docker Compose file and launch it with a single command:

version: '3.8'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
    restart: unless-stopped
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - ./open-webui:/app/backend/data
    depends_on:
      - ollama
    restart: unless-stopped

The next step is to launch it in detached mode, and that's it.

docker compose up -d
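Note that this compose file maps the UI to port 3000 (the "3000:8080" mapping), unlike port 3080 in the standalone command above, so it will be at http://127.0.0.1:3000. A few commands to verify and manage the stack; the container names match those set in the compose file:

```shell
docker compose ps                   # both containers should be "running"
docker compose logs -f open-webui   # follow the UI logs

# Pull a model inside the ollama container so it shows up in the UI
docker exec -it ollama ollama pull llama3

docker compose down                 # stop the stack; data stays in ./ollama and ./open-webui
```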

But I need to mention that all the information found here is for educational purposes only, and you do all of this at your own risk.
