

Ollama


Introduction to Ollama

Ollama is a platform for downloading and running large language models locally. It offers a simple content-generation interface, similar to OpenAI's, and lets you interact with a model directly, without any development experience. Ollama also supports hot-swapping models, giving users flexibility in choosing and comparing models.
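Under the hood, Ollama exposes a local REST API (on port 11434 by default) that the CLI and web frontends talk to. As a rough sketch, a non-streaming request to the /api/generate endpoint might look like this in Python; the `generate` helper and the model name are illustrative, and this assumes the Ollama server is already running locally:

```python
import json
import urllib.request

def build_generate_request(model, prompt):
    # Payload for Ollama's /api/generate endpoint;
    # stream=False asks for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt, host="http://localhost:11434"):
    # POST the prompt to the local Ollama server and return the generated text.
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("qwen", "Why is the sky blue?"))
```

This is the same interface OpenWebUI (covered below) uses, which is why no extra glue code is needed between the two.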

Installing Ollama

To install Ollama, visit the official website’s download page: Ollama Download Page. Here, you can choose the appropriate version for your operating system. Currently, Ollama supports macOS 11 Big Sur or higher.

For macOS Users

For macOS users, you can directly click the download link to get the Ollama zip package: Download for macOS.

For Windows Users

For Windows users, follow the installation steps on the download page above. During installation, you can register to receive notifications of new updates.

Using Ollama

After installation, you can view Ollama’s available commands through the command line. For example, in Windows PowerShell, type ollama to see help information and available commands.

PS C:\Users\Admin> ollama
Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.
PS C:\Users\Admin>

Downloading and Using Large Models

Ollama’s model library offers a variety of large language models for users to choose from. You can find and download the model you need by visiting Ollama Model Library.
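For example, assuming the model names below are available in the library, you can download them from the command line with `ollama pull` (these are only sample names; pick whatever model fits your hardware):

```shell
# Download the 2B-parameter Gemma model from the Ollama registry
ollama pull gemma:2b

# Pull the default (latest) tag of Llama 2
ollama pull llama2
```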

Viewing Installed Models

After installing the models, use the ollama list command to view the list of installed models.

PS C:\Users\Admin> ollama list
NAME            ID              SIZE    MODIFIED
gemma:2b        b50d6c999e59    1.7 GB  About an hour ago
llama2:latest   78e26419b446    3.8 GB  9 hours ago
qwen:latest     d53d04290064    2.3 GB  8 hours ago
PS C:\Users\Admin>

Running Models

You can run a specific model using the ollama run command. For example, ollama run qwen will start the qwen model.
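Two illustrative invocations (the prompt text is only an example): running without arguments opens an interactive chat session, while passing a quoted prompt returns a one-shot answer and exits:

```shell
# Start an interactive chat session with the qwen model
ollama run qwen

# Or pass a prompt directly for a one-shot completion
ollama run qwen "Summarize what a large language model is in one sentence."
```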

Introduction to OpenWebUI

OpenWebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI that can run fully offline and works with both Ollama and OpenAI-compatible APIs. It provides a visual interface that makes interacting with large language models more intuitive and convenient.

Installing OpenWebUI

  • If Ollama is installed on the same computer, use the following command:
   docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  • If Ollama is on a different server, change OLLAMA_BASE_URL to that server’s URL:
   docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
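Assuming the container started successfully, a quick sanity check (the container name matches the --name flag used above):

```shell
# Check that the OpenWebUI container is up and running
docker ps --filter "name=open-webui"

# The UI is then reachable in a browser at the host port mapped above:
#   http://localhost:3000
```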

At this point, you will see a “Select a model” dropdown where you can choose the model you just downloaded.

This gives you a GPT-like visual interface.

You can also add multiple models at once for a side-by-side comparative dialogue.
