Open WebUI + Ollama: Build Your Own Personal AI for Developers

Published on Jan 04, 2025

Introduction

This tutorial walks you through installing and setting up Open WebUI and Ollama: Ollama runs large language models locally, and Open WebUI provides a chat interface on top of it. Together they let developers build a private, self-hosted AI assistant. This guide is designed to simplify the process, making it accessible even to those with limited experience.

Step 1: Installation on macOS

To install Ollama and Open WebUI on macOS, follow these steps:

  1. Install Ollama:

    • Download the macOS app from the official Ollama website.
    • Open the downloaded file, drag Ollama to your Applications folder, and launch it.
  2. Install Open WebUI:

    • Open WebUI is distributed as a Python package, so install it from Terminal (Python 3.11 is recommended):
      pip install open-webui
  3. Verify installation:

    • In Terminal, check that each tool responds:
      ollama --version
      open-webui --version
      

Step 2: Installation on Windows

For Windows users, the process is similar:

  1. Install Ollama:

    • Download the Windows installer from the official Ollama website.
    • Double-click the installer file and follow the prompts.
  2. Install Open WebUI:

    • In Command Prompt, install the Python package (Python 3.11 is recommended):
      pip install open-webui
  3. Check successful installation:

    • Open Command Prompt and input:
      ollama --version
      open-webui --version
      

Step 3: Installation on Linux

Installing on Linux is done from the command line:

  1. Install Ollama with the official script:

    • Open a terminal and run:
      curl -fsSL https://ollama.com/install.sh | sh
  2. Install Open WebUI:

    • Install the Python package (or use Docker Compose as in Step 4):
      pip install open-webui
  3. Access via remote SSH:

    • If you're working on a remote server, make sure you have SSH access to run the installation commands.
  4. Confirm installation:

    • Check the installation by running:
      ollama --version
      open-webui --version
      

Step 4: Using Docker Compose for Open WebUI and Ollama

To install both Open WebUI and Ollama using Docker Compose:

  1. Create a docker-compose.yml file:

    • Open a text editor and create a file named docker-compose.yml using the official images. Open WebUI serves on port 8080 inside its container (mapped here to host port 3000), the OLLAMA_BASE_URL variable points it at the Ollama service, and the named volume keeps downloaded models across restarts:
      services:
        ollama:
          image: ollama/ollama:latest
          volumes:
            - ollama:/root/.ollama
          ports:
            - "11434:11434"
        open-webui:
          image: ghcr.io/open-webui/open-webui:main
          ports:
            - "3000:8080"
          environment:
            - OLLAMA_BASE_URL=http://ollama:11434
      volumes:
        ollama:
      
  2. Run Docker Compose:

    • In the terminal, navigate to the directory containing the docker-compose.yml file and run (use docker-compose up -d on older installations):
      docker compose up -d
      
  3. Check if the services are running:

    • Use the command:
      docker ps
      
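Once both containers are up, you can verify from the host that each service responds before opening the UI. The sketch below is a minimal Python check; the host ports (11434 for Ollama, 3000 for Open WebUI) and the /health path are assumptions based on a typical setup, so adjust them to your own mappings:

```python
import urllib.error
import urllib.request

def is_up(url: str) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Assumed host ports -- adjust to whatever you mapped in docker-compose.yml.
ENDPOINTS = {
    "ollama": "http://localhost:11434/api/tags",   # Ollama's model-list endpoint
    "open-webui": "http://localhost:3000/health",  # assumed health path
}

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {'up' if is_up(url) else 'down'}")
```

If either service reports down, inspect the container logs with docker compose logs.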

Step 5: Installing Models

After setting up Open WebUI and Ollama, you can install AI models:

  1. Select a model:

    • If your system is limited in resources, start with smaller models such as llama3.2:1b or gemma2:2b.
  2. Install the model:

    • Use the following command to install a model:
      ollama pull llama3.2:1b
      
  3. Verify the installation:

    • Check installed models with:
      ollama list
      
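The advice above — smaller models for machines with fewer resources — can be captured in a tiny helper. The RAM thresholds below are rough, hypothetical guidelines for illustration, not official requirements:

```python
def suggest_model(ram_gb: float) -> str:
    """Suggest an Ollama model tag for a machine with the given RAM.

    Thresholds are rough, hypothetical guidelines, not official requirements.
    """
    if ram_gb < 8:
        return "llama3.2:1b"   # ~1B parameters, runs on very modest hardware
    if ram_gb < 16:
        return "gemma2:2b"     # a small step up in quality and memory use
    return "llama3.1:8b"       # a mid-size model for machines with headroom

print(suggest_model(4))   # → llama3.2:1b
```

Whatever you pick, ollama pull and ollama list work the same way for every tag.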

Step 6: Using the Chat Functionality

Once the models are installed, you can start interacting with them:

  1. Launch the chat feature:

    • Open your web browser and go to the Open WebUI address: the host port you mapped in Docker (e.g. http://localhost:3000), or http://localhost:8080 if you started it with open-webui serve.
  2. Start chatting:

    • Input your queries and observe the responses generated by the AI model.

Step 7: Exploring the API

To integrate the AI capabilities into your applications, familiarize yourself with the API:

  1. Access the API documentation:

    • Open WebUI exposes interactive Swagger documentation (typically at the /docs path) describing the available endpoints.
  2. Test API calls:

    • Use a REST client to send requests and verify the responses.
  3. Example API call:

    • Here's a sample request against Ollama's REST API (default port 11434); the model field is required, and stream: false returns a single JSON object instead of a token stream:
      POST http://localhost:11434/api/generate
      Content-Type: application/json
      
      {
        "model": "llama3.2:1b",
        "prompt": "What is the capital of France?",
        "stream": false
      }
      
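The same request can be sent from a script. A minimal Python sketch, assuming Ollama is listening on its default port 11434 and that llama3.2:1b has already been pulled:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "llama3.2:1b") -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    # stream=False returns one JSON object instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send a generate request and return the model's text response."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("What is the capital of France?"))
```

Swap the host for your Open WebUI proxy or a remote server as needed.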

Conclusion

In this tutorial, you learned how to install Open WebUI and Ollama on various operating systems, set up models, and use the chat features and API. This foundational knowledge will allow you to explore AI development further. As a next step, consider experimenting with different models and integrating the API into your own applications to enhance your projects with AI capabilities.