Open WebUI + Ollama: Build a Personal AI for Developers
Introduction
This tutorial will guide you through the installation and setup of Open WebUI and Ollama, two powerful tools for developers looking to create personal AI applications. With the rise of AI technologies, understanding how to implement them effectively is crucial. This guide is designed to simplify the process, making it accessible even for those with limited experience.
Step 1: Installation on macOS

To install Open WebUI and Ollama on macOS, follow these steps:

- Download the necessary files:
  - Visit the official Open WebUI and Ollama websites to download the installation files.
- Install like a normal application:
  - Open the downloaded file and drag the application to your Applications folder.
  - Follow any on-screen instructions to complete the installation.
- Verify installation:
  - Open Terminal and type the following command to check if the installation was successful:

    open-webui --version
Step 2: Installation on Windows

For Windows users, the installation process is similar to that of typical software:

- Download the installer:
  - Go to the Open WebUI and Ollama websites to download the Windows installer.
- Run the installer:
  - Double-click the installer file and follow the prompts to install the software.
- Check successful installation:
  - Open Command Prompt and input:

    open-webui --version
Step 3: Installation on Linux

Installing on Linux involves using the command line:

- Use curl for installation:
  - Open a terminal and run the following command to install Open WebUI:

    curl -sSL https://example.com/install.sh | bash

- Access via remote SSH:
  - If you're using a remote server, make sure you have SSH access to run the installation commands.
- Confirm installation:
  - Check the installation by running:

    open-webui --version
Step 4: Using Docker Compose for Open WebUI and Ollama

To install both Open WebUI and Ollama using Docker Compose:

- Create a docker-compose.yml file:
  - Open a text editor and create a file named docker-compose.yml with the following content:

    version: '3'
    services:
      open-webui:
        image: open-webui:latest
        ports:
          - "80:80"
      ollama:
        image: ollama:latest

- Run Docker Compose:
  - In the terminal, navigate to the directory containing the docker-compose.yml file and run:

    docker-compose up -d

- Check if the services are running:
  - Use the command:

    docker ps
Step 5: Installing Models

After setting up Open WebUI and Ollama, you can install AI models:

- Select a model:
  - If your system is limited in resources, start with smaller models such as llama3.2:1b or gemma2:2b.
- Install the model:
  - Use the following command to install a model:

    ollama pull llama3.2:1b

- Verify the installation:
  - Check installed models with:

    ollama list
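The `ollama list` check can also be done programmatically. Below is a minimal sketch, assuming Ollama's HTTP API is listening on its default port 11434 and exposes the standard `/api/tags` endpoint; adjust the base URL if your setup differs.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default API port (assumption)

def model_names(tags_payload: dict) -> list[str]:
    """Extract model names from the JSON returned by /api/tags."""
    return [m["name"] for m in tags_payload.get("models", [])]

def list_installed_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Rough equivalent of `ollama list`, via the HTTP API."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_names(json.load(resp))
```

Calling `list_installed_models()` after the `ollama pull` above should include "llama3.2:1b" in the result.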
Step 6: Using the Chat Functionality

Once the models are installed, you can start interacting with them:

- Launch the chat feature:
  - Open your web browser and go to the Open WebUI address (usually http://localhost).
- Start chatting:
  - Input your queries and observe the responses generated by the AI model.
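The same conversation can be driven from code instead of the browser. Here is a minimal sketch against Ollama's own chat endpoint (an assumption: Ollama listening on its default port 11434 with `/api/chat` available; Open WebUI's own address and routes may differ).

```python
import json
import urllib.request

def build_chat_request(model: str, user_message: str) -> bytes:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # ask for a single complete reply, not a stream
    }).encode("utf-8")

def chat(model: str, user_message: str,
         base_url: str = "http://localhost:11434") -> str:
    """Send one user message and return the model's reply text."""
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=build_chat_request(model, user_message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

For example, `chat("llama3.2:1b", "Hello!")` would return the same kind of reply you see in the web chat.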
Step 7: Exploring the API

To integrate the AI capabilities into your applications, familiarize yourself with the API:

- Access the API documentation:
  - Look for the Swagger UI in the Open WebUI to understand the available endpoints.
- Test API calls:
  - Use a REST client to send requests and verify the responses.
- Example API call:
  - Here's a sample of how to generate a response:

    POST http://localhost/api/generate
    Content-Type: application/json

    { "prompt": "What is the capital of France?" }
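The request above can be sent from Python with only the standard library. A minimal sketch, assuming the endpoint shown in the example requires no authentication (your instance may require an API key header):

```python
import json
import urllib.request

API_URL = "http://localhost/api/generate"  # endpoint from the example above

def build_generate_request(prompt: str) -> urllib.request.Request:
    """Build the POST request shown in the example above."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(prompt: str) -> str:
    """Send the request and return the raw response body."""
    with urllib.request.urlopen(build_generate_request(prompt)) as resp:
        return resp.read().decode("utf-8")
```

A REST client such as the Swagger UI mentioned above is handy for discovering the exact request shape before hard-coding it like this.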
Conclusion
In this tutorial, you learned how to install Open WebUI and Ollama on various operating systems, set up models, and use the chat features and API. This foundational knowledge will allow you to explore AI development further. As a next step, consider experimenting with different models and integrating the API into your own applications to enhance your projects with AI capabilities.