How to Run a Free LLM on Your Own Machine with Ollama and Ollamac in 5 Minutes
Introduction
This tutorial will guide you through the process of running a free open-source LLM (Large Language Model) on your local machine using Ollama and Ollamac in just five minutes. By the end of this guide, you'll be able to install various LLMs such as Llama, Phi, and DeepSeek, and use them through the Ollamac user interface.
Step 1: Install Ollama
- Visit the Ollama website: Go to ollama.com.
- Download the installation package:
- Choose the appropriate version for your operating system (Windows, macOS, or Linux).
- Run the installer:
- Follow the on-screen instructions to complete the installation.
- Verify installation:
- Open your terminal or command prompt and run:
ollama --version
- This command should display the installed version of Ollama, confirming a successful installation.
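On most systems the Ollama background server starts automatically after installation. As an optional extra check, you can ask that server for its version over its local HTTP API; this sketch assumes Ollama's default port, 11434:
curl http://localhost:11434/api/version
If this returns a small JSON object with a version field, the server is up and ready to serve models.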
Step 2: Install LLM Models
- Choose your desired model: Decide which LLM you want to install (Llama, Phi, or DeepSeek).
- Open your terminal:
- You can run the pull command from any directory; Ollama downloads models into its own data directory rather than the current folder.
- Run the installation command: Use the following command to install the desired model:
ollama pull <model_name>
- Replace <model_name> with llama, phi, or deepseek, depending on your choice; a worked example follows this list.
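For example, to fetch a model and confirm it responds before moving on to the UI (llama3 is used here as an illustrative tag; check the Ollama model library for the exact tag you want):
ollama pull llama3
ollama list
ollama run llama3 "Say hello in one short sentence."
Here pull downloads the model weights, list shows every model installed locally, and run answers a one-off prompt (or opens an interactive chat if you omit the prompt) directly in the terminal.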
Step 3: Install Ollamac
- Visit the Ollamac GitHub page: Go to github.com/kevinhermawan/Ollamac.
- Download the project:
- Click on the "Code" button and download the ZIP file or clone the repository using:
git clone https://github.com/kevinhermawan/Ollamac.git
- Install dependencies:
- Navigate to the Ollamac directory in your terminal and run:
npm install
- Run Ollamac:
- Start the application by executing:
npm start
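Putting Step 3 together, the full terminal sequence looks like this (it assumes Node.js and npm are already installed, and that the cloned directory is named Ollamac):
git clone https://github.com/kevinhermawan/Ollamac.git
cd Ollamac
npm install
npm start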
Step 4: Use the LLM through Ollamac
- Access the Ollamac UI:
- Open your web browser and go to http://localhost:3000 (or the specified port if different).
- Select your model:
- From the UI, choose the installed LLM model you wish to use.
- Input your queries:
- Type your questions or prompts into the provided text box and hit enter to get responses from the LLM.
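Whichever front end you use, it is ultimately talking to the Ollama server's HTTP API, so you can send the same kind of query straight from the terminal. A minimal sketch with curl, assuming Ollama's default port 11434 and llama3 as an illustrative model tag:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain what a local LLM is in one sentence.",
  "stream": false
}'
Setting "stream": false returns the whole answer as one JSON response instead of a token-by-token stream, which is easier to read by eye.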
Conclusion
In this tutorial, you learned how to set up and run open-source LLMs using Ollama and Ollamac. Key steps included installing Ollama, pulling the desired models, setting up Ollamac, and using the models through a user-friendly interface. Now that you have everything set up, explore the capabilities of the LLMs and integrate them into your projects! For further learning, consider following the channel for more tutorials or checking out additional resources linked in the description.