How to Run an LLM for Free on Your Own Machine with Ollama and Ollamac in 5 Minutes

3 min read · Published on Aug 31, 2025


Introduction

This tutorial walks you through running a free, open-source LLM (Large Language Model) on your local machine using Ollama and Ollamac in about five minutes. By the end of this guide, you'll be able to install models such as Llama, Phi, and DeepSeek and use them through the Ollamac user interface. Note that Ollamac is a macOS-only app; on Windows or Linux you can still follow Steps 1 and 2 and talk to your models through the Ollama CLI.

Step 1: Install Ollama

  1. Visit the Ollama website: Go to ollama.com.
  2. Download the installation package:
    • Choose the appropriate version for your operating system (Windows, macOS, or Linux).
  3. Run the installer:
    • Follow the on-screen instructions to complete the installation.
  4. Verify installation:
    • Open your terminal or command prompt and run:
      ollama --version
      
    • This command should display the installed version of Ollama, confirming a successful installation. (A quick check of the background server follows this list.)
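
Installing Ollama also starts a local server, which listens on port 11434 by default; Ollamac (Step 3) talks to this server. A minimal sanity check, assuming the default port:

  # Ask the local Ollama server for a heartbeat (default port 11434)
  curl http://localhost:11434
  # It should reply with the plain-text message "Ollama is running"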

Step 2: Install LLM Models

  1. Choose your desired model: Decide which LLM you want to install (Llama, Phi, or DeepSeek).
  2. Open your terminal:
    • You can run the command from any directory; Ollama downloads models into its own store (~/.ollama by default), not into the current folder.
  3. Run the installation command: Use the following command to install the desired model (a worked example follows this list):
    ollama pull <model_name>
    
    • Replace <model_name> with a tag from the Ollama model library, e.g. llama3.2, phi3, or deepseek-r1. Bare names like llama may not resolve, so check ollama.com/library for the exact tag.
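
For example, to pull a small Llama model and try it straight from the terminal (the tags below are from the Ollama library at the time of writing; confirm them at ollama.com/library):

  ollama pull llama3.2   # download the model weights into Ollama's local store
  ollama list            # verify the model now appears in your local model list
  ollama run llama3.2    # open an interactive chat session in the terminal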

Step 3: Install Ollamac

  1. Visit the Ollamac GitHub page: Go to github.com/kevinhermawan/Ollamac.
  2. Download the app:
    • Ollamac is a native macOS app, so there is nothing to install with npm. Download the latest build from the repository's Releases page and drag the app into your Applications folder. (A Homebrew alternative follows this list.)
  3. Build from source (optional):
    • If you prefer building it yourself, clone the repository and open the project in Xcode:
      git clone https://github.com/kevinhermawan/Ollamac.git
      
  4. Run Ollamac:
    • Launch the app from your Applications folder (or run it from Xcode if you built from source). Keep the Ollama server from Step 1 running in the background.
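
If you use Homebrew, there is also a cask for Ollamac (distribution channels can change, so verify the cask name against the project's README):

  # Install the Ollamac app via Homebrew (cask name per the Ollamac README)
  brew install --cask ollamac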

Step 4: Use the LLM through Ollamac

  1. Open Ollamac:
    • Launch the Ollamac app. It connects to the local Ollama server (http://localhost:11434 by default) rather than serving a page in your web browser.
  2. Select your model:
    • From the UI, choose one of the LLM models you pulled in Step 2.
  3. Input your queries:
    • Type your questions or prompts into the provided text box and hit Enter to get responses from the LLM. (If you'd rather script against the same local server, see the API example after this list.)
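
Ollamac is a front end for the HTTP API that Ollama already exposes locally, so you can also query a model without any UI. A minimal sketch, assuming you pulled llama3.2 in Step 2:

  # Send a one-shot prompt to the local Ollama API and wait for the full reply
  curl http://localhost:11434/api/generate -d '{
    "model": "llama3.2",
    "prompt": "Explain what a large language model is in one sentence.",
    "stream": false
  }'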

Conclusion

In this tutorial, you learned how to set up and run open-source LLMs locally with Ollama and Ollamac. The key steps were installing Ollama, pulling the models you want, installing Ollamac, and chatting with those models through a user-friendly interface. Now that everything is set up, explore what the models can do and integrate them into your own projects. For further learning, the Ollama documentation and the Ollamac README are good next stops.