Run DeepSeek Locally for Free!

Published on Jan 31, 2025


Introduction

In this tutorial, you'll learn how to run DeepSeek locally, for free, on your Windows machine. The guide walks you through installing the necessary software (Ollama, Docker Desktop, and Open-WebUI), enabling you to run large language models (LLMs) such as DeepSeek R1 and Llama 3.2/3.3. The whole process takes about 10-15 minutes and works even on modest hardware.

Step 1: Check Hardware Requirements

Ensure your system meets the following requirements:

  • Windows 10 or later
  • At least 8 GB of RAM (16 GB or more recommended for larger models)
  • Sufficient disk space for installations and models
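Quantized 7B-class models typically occupy roughly 4-5 GB each on disk, so plan for tens of gigabytes if you want several. A minimal sketch for checking free space with Python's standard library (the 20 GB threshold here is an arbitrary assumption, not a hard requirement):

```python
import shutil

def free_gb(path: str = ".") -> float:
    """Return free disk space at `path` in gigabytes."""
    return shutil.disk_usage(path).free / (1024 ** 3)

if __name__ == "__main__":
    # 20 GB is a rough safety margin for a few quantized models.
    needed = 20.0
    available = free_gb()
    print(f"Free space: {available:.1f} GB")
    if available < needed:
        print(f"Warning: less than {needed:.0f} GB free; large models may not fit.")
```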

Step 2: Install Windows Subsystem for Linux (WSL) 2

  1. Open PowerShell as an administrator.
  2. Run the following command to enable WSL:
    wsl --install
    
  3. Restart your computer when prompted.

Step 3: Install Ollama

  1. Visit the Ollama website (ollama.com) to download the Windows installer.
  2. Run the installer and follow the on-screen instructions.
  3. After installation, verify it works by running:
    ollama --version
    
  4. If the command prints a version number, Ollama is installed correctly.

Step 4: Install Docker Desktop

  1. Go to the Docker Desktop website and download the installer.
  2. Run the installer and follow the prompts to complete the installation.
  3. After installation, start Docker Desktop and ensure it runs in the background.
  4. Verify installation by running:
    docker --version
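After Steps 3 and 4, both `ollama` and `docker` should be on your PATH. A small sketch that checks for them in one go (pure standard library, no assumptions beyond the two tool names):

```python
import shutil

def missing_tools(tools: list[str]) -> list[str]:
    """Return the subset of `tools` not found on the system PATH."""
    return [t for t in tools if shutil.which(t) is None]

if __name__ == "__main__":
    missing = missing_tools(["ollama", "docker"])
    if missing:
        print("Missing from PATH:", ", ".join(missing))
    else:
        print("All required tools are installed.")
```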
    

Step 5: Install Open-WebUI

  1. Open a command prompt or terminal window.
  2. With Docker Desktop running, pull and start the official Open-WebUI container:
    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
  3. Once the container is running, open http://localhost:3000 in your browser to access the web interface.

Step 6: Download Models

  1. Choose the models you want to download (e.g., DeepSeek R1, Llama 3.2/3.3).
  2. Use Ollama to pull the models by running commands such as:
    ollama pull deepseek-r1
    ollama pull llama3.2
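The pulls above can also be scripted, which is handy when setting up several models at once. A minimal sketch that wraps the same CLI calls (the model tags are the ones used in this step; the PATH check keeps the script from failing on machines where Ollama isn't installed yet):

```python
import shutil
import subprocess

# Model tags as used in this tutorial.
MODELS = ["deepseek-r1", "llama3.2"]

def pull_command(model: str) -> list[str]:
    """Build the argument list for `ollama pull <model>`."""
    return ["ollama", "pull", model]

if __name__ == "__main__":
    if shutil.which("ollama"):
        for model in MODELS:
            # check=True raises if a pull fails (e.g. unknown tag, no network).
            subprocess.run(pull_command(model), check=True)
    else:
        print("ollama not found on PATH; install it first (Step 3).")
```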
    

Step 7: Run DeepSeek R1

  1. Start the model by running:
    ollama run deepseek-r1
    
  2. Type a prompt at the interactive prompt to chat with the model; enter /bye to exit.
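Besides the interactive prompt, Ollama also serves a local HTTP API on port 11434, which is what Open-WebUI talks to behind the scenes. A minimal sketch that sends a single non-streaming prompt to the `/api/generate` endpoint using only the standard library (this assumes the Ollama server is running locally and the `deepseek-r1` model has been pulled):

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request for the Ollama API."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    """POST a prompt to the local Ollama server and return the response text."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    try:
        print(generate("deepseek-r1", "Explain WSL in one sentence."))
    except OSError as exc:
        # URLError is a subclass of OSError; the server may not be running.
        print(f"Could not reach the Ollama server: {exc}")
```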

Step 8: Explore Uncensored Models

  1. If you're interested in uncensored or community-tuned variants, browse the Ollama model library for available options.
  2. Pull and run them the same way as in Steps 6 and 7.

Conclusion

You have now set up DeepSeek locally on your Windows machine. With Ollama, Docker Desktop, and Open-WebUI installed, you can explore a wide range of large language models. From here, try different models and configurations to see what works best for your needs. Happy experimenting!