Ollama on Windows | Run LLMs locally 🔥

Published on Jan 04, 2025

Introduction

This tutorial will guide you through running Large Language Models (LLMs) locally on your Windows machine using Ollama, a tool that lets developers and machine learning enthusiasts run LLMs on their own hardware and integrate them into their projects. We will also cover how to integrate Ollama with LangChain, extending what you can do with these models.

Step 1: Install Ollama on Windows

  1. Download Ollama

    • Visit the Ollama website (ollama.com).
    • Navigate to the downloads section and choose the Windows version.
  2. Run the Installer

    • Locate the downloaded installer file and double-click it to start the installation process.
    • Follow the on-screen instructions to complete the installation.
  3. Verify Installation

    • Open the Command Prompt.
    • Type the following command to confirm that Ollama is installed correctly:
      ollama --version
      
    • You should see Ollama's version number if the installation succeeded.

Step 2: Run a Large Language Model

  1. Choose a Model

    • Visit the Ollama model library (ollama.com/library) to select a suitable LLM for your needs.
  2. Download a Model

    • Use the following command in Command Prompt to download your chosen model:
      ollama pull <model_name>
      
    • Replace <model_name> with the name of the model you wish to download.
  3. Run the Model

    • After the model is downloaded, start it with:
      ollama run <model_name>
      
    • This command launches the model in an interactive prompt where you can begin chatting with it.
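Besides the interactive prompt, a running Ollama instance also exposes a REST API on localhost port 11434, which you can call from Python with only the standard library. This is a rough sketch: the model name "llama3.2" is just an example, so substitute a model you have actually pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # "stream": False requests a single JSON response instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running and the model pulled):
#   print(generate("llama3.2", "Why is the sky blue?"))
```

Disabling streaming keeps the example simple; for long generations you would typically leave streaming on and read the response line by line.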

Step 3: Integrate Ollama with LangChain

  1. Install LangChain

    • Ensure you have Python installed on your machine.
    • Open Command Prompt and install LangChain along with its community integrations package, which provides the Ollama wrapper:
      pip install langchain langchain-community
      
  2. Create a LangChain Script

    • Open your preferred code editor and create a new Python file.
    • Import the Ollama wrapper from the community integrations package:
      from langchain_community.llms import Ollama
      
  3. Initialize Ollama within LangChain

    • Add the following code to your script to set up Ollama:
      llm = Ollama(model="<model_name>")
      
    • Make sure to replace <model_name> with the model you downloaded earlier.
  4. Use the LLM in Your Application

    • You can now use the llm object to generate text or answer queries by calling invoke:
      response = llm.invoke("What is the capital of France?")
      print(response)
      
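Putting the pieces of this step together, a complete script might look like the following. This is a sketch assuming the langchain-community package is installed and, as before, "llama3.2" stands in for whichever model you pulled.

```python
def main() -> None:
    # Imported inside main so a missing package fails with a clear error
    # only when the script is actually run.
    from langchain_community.llms import Ollama

    # "llama3.2" is an example; use any model you pulled with `ollama pull`.
    llm = Ollama(model="llama3.2")

    response = llm.invoke("What is the capital of France?")
    print(response)

# To try it, make sure Ollama is running in the background, then call:
#   main()
```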

Conclusion

You have now installed Ollama on your Windows machine, run a Large Language Model, and integrated it with LangChain. This setup lets you use powerful language models locally for applications such as chatbots, content generation, and other NLP tasks. Experiment with different models and extend your LangChain scripts to explore further, and consult the Ollama and LangChain documentation for additional resources.