Fully local tool calling with Ollama
Introduction
In this tutorial, we will explore how to use the new Ollama partner package for fully local tool calling with the Llama 3 Groq Tool Use model. Tool calling extends what local large language models (LLMs) can do by letting them select the appropriate tool and fill in its arguments. We will build a simple tool calling agent in LangGraph that combines a web search tool and a vectorstore retrieval tool, demonstrating the whole process in a local environment.
Step 1: Set Up Your Environment
To begin, ensure you have the necessary tools and packages installed on your local machine.
- Install Ollama: Visit the Ollama website and follow the installation instructions for your operating system.
- Install LangChain: Use pip to install LangChain along with the Ollama partner package and LangGraph. Run the following command in your terminal:
pip install langchain langchain-ollama langgraph
- Clone the LangGraph Repository: Fetch the latest version of the LangGraph examples using:
git clone https://github.com/langchain-ai/langgraph.git
- Navigate to the Examples Directory: Change into the tutorials directory of the cloned repository:
cd langgraph/examples/tutorials
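- Pull the Model: Download the tool-use model used later in this tutorial so Ollama can serve it locally:
ollama pull llama3-groq-tool-use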
Step 2: Create Your Tool Calling Agent
Next, we will set up a tool calling agent using the Llama-3 Groq model.
- Import Required Libraries: In your Jupyter Notebook or Python script, start by importing the necessary libraries. The chat model comes from the Ollama partner package:
from langchain_ollama import ChatOllama
from langchain_core.tools import Tool
- Define Your Tools: Create the tools you want your agent to use. For example, you might define a web search tool and a vectorstore retrieval tool. The functions below are placeholders; in practice you would wire in a real search API and retriever:
web_search_tool = Tool(
    name="WebSearch",
    func=lambda q: f"Results for: {q}",  # placeholder; call a real search API here
    description="Search the web for information.",
)
vectorstore_tool = Tool(
    name="VectorStore",
    func=lambda q: f"Documents matching: {q}",  # placeholder; query a real retriever here
    description="Retrieve information from a vector store.",
)
- Initialize the Model: Set up the Llama 3 Groq Tool Use model for your agent:
llm = ChatOllama(model="llama3-groq-tool-use")
- Combine Tools and Model: Create an agent that combines the tools with the language model. LangGraph provides a prebuilt ReAct-style agent for this:
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(llm, [web_search_tool, vectorstore_tool])
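Putting the pieces together, a minimal end-to-end run looks like the sketch below. The question is illustrative; the prebuilt agent follows LangGraph's messages-based state convention:
from langchain_core.messages import HumanMessage

# Ask a question; the model decides whether and how to call a tool.
result = agent.invoke({"messages": [HumanMessage(content="Search for the latest AI news")]})

# The final answer is the last message in the returned state.
print(result["messages"][-1].content)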
Step 3: Implement Tool Calling Logic
Now that your agent is set up, implement the logic for calling the tools.
- Define Input Handling: Create a function that takes user input and routes it to a tool with simple keyword matching (a deliberately naive router, useful for testing the tools directly):
def handle_input(user_input):
    if "search" in user_input:
        return web_search_tool.invoke(user_input)
    elif "retrieve" in user_input:
        return vectorstore_tool.invoke(user_input)
    else:
        return "Input not recognized."
- Test Your Agent: Run a few test inputs to see how your agent responds:
print(handle_input("search for the latest news"))
print(handle_input("retrieve documents related to AI"))
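Keyword routing is only a fallback; the point of a tool-use model is that it chooses tools itself. As a minimal sketch, you can bind the tools directly to the chat model and inspect which tool it selects and what arguments it fills in (the query string is illustrative):
# Let the model, not keyword matching, decide which tool to call.
llm_with_tools = llm.bind_tools([web_search_tool, vectorstore_tool])

response = llm_with_tools.invoke("Find recent articles about local LLM tool calling")

# Each tool call records the chosen tool's name and the model-generated arguments.
for call in response.tool_calls:
    print(call["name"], call["args"])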
Step 4: Fine-Tuning and Enhancements
To improve your agent's performance, consider the following enhancements:
- Adjust Tool Descriptions: Make sure the descriptions of your tools are clear and informative to help the model choose appropriately.
- Incorporate Error Handling: Add error handling to manage unexpected inputs or tool failures gracefully; see the sketch after this list.
- Test Extensively: Ensure your agent handles various inputs effectively by testing it with different queries.
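As a sketch of the error-handling point above, you can wrap each tool call so a failure becomes a readable message rather than a crash (safe_invoke is a hypothetical helper introduced here for illustration):
def safe_invoke(tool, user_input):
    # Guard a single tool call; surface failures as text instead of raising.
    try:
        return tool.invoke(user_input)
    except Exception as exc:
        return f"Tool {tool.name} failed: {exc}"

print(safe_invoke(web_search_tool, "search for the latest news"))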
Conclusion
In this tutorial, we covered the setup and implementation of a fully local tool calling agent using the Ollama partner package with the Llama 3 Groq Tool Use model. You learned how to create and configure tools, initialize the model, and handle user input for effective tool use.
Next steps include exploring additional tools, integrating more complex functionality, or tuning your agent for specific use cases. For further reading, check out the blog post and the code repository for more examples and advanced configurations.