How to Install Bolt.new AI Locally: Complete Guide 2025
Published on Feb 03, 2025
Introduction
This tutorial walks you through installing Bolt.new AI locally with Ollama models, using the community fork bolt.new-any-llm, which lets the Bolt.new interface run against locally hosted LLMs. You will work through each step, including troubleshooting common installation issues, and finish with a working local Bolt.new setup.
Step 1: Install Git
- Download Git from the official website: https://git-scm.com/downloads.
- Follow the installation prompts and ensure it is added to your system PATH.
- Verify the installation by opening a command prompt and typing:
git --version
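A successful installation prints a version string, for example (your version number will differ):
git version 2.43.0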
Step 2: Install Python
- Download Python from the official site: https://www.python.org/downloads/.
- During installation, check the box to add Python to your PATH.
- Confirm the installation by running:
python --version
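On macOS and many Linux distributions the interpreter is invoked as python3 instead, so if the command above is not found, try:
python3 --version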
Step 3: Install Node.js
- Visit the Node.js website: https://nodejs.org/.
- Download the LTS version for your system and complete the installation.
- Check your installation with:
node --version
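As with Git and Python, a successful installation prints a version string, for example (yours will depend on the release you chose):
v20.11.0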
Step 4: Install Visual Studio Code
- Download Visual Studio Code from: https://code.visualstudio.com/.
- Install it by following the on-screen instructions.
Step 5: Install Ollama
- Go to the Ollama website: https://ollama.com/.
- Follow the instructions to install Ollama on your system.
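After installation, confirm the CLI is on your PATH; on Windows and macOS the installer also typically sets up a background service that listens on port 11434:
ollama --version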
Step 6: Install an Ollama Model
- Use the following command to download and run the desired model:
ollama run qwen2.5-coder
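Note that ollama run downloads the model on first use and then opens an interactive chat session (type /bye to exit). If you only want to download the model without chatting, you can use:
ollama pull qwen2.5-coder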
Step 7: Set the Execution Policy
- Open PowerShell as an administrator and run:
Set-ExecutionPolicy RemoteSigned
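This allows locally created scripts, such as the project's tooling, to run. If you prefer not to change the machine-wide policy, you can scope the change to your user account instead:
Set-ExecutionPolicy -Scope CurrentUser RemoteSigned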
Step 8: Clone the Bolt.new Repository
- In your command prompt, execute:
git clone https://github.com/coleam00/bolt.new-any-llm.git
- Navigate to the cloned directory:
cd bolt.new-any-llm
Step 9: Verify npm
- npm is installed alongside Node.js, so no separate installation is needed. Confirm it is available by running:
npm --version
Step 10: Install pnpm and the Project Dependencies
- Install pnpm globally using:
npm install -g pnpm
- Then, still inside the bolt.new-any-llm directory, install the project's dependencies:
pnpm install
Step 11: Configure the .env File
- Rename the .env.example file to .env.
- Open the file in VS Code and configure the necessary environment variables, as sketched below.
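For a local Ollama setup, the main value to set is the Ollama endpoint. The variable name below comes from the repository's .env.example at the time of writing, so double-check your copy of the file in case it has changed:
OLLAMA_API_BASE_URL=http://127.0.0.1:11434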
Step 12: Create a Modelfile
- In the root directory of the project, create a file named Modelfile.
- Add the following content:
FROM qwen2.5-coder
PARAMETER num_ctx 32768
- The num_ctx parameter raises the model's context window to 32,768 tokens; the default context window is too small for Bolt.new's long prompts.
Step 13: Create a Custom Ollama Model
- Run the following command to create your custom model:
ollama create qwen2.5-coder-bolt -f Modelfile
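Confirm that the custom model was created and is listed alongside the base model:
ollama list
The output should include an entry named qwen2.5-coder-bolt.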
Step 14: Run the Bolt.new UI
- Start the Bolt.new interface by executing:
pnpm run dev
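Watch the terminal output for the local address of the dev server; for Vite-based projects such as this one it is usually http://localhost:5173. Open that address in your browser to use the Bolt.new interface.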
Step 15: Troubleshooting Common Issues
- Deleting an Ollama Model: Use the command:
ollama rm [exact model name]
- Ollama Model Not Visible: Ensure that the Ollama service is running and check the localhost address:
http://127.0.0.1:11434
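You can also verify that the Ollama server is reachable by sending it a plain HTTP request; a running instance replies with a short status message:
curl http://127.0.0.1:11434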
Conclusion
You have now successfully installed and set up Bolt.new AI locally with Ollama models. If you encounter any issues, refer back to the troubleshooting steps above. As a next step, you can start exploring the functionalities of Bolt.new and consider experimenting with other models or features. Happy coding!