# Setting Up Ollama for Hands-On AI
This guide will walk you through setting up Ollama as a local, privacy-focused option for powering your Hands-On AI projects.
Note: HandsOnAI works with any OpenAI-compatible provider including OpenAI, OpenRouter, Together AI, and others. This guide focuses specifically on Ollama for users who want to run models locally.
## Who Is This Guide For?

This guide is written for:

- Beginners who want to run Hands-On AI with local LLMs using Ollama
- Educators seeking privacy-focused solutions for classroom use
- Students who prefer not to send data to cloud providers
- Anyone who wants to understand how local LLMs work
If you're comfortable using cloud providers like OpenAI or OpenRouter, you can skip this guide and configure HandsOnAI with your preferred provider instead.
## Ollama vs Other Providers

| Feature | Local Ollama | Cloud Providers |
|---|---|---|
| Privacy | ✅ All data stays local | ❌ Data sent to external servers |
| Cost | ✅ Free after initial setup | ❌ Pay per token/request |
| Internet | ✅ Works offline | ❌ Requires internet connection |
| Setup | ⚠️ Requires installation | ✅ Just need API key |
| Performance | ⚠️ Limited by your hardware | ✅ High-end GPUs available |
| Latest Models | ⚠️ Community releases | ✅ Cutting-edge models first |
## What is Ollama?
Ollama is a lightweight tool that allows you to run large language models (LLMs) locally on your computer. It makes it easy to download and run models like Llama 3 without needing specialized hardware or complex setup.
## System Requirements
- Windows, macOS, or Linux computer
- At least 8GB RAM (16GB+ recommended for better performance)
- At least 10GB of free disk space
- An internet connection (for the initial model download)
## Installation Guide
### For macOS

1. Download the Ollama installer from [ollama.ai](https://ollama.ai)
2. Open the downloaded file and drag the Ollama app to your Applications folder
3. Launch Ollama from your Applications folder
4. Ollama will run in your menu bar (look for the llama icon)
### For Windows

1. Download the Windows installer from [ollama.ai](https://ollama.ai)
2. Run the installer and follow the on-screen instructions
3. Once installed, Ollama will start automatically and run in the system tray
### For Linux

1. Run the following command in your terminal:

   ```bash
   curl -fsSL https://ollama.ai/install.sh | sh
   ```

2. Start the Ollama service:

   ```bash
   ollama serve
   ```
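On any platform, you can confirm the CLI is on your PATH and the server is answering before moving on. This is a quick sketch: the `/api/version` endpoint is part of Ollama's local REST API, but the exact response format may vary between Ollama versions.

```bash
# Check that the ollama binary is installed and on your PATH
ollama --version

# Ask the local server for its version (it listens on port 11434 by default)
curl http://localhost:11434/api/version
```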
## Downloading Your First Model
After installing Ollama, you'll need to download at least one model. For beginners, we recommend the llama3 model, which offers a good balance of performance and quality.
1. Open a terminal or command prompt
2. Run the following command:

   ```bash
   ollama pull llama3
   ```

3. Wait for the download to complete (this may take a few minutes depending on your internet speed)
The first time you run this command, it will download the model files (about 4GB). Once downloaded, the model will be available locally on your computer.
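To confirm the download succeeded, you can list the models Ollama has stored locally:

```bash
# List locally installed models with their sizes and tags
ollama list
```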
## Testing Your Installation
Let's make sure everything is working properly:
1. In your terminal or command prompt, run:

   ```bash
   ollama run llama3 "Hello, how are you today?"
   ```

2. You should see a response from the model
If you received a response, congratulations! Ollama is set up correctly and ready to use with HandsOnAI.
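You can also exercise the same server over HTTP, which is closer to how Hands-On AI talks to it. A minimal sketch using Ollama's `/api/generate` endpoint (`"stream": false` asks for a single JSON object instead of a streamed response):

```bash
# Send one prompt to the local Ollama server and print the JSON reply
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello, how are you today?",
  "stream": false
}'
```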
## Using Ollama with HandsOnAI
HandsOnAI is designed to work with Ollama by default. As long as Ollama is running in the background, HandsOnAI will automatically connect to it at http://localhost:11434.
No additional configuration is required; HandsOnAI detects the local Ollama server automatically.
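If you want your script to fail fast when the server isn't running, a small pre-flight check like the sketch below can help. It assumes only that Ollama is serving on the default `http://localhost:11434`; the `get_response` call is the same one used in the test below.

```python
import urllib.request
import urllib.error

from hands_on_ai.chat import get_response

def ollama_is_up(url: str = "http://localhost:11434") -> bool:
    """Return True if something is answering on the default Ollama port."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if ollama_is_up():
    print(get_response("Say hello in five words."))
else:
    print("Ollama doesn't appear to be running - start it and try again.")
```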
## Testing Hands-On AI from Python
Once you've started a model with `ollama run llama3`, you can test Hands-On AI in Python:

```python
from hands_on_ai.chat import get_response

print(get_response("What is 3 + 4?"))
```

You should get a real-time AI response!
## Advanced: Custom Model Host

If you're running Ollama on a different host or port (e.g. remote or Docker), you can create a config file or set an environment variable to tell Hands-On AI where to send requests.
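One minimal approach uses the `HANDS_ON_AI_SERVER` environment variable that also appears in the provider-switching examples later in this guide; the host and port below are placeholders for wherever your Ollama server actually lives. For the config-file route, see docs/configuration.md for the exact key names.

```bash
# Point Hands-On AI at an Ollama instance on another machine (example address)
export HANDS_ON_AI_SERVER=http://192.168.1.50:11434
```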
## Using API Key Authentication
If your Ollama server requires API key authentication, you can configure it in two ways:

1. Using environment variables:

   ```bash
   export HANDS_ON_AI_API_KEY=your-api-key
   ```

2. Using a config file. Create or edit `~/.hands-on-ai/config.json`:

   ```json
   {
     "api_key": "your-api-key"
   }
   ```
See: docs/configuration.md for complete configuration options.
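If you prefer to configure this from Python, you can set the same variable in code, as the provider examples later in this guide do. This sketch assumes the variable is read when the request is made, so it is set before the library is imported:

```python
import os

# Same variable as the shell export above, set from Python instead
os.environ['HANDS_ON_AI_API_KEY'] = 'your-api-key'

from hands_on_ai.chat import get_response

print(get_response("Hello!"))
```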
## Troubleshooting
"Connection refused" error¶
If Hands-On AI shows a connection error: 1. Make sure Ollama is running 2. Check that you haven't changed the default port (11434) 3. On Windows or macOS, you might need to restart the Ollama application
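A quick way to check the first two steps from a terminal (a running Ollama server typically answers its root URL with a short status message; treat the exact text as version-dependent):

```bash
# If the server is up on the default port, this prints a short status message
curl http://localhost:11434

# If it is not running, start the server manually and watch its log output
ollama serve
```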
### Slow responses

- Try a smaller model like `llama3:8b`, which requires fewer resources
- Close other resource-intensive applications
- Ensure your computer meets the minimum requirements
"Out of memory" error¶
- Try a smaller model like
llama3:8b - Increase your system's virtual memory (swap file)
- Close other applications to free up RAM
## Available Models
Here are some models you can use with HandsOnAI via Ollama:
- `llama3` - The recommended default model for most users
- `llama3:8b` - A smaller, faster version if you have limited resources
- `codellama` - Specialised for programming tasks (recommended for code-related exercises)
- `mistral` - An alternative model with good performance
- `phi` - A smaller model with good capabilities for simpler tasks
To download any of these models, use:

```bash
ollama pull model_name
```
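To make Hands-On AI use one of these models instead of the default, you can set the `HANDS_ON_AI_MODEL` environment variable shown in the provider examples below; this sketch assumes the variable selects an Ollama model by name.

```bash
# Download an alternative model, then tell Hands-On AI to use it
ollama pull codellama
export HANDS_ON_AI_MODEL=codellama
```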
## Need More Help?
For more detailed information about Ollama, visit their official documentation at ollama.ai/docs.
## Related Docs
- Chat Module Guide - Learn about using the chat module with Ollama
- RAG Module Guide - Learn about using the RAG module with Ollama
- Agent Module Guide - Learn about using the agent module with Ollama
- Configuration Guide - Advanced configuration options
- Education Guide - Use Hands-On AI in educational settings
## Beyond Ollama
Once comfortable with local Ollama, you can easily switch HandsOnAI to use other providers:
### Cloud Providers (for more advanced models)

```python
import os

# Switch to OpenAI
os.environ['HANDS_ON_AI_SERVER'] = 'https://api.openai.com'
os.environ['HANDS_ON_AI_API_KEY'] = 'your-openai-key'
os.environ['HANDS_ON_AI_MODEL'] = 'gpt-4'
```
### Multiple Providers in One Project

```python
import os

from hands_on_ai.chat import get_response

# Use Ollama for privacy-sensitive tasks
os.environ['HANDS_ON_AI_SERVER'] = 'http://localhost:11434'
privacy_response = get_response("Analyze this personal data...")

# Switch to cloud for advanced reasoning
os.environ['HANDS_ON_AI_SERVER'] = 'https://api.openai.com'
os.environ['HANDS_ON_AI_API_KEY'] = 'your-key'
advanced_response = get_response("Solve this complex math problem...")
```
See the main README for full provider compatibility information.
Now that you have Ollama set up, you're ready to use Hands-On AI! You can start with local privacy-focused projects and expand to cloud providers as needed.