Complete guide for running Ralph on macOS (Intel and Apple Silicon).
# One-line install
curl -fsSL https://raw.githubusercontent.com/craigm26/Ralph/main/install.sh | bash
# Or clone the repository and install manually
git clone https://github.com/craigm26/Ralph.git
cd Ralph
# Run installer
chmod +x install.sh
./install.sh
Google’s Gemini CLI is the easiest agent to set up:
# Install via Ralph installer
./install.sh --agent gemini
# Or manually
npm install -g @google/gemini-cli
gemini auth login
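To confirm the CLI landed on your PATH (assuming it supports the usual --version flag):
# Verify the install
gemini --version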
Run AI models locally on your Mac:
# Install via Ralph installer
./install.sh --agent ollama
# Or manually via Homebrew
brew install ollama
# Start Ollama
ollama serve
# Pull a coding model
ollama pull codellama:13b
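You can confirm the model finished downloading with Ollama's list command:
# List locally available models
ollama list
Which model to pull depends on your hardware: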
| Mac Type | RAM | Recommended Models |
|---|---|---|
| Apple Silicon | 8GB | codellama:7b, phi:latest |
| Apple Silicon | 16GB+ | codellama:13b, deepseek-coder:6.7b |
| Apple Silicon | 32GB+ | qwen2.5-coder:14b, codellama:34b |
| Intel | 16GB+ | codellama:7b |
Apple Silicon Macs run models on the GPU via Metal for excellent performance.
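If you are unsure which row of the table applies, you can read installed RAM via sysctl; the mapping below is a rough sketch based on the table above:
# Suggest a model based on installed RAM (illustrative helper, not part of Ralph)
ram_gb=$(( $(sysctl -n hw.memsize) / 1073741824 ))
if [ "$ram_gb" -ge 32 ]; then
  echo "${ram_gb}GB RAM: try qwen2.5-coder:14b"
elif [ "$ram_gb" -ge 16 ]; then
  echo "${ram_gb}GB RAM: try codellama:13b"
else
  echo "${ram_gb}GB RAM: try codellama:7b"
fi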
# Install via Ralph installer
./install.sh --agent claude
# Or manually
npm install -g @anthropic-ai/claude-code
# Set API key
export ANTHROPIC_API_KEY="your-key-here"
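The export above only lasts for the current shell session; to persist the key, append it to your shell profile (zsh is the macOS default):
# Persist the key across sessions
echo 'export ANTHROPIC_API_KEY="your-key-here"' >> ~/.zshrc
source ~/.zshrc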
# Navigate to your project
cd ~/Projects/my-app
# Create task file
cat > RALPH_TASK.md << 'EOF'
---
task: Add user authentication
test_command: npm test
---
# Task: User Authentication
Implement user login and registration.
## Success Criteria
1. [ ] POST /auth/login endpoint works
2. [ ] POST /auth/register creates new users
3. [ ] JWT tokens are generated
4. [ ] All tests pass
EOF
# Run Ralph
./ralph.sh
# Use Gemini
./ralph.sh --agent gemini
# Use Ollama with specific model
./ralph.sh --agent ollama --model codellama:13b
# Use Claude
./ralph.sh --agent claude
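With autoCommit enabled (see the config below), Ralph prefixes its commits with ralph:, so you can review what a run changed:
# Review Ralph's commits
git log --oneline | grep "ralph:"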
Create ~/.ralph/config.json:
{
  "defaultAgent": "gemini",
  "maxIterations": 20,
  "agents": {
    "ollama": {
      "endpoint": "http://localhost:11434/api/chat",
      "defaultModel": "codellama:13b"
    }
  },
  "git": {
    "autoCommit": true,
    "commitPrefix": "ralph:"
  }
}
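A stray comma or quote will break the config, so it is worth validating the JSON; if you have python3 available, its json.tool module works for this:
# Validate the config file
python3 -m json.tool ~/.ralph/config.json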
Models automatically use Metal on Apple Silicon; no extra configuration is needed. Ollama also manages memory automatically. If RAM is tight, pull a quantized variant and run the server in the background:
# Quantized models use less memory
ollama pull codellama:13b-instruct-q4_K_M
# Run the server in the background
ollama serve &
Ralph uses Homebrew for package management:
# Check if installed
brew --version
# Install if needed (Ralph installer does this automatically)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Apple Silicon: Add to path
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
If the brew command is not found, Apple Silicon Macs need Homebrew added to PATH:
# Add to shell config
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
source ~/.zprofile
# Check if Ollama is running
pgrep ollama
# Start if needed
ollama serve
# Or restart
pkill ollama && ollama serve
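If restarting does not help, confirm something is listening on Ollama's default port and that the chat endpoint from the config answers (this assumes codellama:13b has been pulled):
# Check the default Ollama port
lsof -i :11434
# Send a minimal request to the chat endpoint
curl http://localhost:11434/api/chat -d '{
  "model": "codellama:13b",
  "messages": [{"role": "user", "content": "Say hello"}],
  "stream": false
}'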
# If Node is missing or outdated, use Homebrew's Node
brew install node
# Check version
node --version # Should be 18+
# If Ralph exits with "permission denied", make the script executable
chmod +x ralph.sh
./ralph.sh
# Pull a smaller model if memory is tight
ollama pull phi:latest
Ralph works with any terminal:
# Open terminal in VS Code
# Run Ralph from integrated terminal
./ralph.sh
# Any other terminal (iTerm2, Terminal.app) works the same way
./ralph.sh
Run Ollama automatically at startup:
# Create LaunchAgent
cat > ~/Library/LaunchAgents/com.ollama.server.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.ollama.server</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/ollama</string>
        <string>serve</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
EOF
# Load it
launchctl load ~/Library/LaunchAgents/com.ollama.server.plist
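You can check that launchd picked the agent up (the label matches the plist above):
# Verify the LaunchAgent is loaded
launchctl list | grep com.ollama.server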