Ralph for macOS

Complete guide for running Ralph on macOS (Intel and Apple Silicon).

Requirements

Ralph runs on macOS on Intel or Apple Silicon. You will need git, Node.js 18 or later, and Homebrew; the installer sets up Homebrew automatically if it is missing.

Quick Start

One-Line Install

curl -fsSL https://raw.githubusercontent.com/craigm26/Ralph/main/install.sh | bash

Manual Installation

# Clone repository
git clone https://github.com/craigm26/Ralph.git
cd Ralph

# Run installer
chmod +x install.sh
./install.sh

Agent Options

1. Gemini CLI

Google’s Gemini CLI is the easiest to set up:

# Install via Ralph installer
./install.sh --agent gemini

# Or manually
npm install -g @google/gemini-cli
gemini auth login

2. Ollama (Local Models)

Run AI models locally on your Mac:

# Install via Ralph installer
./install.sh --agent ollama

# Or manually via Homebrew
brew install ollama

# Start Ollama
ollama serve

# Pull a coding model
ollama pull codellama:13b

Mac Type        RAM     Recommended Model
Apple Silicon   8GB     codellama:7b, phi:latest
Apple Silicon   16GB+   codellama:13b, deepseek-coder:6.7b
Apple Silicon   32GB+   qwen2.5-coder:14b, codellama:34b
Intel           16GB+   codellama:7b

Apple Silicon Macs run models on GPU (Metal) for excellent performance.
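
If you are unsure which row applies to your Mac, two standard macOS commands report the architecture and installed RAM (nothing here is Ralph-specific):

# arm64 = Apple Silicon, x86_64 = Intel
uname -m

# Installed RAM in GB
echo "$(($(sysctl -n hw.memsize) / 1024 / 1024 / 1024)) GB"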

3. Claude Code CLI

Anthropic’s Claude Code CLI requires an Anthropic API key:

# Install via Ralph installer
./install.sh --agent claude

# Or manually
npm install -g @anthropic-ai/claude-code

# Set API key
export ANTHROPIC_API_KEY="your-key-here"
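
The export above only lasts for the current shell. To persist the key across sessions, one option is to append it to your shell profile; ~/.zprofile is assumed here because zsh is the default shell on modern macOS:

# Persist the key for future terminal sessions
echo 'export ANTHROPIC_API_KEY="your-key-here"' >> ~/.zprofile
source ~/.zprofile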

Running Ralph

Basic Usage

# Navigate to your project
cd ~/Projects/my-app

# Create task file
cat > RALPH_TASK.md << 'EOF'
---
task: Add user authentication
test_command: npm test
---

# Task: User Authentication

Implement user login and registration.

## Success Criteria

1. [ ] POST /auth/login endpoint works
2. [ ] POST /auth/register creates new users
3. [ ] JWT tokens are generated
4. [ ] All tests pass
EOF

# Run Ralph
./ralph.sh

With Specific Agent

# Use Gemini
./ralph.sh --agent gemini

# Use Ollama with specific model
./ralph.sh --agent ollama --model codellama:13b

# Use Claude
./ralph.sh --agent claude

Configuration

Create ~/.ralph/config.json:

{
    "defaultAgent": "gemini",
    "maxIterations": 20,
    "agents": {
        "ollama": {
            "endpoint": "http://localhost:11434/api/chat",
            "defaultModel": "codellama:13b"
        }
    },
    "git": {
        "autoCommit": true,
        "commitPrefix": "ralph:"
    }
}
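
On a fresh install the ~/.ralph directory may not exist yet. One way to create the file from the shell, mirroring the heredoc pattern used for RALPH_TASK.md above (keep only the keys you want to override):

mkdir -p ~/.ralph
cat > ~/.ralph/config.json << 'EOF'
{
    "defaultAgent": "gemini",
    "maxIterations": 20
}
EOF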

Apple Silicon Notes

GPU Acceleration

On Apple Silicon, Ollama runs models on the GPU through Apple’s Metal API automatically; no extra configuration is needed.
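
To confirm a model is actually on the GPU, load one and check the PROCESSOR column that ollama ps reports while it is resident:

# Load a model, then see where it is running
ollama run codellama:7b "hello" > /dev/null
ollama ps   # PROCESSOR should read something like "100% GPU"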

Memory Management

Ollama manages memory automatically, loading models on demand and unloading them after a few minutes of inactivity.
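
If you want models to stay resident longer (or be freed sooner), Ollama reads the OLLAMA_KEEP_ALIVE environment variable; this is an Ollama setting rather than something Ralph manages:

# Keep idle models loaded for 30 minutes before unloading
OLLAMA_KEEP_ALIVE=30m ollama serve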

Performance Tips

  1. Use quantized models: codellama:13b-instruct-q4_K_M
  2. Close Chrome/Electron apps: Free up memory
  3. Run Ollama in background: ollama serve &
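
For example, combining the first and third tips (codellama:13b-instruct-q4_K_M is one of the quantized tags Ollama publishes; the --model flag is the same one shown earlier):

# Run Ollama in the background, pull a quantized model, and point Ralph at it
ollama serve &
ollama pull codellama:13b-instruct-q4_K_M
./ralph.sh --agent ollama --model codellama:13b-instruct-q4_K_M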

Intel Mac Notes

Homebrew Setup

Ralph uses Homebrew for package management:

# Check if installed
brew --version

# Install if needed (Ralph installer does this automatically)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Apple Silicon only: add Homebrew to PATH
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Troubleshooting

“command not found: brew”

Apple Silicon Macs need Homebrew in PATH:

# Add to shell config
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
source ~/.zprofile

Ollama Not Responding

# Check if running
pgrep ollama

# Start if needed
ollama serve

# Or restart
pkill ollama && ollama serve
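
You can also check whether the API itself is reachable; Ollama listens on localhost:11434 (the same endpoint as in the config above) and answers a plain GET on the root path:

# Should print "Ollama is running" when the server is up
curl http://localhost:11434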

Node.js Version Issues

# Use Homebrew Node
brew install node

# Check version
node --version  # Should be 18+

Permission Denied on ralph.sh

chmod +x ralph.sh
./ralph.sh

Model Too Slow

  1. Use smaller model: ollama pull phi:latest
  2. Close memory-intensive apps
  3. Try cloud agent (Gemini is free)
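
For example, using the same flags shown earlier:

# Switch to a smaller local model
ollama pull phi:latest
./ralph.sh --agent ollama --model phi:latest

# Or fall back to the free cloud agent for this run
./ralph.sh --agent gemini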

Terminal Apps

Works with any terminal, including the built-in Terminal.app, iTerm2, and the integrated terminals in VS Code and Cursor.

IDE Integration

VS Code

# Open terminal in VS Code
# Run Ralph from integrated terminal
./ralph.sh

Cursor

# Same as VS Code
./ralph.sh

Launchd Service (Optional)

Run Ollama automatically at startup:

# Create LaunchAgent
cat > ~/Library/LaunchAgents/com.ollama.server.plist << 'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.ollama.server</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/ollama</string>
        <string>serve</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
EOF

# Load it
launchctl load ~/Library/LaunchAgents/com.ollama.server.plist
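
Note that /opt/homebrew/bin/ollama is the Apple Silicon path; on an Intel Mac, Homebrew installs under /usr/local, so the ProgramArguments entry would be /usr/local/bin/ollama. To confirm the agent is loaded:

# The label should appear in the list of loaded jobs
launchctl list | grep com.ollama.server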

Next Steps

  1. Read QUICKSTART.md for task examples
  2. Check LOCAL_MODELS.md for model comparisons
  3. Join our community on GitHub Discussions