Ralph for Linux - Setup Guide

Complete guide to running Ralph on Linux systems, including Ubuntu, Debian, and Raspberry Pi OS.

Supported Systems

| Distribution | Version | Tested |
|---|---|---|
| Ubuntu | 20.04+ | Yes |
| Debian | 11+ | Yes |
| Raspberry Pi OS | Bookworm | Yes |
| Fedora | 38+ | Yes |
| Arch Linux | Rolling | Yes |

Quick Install

# One-line install
curl -fsSL https://raw.githubusercontent.com/craigm26/Ralph/main/install.sh | bash

# Or clone and run
git clone https://github.com/craigm26/Ralph.git
cd Ralph
./install.sh

Manual Setup

1. Install Dependencies

Ubuntu / Debian / Raspberry Pi OS:

sudo apt update
sudo apt install -y curl jq git

Fedora:

sudo dnf install -y curl jq git

Arch Linux:

sudo pacman -S --needed curl jq git

2. Choose Your Agent

Option A: Gemini CLI

# Install Node.js
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs

# Install Gemini CLI
sudo npm install -g @google/gemini-cli

# Authenticate
gemini auth login

Option B: Ollama (Local Models)

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a coding model
ollama pull codellama:13b

# For Raspberry Pi, use smaller models
ollama pull codellama:7b
ollama pull phi:latest

Option C: OpenAI API

# Just set your API key
export OPENAI_API_KEY="sk-..."

# Add to .bashrc for persistence
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc

3. Download Ralph

git clone https://github.com/craigm26/Ralph.git
cd Ralph
chmod +x ralph.sh install.sh

4. Run Ralph

# With Gemini (default)
./ralph.sh

# With Ollama
./ralph.sh ollama

# With OpenAI
./ralph.sh openai

Raspberry Pi Setup

Hardware Recommendations

| Model | RAM | Suitable For |
|---|---|---|
| Pi 5 8GB | 8GB | Ollama with 7B models |
| Pi 5 4GB | 4GB | API agents only |
| Pi 4 8GB | 8GB | Ollama with small models |
| Pi 4 4GB | 4GB | API agents only |

Ollama on Raspberry Pi

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Use smaller models for Pi
ollama pull phi:latest          # 2.7GB, good for Pi
ollama pull codellama:7b        # 4GB, needs 8GB Pi
ollama pull tinyllama:latest    # 600MB, very fast

# Run Ralph with small model
./ralph.sh ollama --model phi:latest

Performance Tips for Pi

  1. Use API agents - Gemini and OpenAI work great on Pi
  2. Swap space - Add swap for larger local models:
    sudo dphys-swapfile swapoff
    sudo nano /etc/dphys-swapfile  # Set CONF_SWAPSIZE=4096
    sudo dphys-swapfile setup
    sudo dphys-swapfile swapon
    
  3. Active cooling - Required for sustained model inference
  4. SSD storage - Faster than SD card for model loading
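
After enlarging swap with the steps in tip 2, it is worth confirming the new allocation actually took effect:

```shell
# Total RAM and swap currently available
free -h

# Active swap devices and their sizes
swapon --show
```

If `swapon --show` prints nothing, the swap file was not re-enabled after the resize.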

Running as a Service

# Create the service file
sudo nano /etc/systemd/system/ralph.service

[Unit]
Description=Ralph Autonomous Agent
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/myproject
ExecStart=/home/pi/Ralph/ralph.sh --force
Restart=on-failure

[Install]
WantedBy=multi-user.target

# Enable and start the service
sudo systemctl enable ralph
sudo systemctl start ralph
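
Once the service is enabled, standard systemd tooling shows whether Ralph is running and what it is printing (systemd captures the script's stdout and stderr in the journal):

```shell
# Check service state
sudo systemctl status ralph

# Follow Ralph's output live
sudo journalctl -u ralph -f
```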

Ubuntu Server Setup

Headless Installation

# Update system
sudo apt update && sudo apt upgrade -y

# Install dependencies
sudo apt install -y curl jq git nodejs npm

# Install Gemini CLI
sudo npm install -g @google/gemini-cli

# Clone Ralph
git clone https://github.com/craigm26/Ralph.git
cd Ralph
chmod +x ralph.sh

# Run in background
nohup ./ralph.sh --force > ralph.log 2>&1 &
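
With Ralph detached via nohup, ralph.log is your window into progress:

```shell
# Follow the log as Ralph works (Ctrl+C stops watching, not Ralph)
tail -f ralph.log

# Confirm the background process is still alive
pgrep -f ralph.sh
```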

Using Screen or tmux

# Install screen
sudo apt install -y screen

# Start Ralph in screen
screen -S ralph
./ralph.sh

# Detach: Ctrl+A, D
# Reattach: screen -r ralph
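
tmux works the same way if you prefer it to screen:

```shell
# Install tmux
sudo apt install -y tmux

# Start Ralph in a named session
tmux new -s ralph
./ralph.sh

# Detach: Ctrl+B, D
# Reattach: tmux attach -t ralph
```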

Docker Setup

FROM ubuntu:22.04

RUN apt-get update && apt-get install -y \
    curl jq git nodejs npm \
    && rm -rf /var/lib/apt/lists/*

RUN npm install -g @google/gemini-cli

WORKDIR /app
COPY . .
RUN chmod +x ralph.sh

CMD ["./ralph.sh", "--force"]

# Build the image, then run it with your project mounted
docker build -t ralph .
docker run -it -v "$(pwd)":/project ralph

Environment Variables

| Variable | Description | Required For |
|---|---|---|
| OPENAI_API_KEY | OpenAI API key | openai agent |
| ANTHROPIC_API_KEY | Anthropic API key | anthropic agent |
| GEMINI_API_KEY | Gemini API key | gemini (optional) |
| RALPH_AGENT | Default agent | All |
| RALPH_MODEL | Default model | All |

Setting Environment Variables

# Current session
export OPENAI_API_KEY="sk-..."

# Permanent (bash)
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc
source ~/.bashrc

# Permanent (zsh)
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.zshrc
source ~/.zshrc
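
The same pattern applies to Ralph's own defaults. Assuming ralph.sh reads RALPH_AGENT and RALPH_MODEL as described in the variable table, you can set them once and then run ./ralph.sh with no arguments (the agent and model names here are illustrative):

```shell
# Pick defaults so plain ./ralph.sh does the right thing
export RALPH_AGENT="ollama"
export RALPH_MODEL="codellama:7b"

# Verify they are set
echo "$RALPH_AGENT $RALPH_MODEL"   # → ollama codellama:7b
```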

Troubleshooting

“jq: command not found”

sudo apt install -y jq

“curl: command not found”

sudo apt install -y curl

Ollama connection refused

# Start Ollama service
ollama serve

# Or as systemd service
sudo systemctl start ollama
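
To confirm the daemon is actually reachable, Ollama answers plain HTTP on its default port (11434):

```shell
# Root endpoint replies with a short status message
curl -s http://localhost:11434/

# /api/tags lists the models you have pulled
curl -s http://localhost:11434/api/tags | jq .
```

A "connection refused" from curl here means the daemon itself is down, not a Ralph problem.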

“Permission denied” on ralph.sh

chmod +x ralph.sh

Out of memory on Raspberry Pi

# Use smaller models
./ralph.sh ollama --model phi:latest

# Or use API agents
./ralph.sh gemini

Node.js version too old

# Install newer Node.js
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs

Network Setup (Multi-Machine)

Run Ollama on a powerful machine, use Ralph from another:

Server (GPU machine):

# Allow remote connections
OLLAMA_HOST=0.0.0.0 ollama serve

# Or edit systemd service
sudo systemctl edit ollama
# Add: Environment="OLLAMA_HOST=0.0.0.0"
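
Before switching to the client, confirm the server is reachable over the network (substitute your server's LAN address for the example IP):

```shell
# Should return a JSON list of the server's models
curl -s http://192.168.1.100:11434/api/tags
```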

Client (Raspberry Pi or laptop):

./ralph.sh network --endpoint http://192.168.1.100:11434/api/chat --model codellama:34b

Best Practices

  1. Use Git - Ralph commits progress, so initialize git in your project
  2. Write clear tasks - Specific success criteria help Ralph succeed
  3. Add guardrails - When something fails repeatedly, add a sign
  4. Monitor logs - Use ./ralph.sh watch to see activity
  5. Start small - Test with 2-3 iterations before long runs
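
For tip 1, a new project only needs a repository initialized before the first run (myproject is a placeholder for your project directory):

```shell
# Give Ralph a git history to commit into
cd myproject
git init
git add -A
git commit -m "Baseline before running Ralph"
```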

Next Steps