Complete guide for running Ralph on Linux systems including Ubuntu, Debian, and Raspberry Pi OS.
| Distribution | Version | Tested |
|---|---|---|
| Ubuntu | 20.04+ | Yes |
| Debian | 11+ | Yes |
| Raspberry Pi OS | Bookworm | Yes |
| Fedora | 38+ | Yes |
| Arch Linux | Rolling | Yes |
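Not sure which distribution or version you're on? `/etc/os-release` reports both:

```bash
# Print the distribution name and version
cat /etc/os-release
```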
```bash
# One-line install
curl -fsSL https://raw.githubusercontent.com/craigm26/Ralph/main/install.sh | bash

# Or clone and run
git clone https://github.com/craigm26/Ralph.git
cd Ralph
./install.sh
```
Ubuntu / Debian / Raspberry Pi OS:

```bash
sudo apt update
sudo apt install -y curl jq git
```
Fedora:

```bash
sudo dnf install -y curl jq git
```
Arch Linux:

```bash
sudo pacman -S --needed curl jq git
```
```bash
# Install Node.js (NodeSource LTS repository)
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs
```
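A quick check confirms both tools landed on your PATH before moving on:

```bash
# Confirm Node.js and npm installed correctly
node --version
npm --version
```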
```bash
# Install Gemini CLI
sudo npm install -g @google/gemini-cli

# Authenticate
gemini auth login
```
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a coding model
ollama pull codellama:13b

# For Raspberry Pi, use smaller models
ollama pull codellama:7b
ollama pull phi:latest
```
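You can confirm which models finished downloading with `ollama list`:

```bash
# Show locally available models and their sizes
ollama list
```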
```bash
# Just set your API key
export OPENAI_API_KEY="sk-..."

# Add to .bashrc for persistence
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc
```
```bash
git clone https://github.com/craigm26/Ralph.git
cd Ralph
chmod +x ralph.sh install.sh
```
```bash
# With Gemini (default)
./ralph.sh

# With Ollama
./ralph.sh ollama

# With OpenAI
./ralph.sh openai
```
| Pi Model | RAM | Suitable For |
|---|---|---|
| Pi 5 8GB | 8GB | Ollama with 7B models |
| Pi 5 4GB | 4GB | API agents only |
| Pi 4 8GB | 8GB | Ollama with small models |
| Pi 4 4GB | 4GB | API agents only |
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Use smaller models for Pi
ollama pull phi:latest         # 2.7GB, good for Pi
ollama pull codellama:7b       # 4GB, needs 8GB Pi
ollama pull tinyllama:latest   # 600MB, very fast

# Run Ralph with small model
./ralph.sh ollama --model phi:latest
```
```bash
sudo dphys-swapfile swapoff
sudo nano /etc/dphys-swapfile   # Set CONF_SWAPSIZE=4096
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
```
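After the swap file is re-enabled, verify the new size took effect:

```bash
# Confirm swap now reflects CONF_SWAPSIZE
free -h
swapon --show
```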
```bash
# Create service file
sudo nano /etc/systemd/system/ralph.service
```
```ini
[Unit]
Description=Ralph Autonomous Agent
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/myproject
ExecStart=/home/pi/Ralph/ralph.sh --force
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```bash
sudo systemctl daemon-reload
sudo systemctl enable ralph
sudo systemctl start ralph
```
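Once enabled, the usual systemd tools show whether the unit started cleanly and let you follow its output (`ralph` is the unit name from the file above):

```bash
# Check service state
systemctl status ralph

# Follow Ralph's log output live
sudo journalctl -u ralph -f
```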
```bash
# Update system
sudo apt update && sudo apt upgrade -y

# Install dependencies
sudo apt install -y curl jq git nodejs npm

# Install Gemini CLI
sudo npm install -g @google/gemini-cli

# Clone Ralph
git clone https://github.com/craigm26/Ralph.git
cd Ralph
chmod +x ralph.sh

# Run in background
nohup ./ralph.sh --force > ralph.log 2>&1 &
```
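Because output is redirected, you can follow progress in the log file:

```bash
# Watch Ralph's output as it runs
tail -f ralph.log
```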
```bash
# Install screen
sudo apt install -y screen

# Start Ralph in screen
screen -S ralph
./ralph.sh

# Detach: Ctrl+A, D
# Reattach: screen -r ralph
```
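If you prefer tmux, the equivalent workflow is:

```bash
# Start Ralph in a tmux session
tmux new -s ralph
./ralph.sh

# Detach: Ctrl+B, D
# Reattach:
tmux attach -t ralph
```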
```dockerfile
FROM ubuntu:22.04

RUN apt-get update && apt-get install -y \
    curl jq git nodejs npm \
    && rm -rf /var/lib/apt/lists/*

RUN npm install -g @google/gemini-cli

WORKDIR /app
COPY . .
RUN chmod +x ralph.sh

CMD ["./ralph.sh", "--force"]
```
```bash
docker build -t ralph .
docker run -it -v "$(pwd)":/project ralph
```
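For unattended runs, here is a sketch of a detached container that passes an API key through from the host (the container name and key variable are illustrative):

```bash
# Run detached with an API key from the host environment
docker run -d --name ralph \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -v "$(pwd)":/project \
  ralph

# Follow the container's output
docker logs -f ralph
```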
| Variable | Description | Required For |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | `openai` agent |
| `ANTHROPIC_API_KEY` | Anthropic API key | `anthropic` agent |
| `GEMINI_API_KEY` | Gemini API key | `gemini` (optional) |
| `RALPH_AGENT` | Default agent | All |
| `RALPH_MODEL` | Default model | All |
```bash
# Current session
export OPENAI_API_KEY="sk-..."

# Permanent (bash)
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.bashrc
source ~/.bashrc

# Permanent (zsh)
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.zshrc
source ~/.zshrc
```
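If you'd rather not keep keys directly in your shell rc file, one option is to source a separate file with restricted permissions; the `~/.ralph_env` filename here is just an example:

```bash
# Keep keys in a separate, non-world-readable file (filename is illustrative)
cat > ~/.ralph_env <<'EOF'
export OPENAI_API_KEY="sk-..."
EOF
chmod 600 ~/.ralph_env

# Source it from your shell startup file
echo '[ -f ~/.ralph_env ] && source ~/.ralph_env' >> ~/.bashrc
```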
Missing `jq` or `curl`:

```bash
# "jq: command not found"
sudo apt install -y jq

# "curl: command not found"
sudo apt install -y curl
```
Ollama not running:

```bash
# Start Ollama service
ollama serve

# Or as systemd service
sudo systemctl start ollama
```
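To confirm the server is actually listening, hit the API directly; Ollama serves on port 11434 by default:

```bash
# Should return a JSON list of installed models
curl http://localhost:11434/api/tags
```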
Permission denied running `ralph.sh`:

```bash
chmod +x ralph.sh
```
Out of memory on Raspberry Pi:

```bash
# Use smaller models
./ralph.sh ollama --model phi:latest

# Or use API agents
./ralph.sh gemini
```
Node.js too old:

```bash
# Install newer Node.js
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt install -y nodejs
```
Run Ollama on a powerful machine and point Ralph at it from another:
Server (GPU machine):

```bash
# Allow remote connections
OLLAMA_HOST=0.0.0.0 ollama serve

# Or edit the systemd service
sudo systemctl edit ollama
# Add under [Service]: Environment="OLLAMA_HOST=0.0.0.0"
```
Client (Raspberry Pi or laptop):

```bash
./ralph.sh network --endpoint http://192.168.1.100:11434/api/chat --model codellama:34b
```
Run `./ralph.sh watch` to see activity.
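Before starting Ralph on the client, it's worth confirming the remote server is reachable (using the example IP from above):

```bash
# Should return the server's installed models as JSON
curl http://192.168.1.100:11434/api/tags
```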