Get Ralph running in 5 minutes with any AI agent.
| If you want… | Use | Setup |
|---|---|---|
| Free + huge context | `gemini` | `npm install -g @google/gemini-cli` |
| Best code quality | `openai` | Set `OPENAI_API_KEY` |
| Privacy / offline | `ollama` | `winget install Ollama.Ollama` |
| GUI for local | `lmstudio` | Download from lmstudio.ai |
| Enterprise | `azure` | Configure endpoint |
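For the `azure` option, the exact configuration depends on your deployment. A minimal sketch, assuming Ralph reads the commonly used Azure OpenAI environment variables (the variable names are an assumption, not confirmed Ralph settings):

```powershell
# Hypothetical Azure setup -- substitute your resource's real values,
# and check Ralph's docs for the variable names it actually reads.
$env:AZURE_OPENAI_ENDPOINT = "https://<resource>.openai.azure.com"
$env:AZURE_OPENAI_API_KEY  = "<key>"
.\ralph.bat azure
```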
```powershell
# 1. Install
npm install -g @google/gemini-cli
gemini auth login

# 2. Run
.\ralph.bat
```
```powershell
# 1. Set API key
$env:OPENAI_API_KEY = "sk-..."

# 2. Run
.\ralph.bat openai
```
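Note that a `$env:` assignment lasts only for the current session. To persist the key for new terminals (a standard Windows mechanism, not Ralph-specific):

```powershell
# Writes the variable to the user environment; new terminals pick it up.
setx OPENAI_API_KEY "sk-..."
```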
```powershell
# 1. Install
winget install Ollama.Ollama

# 2. Pull a model
ollama pull codellama:13b

# 3. Run
.\ralph.bat ollama
```
```powershell
# Point to any OpenAI-compatible API
.\ralph.bat network -Endpoint http://192.168.1.100:8080/v1/chat/completions
```
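Before pointing Ralph at a server, you can sanity-check that it actually speaks the OpenAI chat-completions protocol by sending a minimal request yourself (the host and model name below are illustrative, not defaults):

```powershell
# Minimal OpenAI-compatible chat request; adjust host and model to your server.
$body = @{
    model    = "local-model"
    messages = @(@{ role = "user"; content = "ping" })
} | ConvertTo-Json -Depth 3
Invoke-RestMethod -Uri "http://192.168.1.100:8080/v1/chat/completions" `
    -Method Post -ContentType "application/json" -Body $body
```

A JSON response with a `choices` array indicates the endpoint is compatible.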
Edit `RALPH_TASK.md`:

```markdown
---
task: Build REST API
test_command: npm test
---

# Task: REST API

## Success Criteria
1. [ ] GET /health returns 200
2. [ ] POST /users works
3. [ ] Tests pass
```
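Once the API is running, you can verify the first criterion by hand (port 3000 is an assumption; use whatever port your server listens on):

```powershell
# Expect 200 if the health endpoint is up.
(Invoke-WebRequest -Uri "http://localhost:3000/health" -UseBasicParsing).StatusCode
```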
```powershell
# Run
.\ralph.bat

# Watch progress
.\ralph.bat watch
```
```powershell
# Check errors
Get-Content .ralph\errors.log

# Add guardrail to prevent repeat
notepad .ralph\guardrails.md
```
Add this format:

```markdown
### Sign: [What went wrong]
- **Trigger**: [When it happens]
- **Instruction**: [What to do instead]
```
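A filled-in entry might look like this (the scenario is an invented example, not a built-in guardrail):

```markdown
### Sign: Agent reinstalls dependencies every loop
- **Trigger**: `npm install` appears in two consecutive iterations
- **Instruction**: Skip installation when node_modules exists; run `npm test` directly
```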
| Command | What it does |
|---|---|
| `ralph.bat` | Run default |
| `ralph.bat openai` | Use OpenAI |
| `ralph.bat ollama -Model codellama:34b` | Use specific model |
| `ralph.bat watch` | Monitor logs |
| `ralph.bat models ollama` | List models |
| `ralph.bat init` | Reset state |