Coordinate multiple AI coding assistants to collaborate on complex software development tasks
Coordinate Claude, Codex, Gemini, Copilot, and local model backends with intelligent workflows.
Choose between a powerful command-line interface or modern web UI with real-time updates.
Define custom collaboration patterns or use built-in workflows for different scenarios, including offline and hybrid execution.
Security, monitoring, rate limiting, retry logic, and comprehensive test coverage are built in.
The latest implementation supports type-based adapter resolution (agents can use dynamic names), local/offline backends (Ollama and OpenAI-compatible servers), cloud-to-local fallback on recoverable failures, and live local model status probing in the Web UI.
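In practice, type-based resolution means the registry maps each agent's configured `type` to an adapter class instead of hard-coding agent names, so any agent name resolves to the right backend. A minimal sketch of the idea (class and function names here are illustrative, not the project's actual API):

```python
from dataclasses import dataclass

# Illustrative stand-ins for the real adapter classes.
@dataclass
class OllamaAdapter:
    config: dict

@dataclass
class LlamaCppAdapter:  # talks to a local OpenAI-compatible server
    config: dict

# The agent's "type" field selects the adapter class, so dynamically
# named agents ("my-custom-llama", "fast-coder", ...) resolve correctly.
ADAPTER_TYPES = {
    "ollama": OllamaAdapter,
    "llamacpp": LlamaCppAdapter,
}

def resolve_adapter(agent_config: dict):
    try:
        adapter_cls = ADAPTER_TYPES[agent_config["type"]]
    except KeyError:
        raise ValueError(f"unknown agent type: {agent_config['type']!r}")
    return adapter_cls(agent_config)

# A dynamically named agent, resolved purely by its type.
adapter = resolve_adapter({"type": "llamacpp", "endpoint": "http://localhost:9000"})
```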
```mermaid
graph LR
    A[User Request] --> B[AI Orchestrator]
    B --> C{Offline Mode?}
    C -->|Yes| D[Route to Local Agent by type]
    C -->|No| E[Route to Cloud or Local Agent]
    D --> F[Execute Workflow Step]
    E --> F
    F --> G{Step Success?}
    G -->|Yes| H[Next Step]
    G -->|Recoverable failure| I[Fallback Agent]
    I --> H
    H --> J[Final Output + Files]

    style A fill:#667eea,stroke:#667eea,color:#fff
    style B fill:#764ba2,stroke:#764ba2,color:#fff
    style D fill:#43e97b,stroke:#43e97b,color:#fff
    style E fill:#4facfe,stroke:#4facfe,color:#fff
    style I fill:#f093fb,stroke:#f093fb,color:#fff
    style J fill:#00c853,stroke:#00c853,color:#fff
```
Coordinate multiple AI assistants with specialized roles, or run fully local with the `--offline` flag. Local backends are selected by each agent's `type` field, which supports `ollama`, `llamacpp`, `localai`, and `text-generation-webui`.
The AI Orchestrator follows a modular, layered architecture with clear separation of concerns. It's designed for extensibility, reliability, and production-grade performance.
Runtime controls now include offline detection, fallback management, and local model endpoint probing, enabling hybrid and offline execution without changing the core orchestration flow.
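As a sketch of what endpoint probing can look like, the probe below treats a local backend as "up" if its listing endpoint answers within a short timeout. The endpoint paths follow the public Ollama (`/api/tags`) and OpenAI-compatible (`/v1/models`) APIs; the function itself is illustrative, not the project's actual code:

```python
import urllib.error
import urllib.request

# Listing endpoints from the public Ollama and OpenAI-compatible APIs.
PROBE_PATHS = {
    "ollama": "/api/tags",
    "llamacpp": "/v1/models",
}

def probe_local_backend(base_url: str, backend_type: str, timeout: float = 1.5) -> bool:
    """Return True if the local backend answers its listing endpoint in time."""
    url = base_url.rstrip("/") + PROBE_PATHS[backend_type]
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

# Ollama's default port is 11434.
print(probe_local_backend("http://localhost:11434", "ollama"))
```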
```mermaid
flowchart TB
    subgraph "User Interfaces"
        CLI[CLI Shell<br/>Click + Rich]
        WebUI[Web UI<br/>Vue 3 + Socket.IO]
    end

    subgraph "Core Orchestrator"
        Engine[Orchestration Engine]
        Workflow[Workflow Manager]
        Config[Config Manager]
        Session[Session Manager]
        Router[Type-based Adapter Resolver]
    end

    subgraph "Cross-Cutting Concerns"
        Metrics[Prometheus Metrics]
        Cache[Response Cache]
        Retry[Retry Logic]
        Security[Security Layer]
    end

    subgraph "AI Adapters"
        Claude[Claude Adapter]
        Codex[Codex Adapter]
        Gemini[Gemini Adapter]
        Copilot[Copilot Adapter]
        Ollama[Ollama Adapter]
        LlamaCpp[LlamaCpp Adapter]
    end

    subgraph "Runtime Controls"
        Offline[Offline Detector]
        Fallback[Fallback Manager]
        ModelStatus[Local Model Status Probe]
    end

    subgraph "External AI Tools"
        ClaudeCLI[Claude Code CLI]
        CodexCLI[OpenAI Codex CLI]
        GeminiCLI[Google Gemini CLI]
        CopilotCLI[GitHub Copilot CLI]
        OllamaAPI[Ollama API<br/>/api/generate]
        OpenAICompat[Local OpenAI-Compatible API<br/>/v1/completions]
    end

    CLI --> Engine
    WebUI --> Engine
    Engine --> Workflow
    Engine --> Config
    Engine --> Session
    Engine --> Router
    Engine --> Offline
    Engine --> Fallback
    WebUI --> ModelStatus
    ModelStatus --> OllamaAPI
    ModelStatus --> OpenAICompat
    Workflow --> Metrics
    Workflow --> Cache
    Workflow --> Retry
    Workflow --> Security
    Workflow --> Claude
    Workflow --> Codex
    Workflow --> Gemini
    Workflow --> Copilot
    Workflow --> Ollama
    Workflow --> LlamaCpp
    Claude --> ClaudeCLI
    Codex --> CodexCLI
    Gemini --> GeminiCLI
    Copilot --> CopilotCLI
    Ollama --> OllamaAPI
    LlamaCpp --> OpenAICompat

    style CLI fill:#667eea,stroke:#667eea,color:#fff
    style WebUI fill:#667eea,stroke:#667eea,color:#fff
    style Engine fill:#4facfe,stroke:#4facfe,color:#fff
    style Workflow fill:#43e97b,stroke:#43e97b,color:#fff
    style Offline fill:#ffe082,stroke:#ffca28,color:#000
    style Fallback fill:#f8bbd0,stroke:#ec407a,color:#000
```
- **User Interfaces**: CLI and Web UI
- **Core Orchestrator**: core business logic and workflow management
- **Cross-Cutting Concerns**: security, caching, metrics, and logging
- **AI Adapters**: AI agent integrations with a uniform interface
- **External AI Tools**: third-party AI CLI tools
The design leans on a few classic patterns:

- **Adapter**: uniform interface to different AI CLIs
- **Strategy**: configurable workflow strategies
- **Observer**: real-time UI updates via Socket.IO
- **Factory**: agent and workflow creation
- **Singleton**: config and metrics managers
- **Decorator**: retry, cache, and logging decorators
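For example, the retry concern can be expressed as a decorator wrapped around any adapter call. A minimal sketch with exponential backoff and logging (the decorator name and `RecoverableError` type are illustrative, not the project's actual API):

```python
import functools
import logging
import time

log = logging.getLogger("orchestrator")

class RecoverableError(Exception):
    """Illustrative stand-in for transient failures (timeouts, rate limits)."""

def with_retry(max_attempts: int = 3, base_delay: float = 1.0):
    """Retry the wrapped call on recoverable errors with exponential backoff."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except RecoverableError as exc:
                    if attempt == max_attempts:
                        raise
                    delay = base_delay * 2 ** (attempt - 1)
                    log.warning("attempt %d failed (%s); retrying in %.1fs",
                                attempt, exc, delay)
                    time.sleep(delay)
        return wrapper
    return decorator

@with_retry(max_attempts=3)
def call_agent(prompt: str) -> str:
    ...  # adapter invocation would go here
```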
```bash
# Install
git clone <repository-url>
cd AI-Coding-Tools-Collaborative
pip install -r requirements.txt
chmod +x ai-orchestrator

# Verify the install, list available agents, then start the interactive shell
./ai-orchestrator --help
./ai-orchestrator agents
./ai-orchestrator shell
```
```bash
# Start a local backend (example: Ollama)
ollama serve
ollama pull codellama:13b

# Check local backend and model status
./ai-orchestrator models status

# Run a local-only workflow
./ai-orchestrator run "Build a Python CLI todo app" --workflow offline-default --offline
```
```text
# Start the interactive shell
./ai-orchestrator shell

orchestrator (default) > create a REST API for user management
✓ Task completed successfully!
📁 Generated Files:
  📄 api/routes.py
  📄 api/models.py

orchestrator (default) > add JWT authentication
💡 Detected as follow-up to previous task
✓ Authentication added!

orchestrator (default) > /save user-api-project
✓ Session saved!
```
| Workflow | Agents | Iterations | Use Case |
|---|---|---|---|
| default | Codex → Gemini → Claude | 3 | Production-quality code with review |
| quick | Codex only | 1 | Fast prototyping and iteration |
| thorough | Codex → Copilot → Gemini → Claude → Gemini | 5 | Mission-critical or security-sensitive |
| review-only | Gemini → Claude | 2 | Analyzing existing code |
| document | Claude → Gemini | 2 | Generating documentation |
| offline-default | local-code → local-instruct | 2 | Local-only execution in offline/air-gapped setups |
| hybrid | local-code → claude (fallback local-instruct) | 2 | Local draft with cloud review and local failover |
```mermaid
graph TD
    START([Start]) --> LOAD[Load Workflow Config]
    LOAD --> NORM[Normalize legacy and steps formats]
    NORM --> VALIDATE[Validate workflow and agent availability]
    VALIDATE --> INIT[Initialize adapters by type]
    INIT --> ITER{"Iteration < Max?"}
    ITER -->|Yes| STEP[Execute step with primary agent]
    STEP --> OK{Success?}
    OK -->|Yes| CTX[Update context]
    OK -->|Recoverable failure| FB[Run configured fallback]
    FB --> CTX
    OK -->|Non-recoverable failure| FAIL[Record step failure]
    FAIL --> CTX
    CTX --> CHECK{Stop criteria met?}
    CHECK -->|No| ITER
    CHECK -->|Yes| AGG[Aggregate iteration outputs]
    ITER -->|No| AGG
    AGG --> REPORT[Generate final result]
    REPORT --> END([End])

    style START fill:#667eea,stroke:#667eea,color:#fff
    style END fill:#43e97b,stroke:#43e97b,color:#fff
    style FB fill:#f8bbd0,stroke:#ec407a,color:#000
```
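The fallback branch above reads as: run the step's primary agent, and only on a recoverable failure re-run the same step on the configured fallback agent. A self-contained sketch of that logic (adapter classes and step shape are hypothetical, with timeouts and connection errors standing in for the recoverable class of failures):

```python
class FlakyCloudAdapter:
    """Stand-in for a cloud adapter whose endpoint is unreachable."""
    def run(self, role: str, context: dict) -> str:
        raise TimeoutError("cloud endpoint unreachable")

class LocalAdapter:
    """Stand-in for a local model adapter."""
    def run(self, role: str, context: dict) -> str:
        return f"[local output for role={role}]"

def execute_step(step: dict, adapters: dict, context: dict) -> str:
    # Try the primary agent; on a recoverable failure, re-run the same
    # step on the step's configured fallback. Anything else propagates
    # and is recorded as a step failure by the caller.
    try:
        return adapters[step["agent"]].run(step["role"], context)
    except (TimeoutError, ConnectionError):
        fallback = step.get("fallback")
        if fallback is None:
            raise
        return adapters[fallback].run(step["role"], context)

adapters = {"claude": FlakyCloudAdapter(), "local-instruct": LocalAdapter()}
step = {"agent": "claude", "role": "reviewer", "fallback": "local-instruct"}
print(execute_step(step, adapters, {}))  # -> [local output for role=reviewer]
```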
Define your own workflows in `config/agents.yaml`:
```yaml
agents:
  my-custom-llama:
    type: llamacpp
    endpoint: http://localhost:9000
    offline: true
    enabled: true

workflows:
  custom:
    steps:
      - agent: "my-custom-llama"
        role: "implementer"
      - agent: "gemini"
        role: "reviewer"
        fallback: "my-custom-llama"
```
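Once defined, the workflow is selected like any built-in one, e.g. `./ai-orchestrator run "refactor the parser" --workflow custom` (assuming, as with the built-ins, that the `--workflow` value matches the key under `workflows`).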
The documentation set covers:

- Project overview and getting started guide
- System design, patterns, and components
- Comprehensive feature documentation
- Detailed installation and configuration
- A guide for integrating new AI agents
- Production deployment strategies
```bash
# Create a feature branch, run the full check suite, then commit
git checkout -b feature/your-feature
make all
git commit -m "feat: add amazing feature"
```
```text
$ ./ai-orchestrator shell

Welcome to AI Orchestrator v1.0.0
Type /help for available commands

orchestrator (default) > create a Python REST API with FastAPI

🤖 Executing workflow: default

📊 Step 1/3: Codex (Implementation)
⏳ Processing...
✓ Implementation complete!

📊 Step 2/3: Gemini (Review)
⏳ Analyzing code...
✓ Review complete! Found 3 suggestions:
  • Add input validation
  • Include error handling
  • Add API documentation

📊 Step 3/3: Claude (Refinement)
⏳ Implementing improvements...
✓ Task completed successfully!

📁 Generated Files:
  📄 app/main.py (FastAPI app)
  📄 app/models.py (Pydantic models)
  📄 app/routes.py (API routes)
  📄 app/schemas.py (Request/response schemas)
  📄 tests/test_api.py (Unit tests)
  📄 requirements.txt (Dependencies)

Workspace: ./workspace/session-abc123

orchestrator (default) > add authentication
💡 Detected as follow-up to previous task
```
The Web UI provides a visual interface with:

- Multi-line textarea with syntax highlighting
- Socket.IO for real-time progress updates
- Full Monaco editor with IntelliSense
- Viewing, downloading, and managing generated files
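On the server side, real-time progress can be pushed to the browser over Socket.IO roughly as follows. This is a sketch assuming the `python-socketio` package; the `step_progress` event name is illustrative, not the project's actual protocol:

```python
import socketio  # pip install python-socketio

# Async Socket.IO server mounted as an ASGI app.
sio = socketio.AsyncServer(async_mode="asgi", cors_allowed_origins="*")
app = socketio.ASGIApp(sio)

async def report_progress(step_index: int, total: int, message: str) -> None:
    # Broadcast a progress event; the Vue client subscribes to it
    # and updates the progress display in place.
    await sio.emit("step_progress", {
        "step": step_index,
        "total": total,
        "message": message,
    })
```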
```bash
# Run with a specific workflow
./ai-orchestrator run "Build authentication system" --workflow thorough

# Custom iteration count
./ai-orchestrator run "Optimize database queries" --max-iterations 5

# Verbose output
./ai-orchestrator run "Add caching layer" --workflow default --verbose

# Dry run to preview the plan
./ai-orchestrator run "Refactor code" --dry-run

# Load a previous session
./ai-orchestrator shell --load my-project
```
Join developers using AI Orchestrator to build better software faster