Five independent systems coordinating AI coding assistants with enterprise-grade agentic infrastructure: specialized agents, skills library, 34+ MCP tools, project-scoped context memory, and Graphify codebase knowledge graphs
Coordinate Claude, Codex, Gemini, Copilot, and local model backends with intelligent workflows.
Choose between a powerful command-line interface or modern web UI with real-time updates.
Define custom collaboration patterns or use built-in workflows for different scenarios, including offline and hybrid execution.
Security, monitoring, rate limiting, retry logic, and comprehensive test coverage built-in.
9 specialized agents, 22 skills, and 34+ MCP tools provide enterprise-grade support for any development task.
Graph-based memory with hybrid search and project-scoped isolation lets agents learn from past tasks and avoid repeating mistakes.
Turn any directory into a queryable knowledge graph. AST-based code analysis for 19 languages, REST API, interactive HTML visualization, and multi-format export (JSON, GraphML, DOT, Markdown, Obsidian vault).
Export code graphs and context memory as Obsidian vaults. Pre-configured graph view with color-coded node types, [[wikilinks]], and YAML frontmatter - across all three graph systems.
Multi-role team collaboration with free inter-agent communication, lead-gated responses, configurable turn limits, and live communication graphs.
The project is organized into five standalone systems - Orchestrator, Agentic Team, MCP Server, Context Dashboard, and Graphify - and is built to coordinate real coding workflows across Claude, Codex, Gemini, Copilot, and local backends. Orchestrator drives step-based execution and fallback, Agentic Team runs role-based collaboration, MCP Server exposes automation tools, Context Dashboard surfaces memory/telemetry, and Graphify builds AST-powered knowledge graphs across 19 programming languages.
Local model note: Local adapters are fully wired into routing/offline/fallback flows, but they currently return text outputs only and do not directly edit files. They are best for offline drafting, review, and continuity fallback.
graph LR
A[User Request] --> B[AI Orchestrator]
B --> C{Offline Mode?}
C -->|Yes| D[Route to Local Agent by type]
C -->|No| E[Route to Cloud or Local Agent]
D --> F[Execute Workflow Step]
E --> F
F --> G{Step Success?}
G -->|Yes| H[Next Step]
G -->|Recoverable failure| I[Fallback Agent]
I --> H
H --> J[Final Output + Files]
style A fill:#3b82f6,stroke:#3b82f6,color:#fff
style B fill:#2563eb,stroke:#2563eb,color:#fff
style D fill:#10b981,stroke:#10b981,color:#fff
style E fill:#60a5fa,stroke:#60a5fa,color:#fff
style I fill:#60a5fa,stroke:#60a5fa,color:#fff
style J fill:#10b981,stroke:#059669,color:#fff
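The route-and-fallback flow in the diagram can be sketched in a few lines of Python. The `Agent` shape and the agent names here are assumptions for illustration, not the project's actual adapter classes:

```python
# Illustrative sketch of routing + recoverable-failure fallback.
# The Agent dataclass and names are hypothetical, not the project's code.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    name: str
    local: bool
    run: Callable[[str], str]  # returns the step output; raises on failure

def execute_step(prompt: str, primary: Agent, fallback: Optional[Agent],
                 offline: bool) -> str:
    """Prefer a local agent when offline; retry with the fallback on failure."""
    agent = primary
    if offline and not primary.local and fallback is not None and fallback.local:
        agent = fallback  # offline mode: route to the local agent instead
    try:
        return agent.run(prompt)
    except RuntimeError:
        if fallback is not None and fallback is not agent:
            return fallback.run(prompt)  # recoverable failure: run the fallback
        raise

def unreachable_cloud(prompt: str) -> str:
    raise RuntimeError("network unavailable")

cloud = Agent("claude", local=False, run=unreachable_cloud)
local = Agent("ollama", local=True, run=lambda p: f"[local draft] {p}")
print(execute_step("add JWT auth", cloud, local, offline=False))
# → [local draft] add JWT auth
```

The same function covers both diamond branches: the `offline` flag redirects routing up front, while the `except` clause implements the recoverable-failure path.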
Coordinate multiple AI assistants with specialized roles:
Offline mode (`--offline`) routes work to local backends by adapter type: ollama, llamacpp, localai, and text-generation-webui.
Beyond the core engines, we provide enterprise-grade infrastructure that empowers AI agents to accomplish any development task effectively. This includes specialized agents, reusable skills, MCP tools, and persistent context memory with project-scoped isolation.
graph TB
subgraph "🧠 Agentic Infrastructure"
direction TB
subgraph AGENTS["Specialized Agents (9)"]
WEB[Web Frontend]
API[Backend API]
SEC[Security]
OPS[DevOps]
ML[AI/ML]
DB[Database]
end
subgraph SKILLS["Skills Library (22)"]
DEV[Development]
TEST[Testing]
SECS[Security Skills]
DEVOPS[DevOps Skills]
AIML[AI/ML Skills]
DOCS[Documentation]
end
subgraph TOOLS["MCP Tools (34+)"]
CODE[Code Analysis]
SCAN[Security Scan]
TTOOLS[Testing]
DTOOLS[DevOps]
CTX[Context Memory]
end
subgraph CONTEXT["Graph Context System"]
GRAPH[(SQLite + FTS5)]
SEARCH[Hybrid Search]
EMBED[Embeddings]
end
end
AGENTS --> SKILLS
SKILLS --> TOOLS
TOOLS --> CONTEXT
Domain experts for every development area:
Reusable task templates across 6 categories:
Tools exposed via Model Context Protocol:
Persistent memory with intelligent retrieval:
Enforce best practices across domains:
Easy to extend with new capabilities:
graph TB
subgraph "Context System"
subgraph "Storage"
SQLITE[(SQLite DB)]
FTS5[FTS5 Index]
VEC[(Vectors)]
end
subgraph "Nodes"
CONV[Conversations]
TASK[Tasks]
MISTAKE[Mistakes]
PATTERN[Patterns]
end
subgraph "Search"
BM25[BM25]
EMBED[Embeddings]
RRF[RRF Fusion]
end
subgraph "Export"
JSONX[JSON]
GMLX[GraphML]
OBSX["Obsidian Vault<br/>[[wikilinks]] + graph.json"]
end
end
CONV & TASK & MISTAKE --> SQLITE
SQLITE --> FTS5
EMBED --> VEC
BM25 & EMBED --> RRF
SQLITE --> JSONX & GMLX & OBSX
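The hybrid-search path in the diagram (BM25 keyword hits and embedding hits merged by RRF) can be sketched as follows. `k=60` is the conventional reciprocal-rank-fusion constant, not necessarily the project's setting, and the node IDs are made up:

```python
# Minimal reciprocal rank fusion (RRF): each result list contributes
# 1 / (k + rank) per document, and documents are re-ranked by the sum.
def rrf_fuse(rankings: list, k: int = 60) -> list:
    """Merge several ranked result lists into one by summed reciprocal ranks."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["task-42", "mistake-7", "pattern-3"]     # FTS5/BM25 keyword search
vector_hits = ["pattern-3", "task-42", "conv-9"]      # embedding similarity
print(rrf_fuse([bm25_hits, vector_hits]))
# → ['task-42', 'pattern-3', 'mistake-7', 'conv-9']
```

Documents that appear near the top of both lists ("task-42", "pattern-3") rise above documents found by only one retriever, which is the point of the fusion step.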
The AI Orchestrator follows a modular, layered architecture with clear separation of concerns. Five independent systems - Orchestrator, Agentic Team, MCP Server, Context Dashboard, and Graphify - each have their own CLI, config, and adapters with zero cross-imports.
Runtime controls include offline detection, fallback management, local model endpoint probing, and Graphify codebase knowledge graphs for deep project understanding.
flowchart TB
subgraph "User Interfaces"
CLI["CLI Shell<br/>Click + Rich"]
WebUI["Web UI<br/>Vue 3 + Socket.IO"]
MCP["MCP Server<br/>FastMCP 3.x"]
end
subgraph "Core Orchestrator"
Engine["Orchestration Engine"]
Workflow["Workflow Manager"]
Config["Config Manager"]
Session["Session Manager"]
Router["Type-based Adapter Resolver"]
end
subgraph "Cross-Cutting Concerns"
Metrics["Prometheus Metrics"]
Cache["Response Cache"]
Retry["Retry Logic"]
Security["Security Layer"]
end
subgraph "AI Adapters"
Claude["Claude Adapter"]
Codex["Codex Adapter"]
Gemini["Gemini Adapter"]
Copilot["Copilot Adapter"]
Ollama["Ollama Adapter"]
LlamaCpp["LlamaCpp Adapter"]
end
subgraph "Runtime Controls"
Offline["Offline Detector"]
Fallback["Fallback Manager"]
ModelStatus["Local Model Status Probe"]
end
subgraph "External AI Tools"
ClaudeCLI["Claude Code CLI"]
CodexCLI["OpenAI Codex CLI"]
GeminiCLI["Google Gemini CLI"]
CopilotCLI["GitHub Copilot CLI"]
OllamaAPI["Ollama API<br/>/api/generate"]
OpenAICompat["Local OpenAI-Compatible API<br/>/v1/completions"]
end
CLI --> Engine
WebUI --> Engine
MCP --> Engine
Engine --> Workflow
Engine --> Config
Engine --> Session
Engine --> Router
Engine --> Offline
Engine --> Fallback
WebUI --> ModelStatus
ModelStatus --> OllamaAPI
ModelStatus --> OpenAICompat
Workflow --> Metrics
Workflow --> Cache
Workflow --> Retry
Workflow --> Security
Workflow --> Claude
Workflow --> Codex
Workflow --> Gemini
Workflow --> Copilot
Workflow --> Ollama
Workflow --> LlamaCpp
Claude --> ClaudeCLI
Codex --> CodexCLI
Gemini --> GeminiCLI
Copilot --> CopilotCLI
Ollama --> OllamaAPI
LlamaCpp --> OpenAICompat
style CLI fill:#3b82f6,stroke:#3b82f6,color:#fff
style WebUI fill:#3b82f6,stroke:#3b82f6,color:#fff
style Engine fill:#60a5fa,stroke:#60a5fa,color:#fff
style Workflow fill:#10b981,stroke:#10b981,color:#fff
style Offline fill:#f59e0b,stroke:#d97706,color:#fff
style Fallback fill:#a78bfa,stroke:#7c3aed,color:#fff
User-facing interfaces: CLI and Web UI
Core business logic and workflow management
Security, caching, metrics, and logging
AI agent integrations with uniform interface
Third-party AI CLI tools
Uniform interface to different AI CLIs
Configurable workflow strategies
Real-time UI updates via Socket.IO
Agent and workflow creation
Config and metrics managers
Retry, cache, and logging decorators
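As one example of the cross-cutting decorators listed above, a retry decorator might look like the sketch below. The attempt count and delay policy are illustrative, not the project's defaults:

```python
# Sketch of a retry decorator in the spirit of the "Retry Logic" layer.
# Parameters and backoff policy are assumptions for the example.
import time
from functools import wraps

def retry(attempts: int = 3, delay: float = 0.0):
    """Retry the wrapped callable on exception, with a fixed delay between tries."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(flaky(), calls["n"])  # → ok 3
```

A production version would typically add exponential backoff and restrict the caught exception types, but the decorator shape is the same.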
Agentic Team is a separate runtime path from orchestrator workflows. It models a true software team where roles route work to each other at runtime, and only the lead role can finalize the user-facing response.
This path has its own backend/UI (agentic_team/orchestrator/ui/app.py), its own CLI REPL (./ai-orchestrator agentic-shell), dedicated validation, and live communication graph/timeline streaming.
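The lead-gated protocol can be modeled as a small message loop: any role may hand work to another role, but only the lead may emit the final user-facing response, and a turn limit bounds the conversation. The handlers below are canned stand-ins for real agents, and the function names are hypothetical:

```python
# Toy model of lead-gated team routing: roles exchange "message" actions,
# and only the lead role is allowed to "finalize". Handlers are stand-ins.
from collections import deque

def run_team(task: str, handlers: dict, lead: str = "project_manager",
             max_turns: int = 10) -> str:
    queue = deque([(lead, task)])
    turns = 0
    while queue and turns < max_turns:
        turns += 1
        role, payload = queue.popleft()
        action, target, text = handlers[role](payload)
        if action == "finalize":
            if role != lead:
                raise PermissionError(f"{role} may not finalize")
            return text
        queue.append((target, text))  # free inter-agent handoff
    raise TimeoutError("no final response within the turn limit")

handlers = {
    "project_manager": lambda p: (("finalize", None, "Ready to ship")
                                  if p.startswith("Validation passed")
                                  else ("message", "software_developer", f"Implement: {p}")),
    "software_developer": lambda p: ("message", "qa_engineer", "Implementation complete"),
    "qa_engineer": lambda p: ("message", "project_manager", "Validation passed"),
}
print(run_team("REST endpoint + tests", handlers))  # → Ready to ship
```

The `PermissionError` branch is the gate: a developer or QA role that tried to answer the user directly would be rejected, mirroring the lead-gated responses described above.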
flowchart LR
subgraph "Orchestrator Path"
OCLI[ai-orchestrator run/shell] --> OCORE[orchestrator.core]
OCORE --> OWF[predefined workflow steps]
end
subgraph "Agentic Team Path"
AUI[agentic_team/orchestrator/ui/app.py]
ASHELL[ai-orchestrator agentic-shell]
AUI --> AENGINE[agentic_team.engine]
ASHELL --> AENGINE
AENGINE --> PM[project_manager]
PM --> SA[software_architect]
PM --> SD[software_developer]
PM --> QA[qa_engineer]
PM --> DO[devops_engineer]
SA --> SD
SD --> QA
SD --> DO
QA --> PM
DO --> PM
PM --> USER[final response to user]
end
style OCLI fill:#3b82f6,stroke:#3b82f6,color:#fff
style OCORE fill:#2563eb,stroke:#2563eb,color:#fff
style AENGINE fill:#10b981,stroke:#10b981,color:#fff
style PM fill:#f59e0b,stroke:#d97706,color:#fff
style USER fill:#60a5fa,stroke:#60a5fa,color:#fff
Protocol actions: message and finalize
Streamed events: team_turn, team_communication, progress_log
Role definitions: agentic_team.roles

| Role | Purpose | Typical Outgoing Handoffs |
|---|---|---|
| project_manager | Lead, planning, gating final response | architect, developer, QA, DevOps, or finalize to user |
| software_architect | Architecture and constraints | developer or PM |
| software_developer | Implementation | QA, DevOps, or PM |
| qa_engineer | Validation and regressions | developer or PM |
| devops_engineer | Runtime and deployability | developer or PM |
sequenceDiagram
participant PM as project_manager (claude)
participant DEV as software_developer (codex)
participant QA as qa_engineer (gemini)
participant USER as user
PM->>DEV: action=message "Implement endpoint + tests"
DEV->>QA: action=message "Implementation complete, validate"
QA->>PM: action=message "Validation passed"
PM->>USER: action=finalize "Ready to ship"
# Start standalone UI backend
./start-agentic-ui.sh
# Start standalone REPL
./ai-orchestrator agentic-shell
# Inspect team mappings in REPL
/team
/validate
# Full docs (protocol, examples, failure handling)
open AGENTIC_TEAM.md
Graphify, the fifth standalone system in the project, turns any directory into a queryable knowledge graph using deterministic AST analysis - no LLM required for code analysis. It supports 19 programming languages via tree-sitter grammars and provides a full REST API with 34+ endpoints, interactive HTML visualization, Obsidian vault export with a color-coded graph view, and multi-format export. It is built for enterprise-grade production use with thread-safe SQLite + FTS5 storage, file watching for incremental rebuilds, and project-scoped isolation.
graph TB
subgraph Input["Input Layer"]
CLI["CLI (Click)"]
API["REST API (Flask)"]
Watch["File Watcher"]
end
subgraph Core["Core Engine"]
Scanner["Scanner"]
Cache["SHA-256 Cache"]
Graph["GraphStore (SQLite + FTS5)"]
end
subgraph Analyzers["Language Analyzers (19 languages)"]
PY["Python"]
JS["JavaScript"]
TS["TypeScript"]
GO["Go"]
RS["Rust"]
JV["Java"]
MORE["C, C++, Ruby, C#, Kotlin, Scala, PHP, Swift, Lua, Zig, Elixir, ObjC"]
end
subgraph Output["Output Layer"]
HTML["HTML Visualization (vis.js)"]
JSON["JSON Export"]
GML["GraphML Export"]
DOT["DOT / Graphviz"]
MD["Markdown Report"]
OBS["Obsidian Vault"]
end
CLI --> Scanner
API --> Graph
Watch --> Scanner
Scanner --> Cache
Scanner --> Analyzers
Analyzers --> Graph
Graph --> Output
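To illustrate the Analyzers-to-GraphStore flow, here is a stdlib-`ast` sketch for Python sources only. The real system uses tree-sitter grammars across 19 languages, and the node/edge shapes below are assumptions for the example:

```python
# Sketch of AST-driven graph extraction: parse a source file and link it
# to the functions and classes it defines. Python-only, via stdlib `ast`.
import ast

def build_graph(source: str, filename: str = "example.py"):
    """Return (nodes, edges) where edges link the file to its defs/classes."""
    tree = ast.parse(source)
    nodes, edges = [{"id": filename, "type": "file"}], []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            kind = "class" if isinstance(node, ast.ClassDef) else "function"
            nodes.append({"id": node.name, "type": kind})
            edges.append((filename, "defines", node.name))
    return nodes, edges

src = """
class UserAuth:
    def login(self): ...

def connect_db(): ...
"""
nodes, edges = build_graph(src)
print([n["id"] for n in nodes])
# → ['example.py', 'UserAuth', 'connect_db', 'login']  (ast.walk is breadth-first)
```

A fuller analyzer would also emit call and import edges, which is what makes queries like "What connects UserAuth to Database?" answerable.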
Obsidian export writes .obsidian/graph.json with color-coded node types.
# Scan a project directory
graphify scan /path/to/project
# Search the knowledge graph
graphify search "authentication" --limit 20
# Query connections between concepts
graphify query "What connects UserAuth to Database?"
# Find shortest path between nodes
graphify find-path "AuthController" "DatabasePool"
# Generate a full report
graphify report /path/to/project
# Export to multiple formats
graphify export --format json --output graph.json
graphify export --format html --output graph.html
graphify export --format graphml --output graph.graphml
# Export as Obsidian vault for interactive graph exploration
graphify export obsidian /path/to/project --output ./code-vault
# → Open ./code-vault in Obsidian, press Ctrl/Cmd+G for graph view
# Start the REST API server
graphify serve --port 5000
# Watch for file changes and rebuild incrementally
graphify watch /path/to/project
# Detect god nodes (highest connectivity)
graphify god-nodes --limit 10
# Analyze hotspots (most-connected files)
graphify hotspots --limit 10
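The incremental rebuilds behind `graphify watch` rest on content hashing, as the SHA-256 Cache node in the diagram suggests. A minimal sketch of that idea, with an assumed in-memory cache layout:

```python
# Sketch of SHA-256 change detection for incremental rebuilds: a file is
# rescanned only when its content hash differs from the cached one.
# The dict-based cache layout is an assumption for the example.
import hashlib

def needs_rescan(path: str, content: bytes, cache: dict) -> bool:
    """True when the file is new or its content changed since the last scan."""
    digest = hashlib.sha256(content).hexdigest()
    if cache.get(path) == digest:
        return False          # unchanged: reuse previously extracted nodes
    cache[path] = digest      # new or edited: record hash and rescan
    return True

cache = {}
print(needs_rescan("app.py", b"def main(): ...", cache))   # → True  (first scan)
print(needs_rescan("app.py", b"def main(): ...", cache))   # → False (unchanged)
print(needs_rescan("app.py", b"def main(): pass", cache))  # → True  (edited)
```

Hashing content rather than trusting modification times makes rebuilds deterministic: a `touch` with no edit costs nothing, while any real change triggers exactly one rescan.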
Selected product views from the repository screenshot set, giving a quick visual tour before diving into architecture and implementation details.
Workflow controls, live updates, file management, and editor integration in one interface.
Role routing, communication graph, and turn timeline for multi-role execution.
Conversation-first command flow for incremental implementation and follow-up tasks.
Structured orchestration output showing workflow progress and generated artifacts.
Interactive knowledge graph visualization with node inspection, analytics charts, and hybrid search across both systems.
Interactive MCP console for exploring and testing 34+ tools across both orchestrator and agentic team engines.
git clone <repository-url>
cd AI-Coding-Tools-Collaborative
pip install -r requirements.txt
chmod +x ai-orchestrator
./ai-orchestrator --help
./ai-orchestrator agents
./ai-orchestrator shell
# Start local backend (example: Ollama)
ollama serve
ollama pull codellama:13b
# Check local backend and model status
./ai-orchestrator models status
# Run local-only workflow
./ai-orchestrator run "Build a Python CLI todo app" --workflow offline-default --offline
# Start interactive shell
./ai-orchestrator shell
orchestrator (default) > create a REST API for user management
[OK] Task completed successfully!
[FILES] Generated Files:
- api/routes.py
- api/models.py
orchestrator (default) > add JWT authentication
[TIP] Detected as follow-up to previous task
✓ Authentication added!
orchestrator (default) > /save user-api-project
✓ Session saved!
| Workflow | Agents | Iterations | Use Case |
|---|---|---|---|
| default | Codex → Gemini → Claude | 3 | Production-quality code with review |
| quick | Codex only | 1 | Fast prototyping and iteration |
| thorough | Codex → Copilot → Gemini → Claude → Gemini | 5 | Mission-critical or security-sensitive |
| review-only | Gemini → Claude | 2 | Analyzing existing code |
| document | Claude → Gemini | 2 | Generating documentation |
| offline-default | local-code → local-instruct | 2 | Local-only execution in offline/air-gapped setups |
| hybrid | local-code → claude (fallback local-instruct) | 2 | Local draft with cloud review and local failover |
graph TD
START([Start]) --> LOAD[Load Workflow Config]
LOAD --> NORM[Normalize legacy and steps formats]
NORM --> VALIDATE[Validate workflow and agent availability]
VALIDATE --> INIT[Initialize adapters by type]
INIT --> ITER{Iteration < Max?}
ITER -->|Yes| STEP[Execute step with primary agent]
STEP --> OK{Success?}
OK -->|Yes| CTX[Update context]
OK -->|Recoverable failure| FB[Run configured fallback]
FB --> CTX
OK -->|Non-recoverable failure| FAIL[Record step failure]
FAIL --> CTX
CTX --> CHECK{Stop criteria met?}
CHECK -->|No| ITER
CHECK -->|Yes| AGG[Aggregate iteration outputs]
ITER -->|No| AGG
AGG --> REPORT[Generate final result]
REPORT --> END([End])
style START fill:#3b82f6,stroke:#3b82f6,color:#fff
style END fill:#10b981,stroke:#10b981,color:#fff
style FB fill:#a78bfa,stroke:#7c3aed,color:#fff
Define your own workflows in
orchestrator/config/agents.yaml:
agents:
  my-custom-llama:
    type: llamacpp
    endpoint: http://localhost:9000
    offline: true
    enabled: true

workflows:
  custom:
    steps:
      - agent: "my-custom-llama"
        role: "implementer"
      - agent: "gemini"
        role: "reviewer"
        fallback: "my-custom-llama"
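As a sanity check on such a definition, the expected shape can be validated in a few lines. The config is modeled here as a plain dict, and the required keys are inferred from the example above, not taken from the project's actual schema:

```python
# Hypothetical validation of a custom workflow config (dict mirror of the
# YAML example). Required step keys are an assumption, not the real schema.
config = {
    "agents": {
        "my-custom-llama": {"type": "llamacpp",
                            "endpoint": "http://localhost:9000",
                            "offline": True, "enabled": True},
    },
    "workflows": {
        "custom": {"steps": [
            {"agent": "my-custom-llama", "role": "implementer"},
            {"agent": "gemini", "role": "reviewer",
             "fallback": "my-custom-llama"},
        ]},
    },
}

def validate(config: dict) -> list:
    """Collect human-readable problems; an empty list means the shape is OK."""
    problems = []
    for name, wf in config.get("workflows", {}).items():
        for i, step in enumerate(wf.get("steps", [])):
            for key in ("agent", "role"):
                if key not in step:
                    problems.append(f"{name}: step {i} missing '{key}'")
    return problems

print(validate(config))  # → []
```

Catching a missing `agent` or `role` at load time is cheaper than discovering it mid-run, when a step silently has no adapter to resolve.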
Project overview and getting started guide
System design, patterns, and components
Comprehensive feature documentation
Standalone runtime, communication protocol, and examples
Detailed installation and configuration
Guide for integrating new AI agents
Production deployment strategies
Workflow-based multi-agent orchestration
Specialized agents, skills, MCP tools, context system
Codebase knowledge graph system - architecture, API, and CLI
FastMCP server - 34+ tools for both engines
Graph-based memory with hybrid search
9 domain-specific agent definitions
22 reusable skill templates
git checkout -b feature/your-feature
make all
git commit -m "feat: add amazing feature"
$ ./ai-orchestrator shell
Welcome to AI Orchestrator v1.0.0
Type /help for available commands
orchestrator (default) > create a Python REST API with FastAPI
[AGENT] Executing workflow: default
[STEP] Step 1/3: Codex (Implementation)
⏳ Processing...
✓ Implementation complete!
[STEP] Step 2/3: Gemini (Review)
⏳ Analyzing code...
✓ Review complete! Found 3 suggestions:
• Add input validation
• Include error handling
• Add API documentation
[STEP] Step 3/3: Claude (Refinement)
⏳ Implementing improvements...
✓ Task completed successfully!
[FILES] Generated Files:
- app/main.py (FastAPI app)
- app/models.py (Pydantic models)
- app/routes.py (API routes)
- app/schemas.py (Request/response schemas)
- tests/test_api.py (Unit tests)
- requirements.txt (Dependencies)
Workspace: ./workspace/session-abc123
orchestrator (default) > add authentication
[TIP] Detected as follow-up to previous task
The Web UI provides a visual interface with:
Multi-line textarea with syntax highlighting
Socket.IO for real-time progress
Full Monaco editor with IntelliSense
View, download, and manage generated files
# Run with specific workflow
./ai-orchestrator run "Build authentication system" --workflow thorough
# Custom iterations
./ai-orchestrator run "Optimize database queries" --max-iterations 5
# With verbose output
./ai-orchestrator run "Add caching layer" --workflow default --verbose
# Dry run to preview
./ai-orchestrator run "Refactor code" --dry-run
# Load previous session
./ai-orchestrator shell --load my-project
Join developers using AI Orchestrator to build better software faster