GitIntel

Git Intelligence MCP Server — deep repository analytics computed locally from your commit history. No external APIs, no data leaves your machine.

TypeScript 5.7 · Node.js ≥18 · MCP SDK 1.27 · Zod 3.24 · Vitest 3.0 · Git ≥2.20 · MIT License

What GitIntel Does

GitIntel surfaces the same insights that tools like CodeScene and GitPrime charge for — hotspots, temporal coupling, knowledge maps, churn analysis, complexity trends, risk scoring, and more. Everything runs locally against your .git directory.

🔥
Change Hotspots

Find files changed most frequently — the top 4% contain 50%+ of bugs.

🔄
Code Churn

Detect code rewritten as fast as it's written. Churn ratio near 1.0 = instability.

🔗
Temporal Coupling

Hidden dependencies not visible in imports — files that always change together.

🧠
Knowledge Map

Who knows what, weighted by recency. Find the right reviewer, spot knowledge silos.

📈
Complexity Trend

Track how files grow in lines, nesting depth, and functions over time.

⚠️
Risk Assessment

Score changes 0–100 combining hotspot history, size, sensitivity, and spread.

📜
Release Notes

Structured changelogs from conventional commits with breaking change detection.

👥
Contributor Stats

Team dynamics, collaboration graphs, workload distribution, and knowledge silos.

📖
File History

Full commit history of any file with rename tracking — who changed it, when, and why.

Code Age

Age map of every file — identify stale code vs actively maintained areas.

📅
Commit Patterns

Work pattern analytics: day-of-week, time-of-day, commit sizes, and velocity trends.

🌱
Branch Risk

Analyze branch staleness, divergence from main, and merge risk across all branches.

System Architecture

GitIntel is a Node.js MCP server that communicates over stdio using JSON-RPC 2.0. It calls Git via execFile (never exec) to prevent shell injection. All operations are strictly read-only.
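The executor layer described above can be sketched as follows. This is an illustrative wrapper, not the project's verbatim source: the name `gitExec` comes from the diagrams in this document, but the exact option shape is an assumption.

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

// Sketch of a safe git runner: arguments go through an array (no shell
// interpolation possible), with the timeout, buffer cap, and environment
// hardening described in the Security Model section.
async function gitExec(args: string[], cwd: string): Promise<string> {
  const { stdout } = await execFileAsync("git", args, {
    cwd,
    timeout: 30_000,              // 30s timeout
    maxBuffer: 50 * 1024 * 1024,  // 50MB max buffer
    env: {
      ...process.env,
      GIT_TERMINAL_PROMPT: "0",   // never block on credential prompts
      GIT_PAGER: "",              // no pager
      LC_ALL: "C",                // stable, locale-independent output
    },
  });
  return stdout;
}
```

Because `execFile` receives the arguments as an array, a malicious input like `"; rm -rf /"` is passed to git as a literal string rather than interpreted by a shell.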

graph TD
    Client["MCP Client<br/>Claude Code / Codex"]
    Server["mcp-git-intel<br/>Node.js process"]
    Git["Git CLI<br/>execFile, no shell"]
    Repo[".git directory<br/>read-only"]

    Client <-->|"stdio JSON-RPC 2.0<br/>MCP protocol"| Server
    Server -->|"execFile with args array<br/>30s timeout, 50MB buffer"| Git
    Git -->|"stdout/stderr"| Server
    Git -->|"read-only operations"| Repo

    subgraph ServerInternals ["Server Internals"]
        direction TB
        Transport["StdioServerTransport"]
        McpServer["McpServer SDK"]
        Tools["12 Tool Handlers"]
        Resources["2 Resource Handlers"]
        GitLayer["Git Layer<br/>executor, parser, repo"]
        UtilLayer["Util Layer<br/>scoring, formatting"]

        Transport --> McpServer
        McpServer --> Tools
        McpServer --> Resources
        Tools --> GitLayer
        Tools --> UtilLayer
        Resources --> GitLayer
    end
        

Data Flow

Every tool invocation follows the same pipeline: validate, execute, parse, score, format, return.

sequenceDiagram
    participant C as MCP Client
    participant S as Tool Handler
    participant V as Validation (Zod)
    participant G as Git Executor
    participant P as Parser
    participant Sc as Scoring Engine
    participant F as Formatter

    C->>S: callTool(name, args)
    S->>V: Validate inputs
    V->>V: validatePathFilter() / validateRef()
    S->>G: gitExec(args, options)
    G->>G: execFile('git', [...args])
    G-->>S: stdout / stderr
    S->>P: Parse git output
    P-->>S: Structured data
    S->>Sc: Calculate scores
    Sc-->>S: Scored results
    S->>F: Format tables and bars
    F-->>S: Formatted text
    S-->>C: CallToolResult
        

Module Dependency Graph

graph TD
    index["src/index.ts<br/>Entry point"]

    subgraph ToolsLayer ["Tools Layer"]
        hotspots["hotspots.ts"]
        churn["churn.ts"]
        coupling["coupling.ts"]
        knowledge["knowledge-map.ts"]
        complexity["complexity.ts"]
        risk["risk.ts"]
        release["release-notes.ts"]
        contributors["contributors.ts"]
        filehistory["file-history.ts"]
        codeage["code-age.ts"]
        commitpatterns["commit-patterns.ts"]
        branchrisk["branch-risk.ts"]
    end

    subgraph ResourcesLayer ["Resources Layer"]
        summary["summary.ts"]
        activity["activity.ts"]
    end

    subgraph GitLayer ["Git Layer"]
        executor["executor.ts<br/>execFile wrapper"]
        parser["parser.ts<br/>Output parsers"]
        repo["repo.ts<br/>Validation"]
    end

    subgraph UtilLayer ["Util Layer"]
        scoring["scoring.ts<br/>Score algorithms"]
        formatting["formatting.ts<br/>Output formatting"]
    end

    index --> hotspots & churn & coupling & knowledge & complexity & risk & release & contributors
    index --> filehistory & codeage & commitpatterns & branchrisk
    index --> summary & activity

    hotspots --> executor & repo & parser & formatting & scoring
    churn --> executor & repo & formatting & scoring
    coupling --> executor & repo & formatting & scoring
    knowledge --> executor & repo & formatting & scoring
    complexity --> executor & repo & formatting
    risk --> executor & repo & formatting & scoring
    release --> executor & repo & parser & formatting
    contributors --> executor & formatting & scoring
    filehistory --> executor & repo & formatting
    codeage --> executor & repo & formatting & scoring
    commitpatterns --> executor & formatting
    branchrisk --> executor & formatting & scoring

    summary --> executor
    activity --> executor
    repo --> executor
        

Entry Point Flow

graph TD
    Start["main()"] --> ExpandHome["Expand ~ in repo path"]
    ExpandHome --> CheckGit["checkGitVersion()"]
    CheckGit -->|"< 2.20"| WarnGit["Warn: old git version"]
    CheckGit -->|">= 2.20"| ResolveRoot["resolveRepoRoot()"]
    WarnGit --> ResolveRoot
    ResolveRoot -->|"not a repo"| NullRoot["repoRoot = null<br/>(tools use repo_path param)"]
    ResolveRoot -->|"valid"| ValidRoot["repoRoot = path"]
    NullRoot --> CreateServer["Create McpServer"]
    ValidRoot --> CreateServer
    CreateServer --> RegisterTools["Register 12 tools"]
    RegisterTools --> RegisterResources["Register 2 resources"]
    RegisterResources --> Connect["Connect StdioServerTransport"]
    Connect --> Running["Server running on stdio"]
    Running -->|"SIGINT / SIGTERM"| Shutdown["Graceful shutdown"]
        

All 12 Analysis Tools

Each tool returns formatted text with tables, score bars [████████░░] 80, and actionable recommendations — not raw git output.
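A minimal sketch of how such a score bar might be rendered. The helper name `scoreBar` and the ten-character width are assumptions, not the actual formatting.ts API:

```typescript
// Map a 0-100 score to a ten-character bar of filled/empty blocks,
// e.g. 80 → "[████████░░] 80".
function scoreBar(score: number, width = 10): string {
  const clamped = Math.max(0, Math.min(100, score));
  const filled = Math.round((clamped / 100) * width);
  return `[${"█".repeat(filled)}${"░".repeat(width - filled)}] ${clamped}`;
}
```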

graph TD
    subgraph HotspotAnalysis ["Hotspot Analysis"]
        H["hotspots<br/>Change frequency"]
        CH["churn<br/>Write/rewrite ratio"]
        CO["coupling<br/>Temporal coupling"]
    end
    subgraph CodeArchaeology ["Code Archaeology"]
        FH["file_history<br/>File commit log"]
        CA["code_age<br/>Staleness map"]
        CT["complexity_trend<br/>Complexity over time"]
    end
    subgraph TeamAnalysis ["Team Analysis"]
        KM["knowledge_map<br/>Who knows what"]
        CS["contributor_stats<br/>Team dynamics"]
        CP["commit_patterns<br/>Work patterns"]
    end
    subgraph RiskRelease ["Risk & Release"]
        RA["risk_assessment<br/>Change risk scoring"]
        RN["release_notes<br/>Changelog generation"]
        BR["branch_risk<br/>Branch staleness"]
    end
        
| Tool | What it does | Key Insight |
|---|---|---|
| `hotspots` | Files that change most frequently | Top 4% of files contain 50%+ of bugs |
| `churn` | Code written then rewritten | Churn ratio near 1.0 = instability |
| `coupling` | Files that always change together | Hidden dependencies not in imports |
| `knowledge_map` | Who knows a file/dir best (recency weighted) | Find reviewers, spot silos |
| `complexity_trend` | How file complexity evolves over time | Catch files growing out of control |
| `risk_assessment` | Risk score 0–100 for changes | Hotspot + size + sensitivity + spread |
| `release_notes` | Structured changelog from conventional commits | Groups by type, extracts breaking changes |
| `contributor_stats` | Team dynamics, collaboration graph | Workload distribution, knowledge silos |
| `file_history` | Full commit history of a single file | Trace evolution, find bug introductions |
| `code_age` | Age map of every file in the repo | Find stale code vs active areas |
| `commit_patterns` | When and how the team commits | Weekend/late-night work, velocity trends |
| `branch_risk` | Branch staleness and divergence analysis | Merge risk, cleanup candidates |

hotspots — Change Frequency Analysis

The top 4% of files by change frequency typically contain 50%+ of bugs. Use this to identify files needing refactoring, better test coverage, or architectural attention.

graph LR
    A["git log --name-only<br/>--since=N days"] --> B["Count file appearances<br/>across commits"]
    B --> C["Normalize to 0-100<br/>heat score"]
    C --> D["Format table with<br/>score bars"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `days` | integer (>0) | 90 | Days to look back |
| `limit` | integer (1–100) | 20 | Max results |
| `path_filter` | string | | Filter to files under this path |

Interpretation: Files with high change frequency are candidates for refactoring, better test coverage, or breaking into smaller modules. Files changed by many authors may indicate unclear ownership.
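The counting-and-normalizing step from the pipeline diagram can be sketched like this (a hypothetical reconstruction, not the project's actual code):

```typescript
// Given the file lists of recent commits (one string[] per commit),
// count how often each file appears, then normalize so the most
// frequently changed file scores 100.
function heatScores(commits: string[][]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const files of commits) {
    for (const file of files) {
      counts.set(file, (counts.get(file) ?? 0) + 1);
    }
  }
  const max = Math.max(...counts.values(), 1);
  const scores = new Map<string, number>();
  for (const [file, n] of counts) {
    scores.set(file, Math.round((n / max) * 100));
  }
  return scores;
}
```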

churn — Code Churn Analysis

High churn indicates instability, unclear requirements, or code that is hard to get right.

graph LR
    A["git log --numstat<br/>--since=N days"] --> B["Sum additions/deletions<br/>per file"]
    B --> C["Calculate churn ratio<br/>deletions / additions"]
    C --> D["Sort by total churn<br/>format table"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `days` | integer (>0) | 90 | Days to look back |
| `limit` | integer (1–100) | 20 | Max results |
| `path_filter` | string | | Filter to files under this path |

Churn ratio near 1.0 means code is being rewritten as fast as it's written. Investigate root cause: unstable requirements? Wrong abstraction? Hard problem?
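A sketch of the aggregation step, assuming `git log --numstat` output has already been split into `"adds\tdels\tpath"` lines (the function name and types are illustrative):

```typescript
interface FileChurn { additions: number; deletions: number; ratio: number }

// Sum additions/deletions per file and compute the deletions/additions
// churn ratio. Binary files show "-" in numstat and are skipped.
function aggregateChurn(numstatLines: string[]): Map<string, FileChurn> {
  const byFile = new Map<string, FileChurn>();
  for (const line of numstatLines) {
    const [adds, dels, path] = line.split("\t");
    if (adds === "-" || dels === "-") continue; // binary file
    const entry = byFile.get(path) ?? { additions: 0, deletions: 0, ratio: 0 };
    entry.additions += Number(adds);
    entry.deletions += Number(dels);
    entry.ratio = entry.additions === 0 ? 0 : entry.deletions / entry.additions;
    byFile.set(path, entry);
  }
  return byFile;
}
```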

coupling — Temporal Coupling Detection

Reveals hidden dependencies not visible in imports or type signatures.

graph LR
    A["git log --name-only<br/>multi-file commits"] --> B["Build co-change<br/>matrix"]
    B --> C["coupling = shared /<br/>min(A, B)"]
    C --> D["Filter by threshold<br/>format pairs"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `days` | integer (>0) | 90 | Days to look back |
| `min_coupling` | float (0–1) | 0.5 | Minimum coupling score |
| `min_commits` | integer (>0) | 3 | Minimum shared commits |
| `limit` | integer (1–50) | 20 | Max pairs |
| `path_filter` | string | | Filter to path |

Coupling formula: shared_commits / min(commits_A, commits_B). Uses min so that if B always changes with A, coupling is 1.0 even if A changes independently.
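The formula as code (trivial, but it makes the asymmetry concrete):

```typescript
// shared commits divided by the *smaller* of the two change counts:
// if B changed 5 times and all 5 were with A, coupling is 1.0 even if
// A changed 20 times on its own.
function couplingScore(
  commitsA: number,
  commitsB: number,
  shared: number,
): number {
  return shared / Math.min(commitsA, commitsB);
}
```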

knowledge_map — Who Knows What

Find the right reviewer. Identify knowledge silos. Plan team transitions.

graph LR
    A["git log --numstat<br/>per-author stats"] --> B["Weight: 30% volume<br/>30% frequency"]
    B --> C["Weight: 40% recency<br/>30-day half-life"]
    C --> D["Score 0-100<br/>per author"]
    D --> E["Bus factor =<br/>authors with score >= 30"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `path` | string (required) | | File or directory to analyze |
| `days` | integer (>0) | 365 | Days to look back |
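The 30-day half-life from the diagram corresponds to exponential decay. A sketch of the recency weight (name and default are illustrative):

```typescript
// Recency weight with a 30-day half-life: a contribution made 30 days
// ago counts half as much as one made today, 60 days ago a quarter, etc.
function recencyWeight(ageDays: number, halfLifeDays = 30): number {
  return Math.pow(0.5, ageDays / halfLifeDays);
}
```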

complexity_trend — Complexity Over Time

Track how a file's complexity evolves. Catch files growing out of control before they become unmaintainable.

graph LR
    A["git log for file"] --> B["Sample N points<br/>evenly across history"]
    B --> C["git show hash:path<br/>at each sample"]
    C --> D["Measure: lines, depth<br/>functions, long lines"]
    D --> E["Format trend table<br/>with warnings"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `path` | string (required) | | File to analyze |
| `samples` | integer (3–30) | 10 | Time samples |
| `days` | integer (>0) | 180 | Days to look back |

risk_assessment — Change Risk Scoring

Score the risk of uncommitted changes or a commit range before merging. Combines four signals.

graph TD
    D["git diff --numstat"] --> R["Risk Engine"]
    L["git log --name-only<br/>90-day history"] --> R
    R --> H["Hotspot Factor<br/>30% weight"]
    R --> S["Size Factor<br/>25% weight"]
    R --> SE["Sensitivity Factor<br/>30% weight"]
    R --> SP["Spread Factor<br/>15% weight"]
    H & S & SE & SP --> O["Overall Risk<br/>0-100 score"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `ref_range` | string | uncommitted | Git ref range (e.g. `main..feature`) |

Risk Thresholds

| Score | Level | Action |
|---|---|---|
| ≥ 70 | HIGH | Thorough code review required |
| ≥ 40 | MEDIUM | Standard review, extra testing |
| < 40 | LOW | Standard process |
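The factor weights from the diagram and the thresholds above combine into a single score. A sketch, assuming each factor is already normalized to 0–100 (the function name is illustrative):

```typescript
// Weighted combination of the four risk factors:
// hotspot 30%, size 25%, sensitivity 30%, spread 15%.
function overallRisk(
  hotspot: number, size: number, sensitivity: number, spread: number,
): { score: number; level: "HIGH" | "MEDIUM" | "LOW" } {
  const score = Math.round(
    0.3 * hotspot + 0.25 * size + 0.3 * sensitivity + 0.15 * spread,
  );
  const level = score >= 70 ? "HIGH" : score >= 40 ? "MEDIUM" : "LOW";
  return { score, level };
}
```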

Sensitivity Patterns

| Score | File Patterns |
|---|---|
| 100 | .env, .pem, .key, .cert |
| 90–95 | Auth, payment, credential, session files |
| 80 | Database, migration, schema files |
| 70 | Docker, CI/CD, Jenkinsfile |
| 60 | Config files |
| 0 | Everything else |

release_notes — Changelog Generation

Generate structured changelogs from conventional commits between two git refs.

| Parameter | Type | Default | Description |
|---|---|---|---|
| `from_ref` | string (required) | | Starting ref |
| `to_ref` | string | HEAD | Ending ref |
| `group_by` | `type` \| `scope` \| `author` | `type` | Grouping strategy |
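A sketch of how a conventional-commit subject line could be parsed; the real parser.ts likely handles more edge cases (e.g. `BREAKING CHANGE:` footers in the commit body):

```typescript
interface ConventionalCommit {
  type: string;       // feat, fix, chore, ...
  scope?: string;     // optional "(scope)"
  breaking: boolean;  // "!" marker before the colon
  subject: string;
}

// Parse "type(scope)!: subject"; returns null for non-conventional subjects.
function parseSubject(subject: string): ConventionalCommit | null {
  const m = subject.match(/^(\w+)(?:\(([^)]+)\))?(!)?:\s*(.+)$/);
  if (!m) return null;
  return { type: m[1], scope: m[2], breaking: m[3] === "!", subject: m[4] };
}
```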

contributor_stats — Team Dynamics

Comprehensive contributor analytics: activity, collaboration graph, knowledge silos, focus areas, and commit time patterns.

graph LR
    A["git log --numstat"] --> B["Per-author profiles"]
    B --> C["Collaboration graph<br/>shared files"]
    B --> D["Knowledge silos<br/>sole contributors"]
    B --> E["Focus areas<br/>top directories"]
    B --> F["Activity timeline"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `days` | integer (>0) | 90 | Days to look back |
| `author` | string | | Filter to specific author |

file_history — File Commit Log

Show the full commit history of a specific file — who changed it, when, how much, and why. Uses --follow to track renames across the file's lifetime.

graph LR
    A["git log --follow<br/>--numstat --format"] --> B["Parse commits with<br/>COMMIT: prefix"]
    B --> C["Sum additions/deletions<br/>per commit"]
    C --> D["Format table with<br/>date, author, lines, subject"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `path` | string (required) | | File path to analyze |
| `days` | integer (>0) | 365 | Days to look back |
| `limit` | integer (1–100) | 30 | Max commits to return |

Use cases: Understanding why a file looks the way it does, finding when a bug was introduced, tracing the evolution of a critical module, or preparing for a refactor.

code_age — Staleness Analysis

Shows the last-modified date of every tracked file. Identifies stale files that haven't been touched in months or years vs actively maintained areas.

graph LR
    A["git ls-files"] --> B["Get tracked file list"]
    B --> C["git log --name-only<br/>--diff-filter=AMRC"]
    C --> D["Map: file → last<br/>modified date"]
    D --> E["Sort by age<br/>format with staleness bars"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `path_filter` | string | | Filter to files under this path |
| `limit` | integer (1–100) | 30 | Max files to return |
| `sort` | `oldest` \| `newest` | `oldest` | Sort order |

Interpretation: High-staleness files may be stable infrastructure that rarely needs changes, or abandoned code that should be reviewed for removal. Cross-reference with hotspots to distinguish the two.

commit_patterns — Work Pattern Analytics

Analyze when and how the team commits: day-of-week distribution, hour-of-day heatmap, commit size breakdown, and weekly velocity trends.

graph LR
    A["git log --format=%aI|%aN<br/>--shortstat"] --> B["Parse timestamps<br/>and stat lines"]
    B --> C["Day/hour buckets<br/>size categorization"]
    C --> D["Weekly velocity<br/>trend analysis"]
    D --> E["Auto-generate insights<br/>weekend %, late-night %"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `days` | integer (>0) | 90 | Days to look back |
| `author` | string | | Filter to a specific author |

Commit Size Thresholds

| Category | Lines Changed | Interpretation |
|---|---|---|
| Small | ≤ 20 | Focused, reviewable changes |
| Medium | 21–100 | Standard feature work |
| Large | 101–500 | Consider breaking up |
| Huge | > 500 | Review burden, potential risk |

Watch for: High weekend commit percentages (burnout risk), late-night hotfixes (quality concerns), and >30% huge commits (review bottleneck).
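The size buckets translate directly to code (a trivial sketch using the thresholds above; the names are illustrative):

```typescript
type SizeCategory = "small" | "medium" | "large" | "huge";

// Bucket a commit by total lines changed, per the thresholds table.
function categorizeCommit(linesChanged: number): SizeCategory {
  if (linesChanged <= 20) return "small";
  if (linesChanged <= 100) return "medium";
  if (linesChanged <= 500) return "large";
  return "huge";
}
```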

branch_risk — Branch Staleness & Divergence

Analyze all branches for staleness, divergence from the main branch, and merge risk. Identifies stale branches for cleanup and highly diverged branches that may cause merge conflicts.

graph LR
    A["git branch --format<br/>refname, date, author"] --> B["Parse branch list"]
    B --> C["git rev-list --left-right<br/>--count per branch"]
    C --> D["Compute ahead/behind<br/>staleness scores"]
    D --> E["Categorize: stale >90d<br/>diverged >20 commits"]
        
| Parameter | Type | Default | Description |
|---|---|---|---|
| `base_branch` | string | HEAD | Branch to compare against |
| `include_remote` | boolean | false | Include remote tracking branches |

Cleanup strategy: Branches with 0 ahead commits are likely merged and safe to delete. Branches >90 days stale with significant divergence are high-risk merge candidates.
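The ahead/behind step in the diagram relies on `git rev-list --left-right --count base...branch`, which prints two tab-separated counts: commits only reachable from the base (behind) and only from the branch (ahead). A sketch of parsing that output:

```typescript
// Parse the "behind\tahead" output of
// `git rev-list --left-right --count base...branch`.
function parseAheadBehind(output: string): { ahead: number; behind: number } {
  const [behind, ahead] = output.trim().split(/\s+/).map(Number);
  return { ahead, behind };
}
```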

MCP Resources

Resources expose read-only data that clients can fetch at any time; unlike tools, they are read rather than invoked.

git://repo/summary

Repository snapshot: branch, last commit, total commits, active contributors, top languages, age, remote URL.

git://repo/activity

Recent 50-commit activity feed with hash, relative date, author, subject, and change stats.

Installation

GitIntel is not published to npm. Clone, build, and register locally.

Prerequisites

  • Node.js ≥ 18
  • Git ≥ 2.20

Build from Source

bash
git clone <repo-url>
cd mcp-server
npm install
npm run build

Register with Claude Code

bash
# Analyze current directory
claude mcp add git-intel -- node /absolute/path/to/mcp-server/dist/index.js

# Specific repository
claude mcp add git-intel -- node /absolute/path/to/mcp-server/dist/index.js /path/to/repo

Register with Any MCP Client

json
{
  "mcpServers": {
    "git-intel": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-server/dist/index.js"],
      "env": {
        "GIT_INTEL_REPO": "/path/to/your/repo"
      }
    }
  }
}

Configuration

| Priority | Method | Example |
|---|---|---|
| 1 | CLI argument | `node dist/index.js /path/to/repo` |
| 2 | Environment variable | `GIT_INTEL_REPO=/path/to/repo` |
| 3 | Current working directory | Falls back to `process.cwd()` |

The ~ prefix is expanded to the user's home directory in all path inputs.

Security Model

graph TD
    Input["User Input<br/>tool args"] --> Zod["Zod Schema<br/>Validation"]
    Zod --> PathVal["validatePathFilter()<br/>blocks .. and abs paths"]
    Zod --> RefVal["validateRef()<br/>char whitelist"]
    PathVal & RefVal --> ExecFile["execFile()<br/>no shell, array args"]
    ExecFile --> Git["Git CLI<br/>read-only commands only"]
    ExecFile --> Timeout["30s timeout"]
    ExecFile --> Buffer["50MB max buffer"]
    ExecFile --> Env["GIT_TERMINAL_PROMPT=0<br/>GIT_PAGER=''<br/>LC_ALL=C"]
        
| Threat | Mitigation |
|---|---|
| Shell injection | execFile — array args, no shell interpolation |
| Path traversal | validatePathFilter() blocks .. and absolute paths |
| Ref injection | validateRef() strict character whitelist |
| Write operations | Strictly read-only — no tool modifies the repository |
| Network access | No external network calls — all data is local |
| Interactive prompts | GIT_TERMINAL_PROMPT=0 |
| Timeouts | 30-second default on all git commands |
| Memory exhaustion | 50MB max buffer |
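The two validators named in the diagram might look like this. These are hypothetical reconstructions of their described behavior, not the project's actual repo.ts code:

```typescript
// Reject absolute paths (POSIX and Windows drive letters) and any
// parent-directory component that could escape the repository.
function validatePathFilter(path: string): void {
  if (path.startsWith("/") || /^[A-Za-z]:/.test(path) ||
      path.split(/[\\/]/).includes("..")) {
    throw new Error(`Unsafe path filter: ${path}`);
  }
}

// Allow only characters that appear in ordinary git refs and ranges,
// and refuse anything that could be parsed as a command-line option.
function validateRef(ref: string): void {
  if (!/^[\w./^~@{}-]+$/.test(ref) || ref.startsWith("-")) {
    throw new Error(`Invalid ref: ${ref}`);
  }
}
```

Combined with `execFile`'s array arguments, this means even a validated string can never be interpreted as a flag or shell construct.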

Performance

Git Command Efficiency

  • Targeted --format strings minimize output parsing
  • --no-merges skips merge commits that inflate counts
  • --since filters are pushed to git (server-side filtering)
  • Coupling analysis caps at 50 files/commit to avoid O(n²) pair generation
  • Complexity trend samples evenly (default 10 points) rather than every commit

Output Limits

  • All tools accept a limit parameter (default 20; maximum 50 or 100, depending on the tool)
  • Results sorted by relevance before truncation

Concurrency

  • MCP protocol handles one request at a time over stdio (serial)
  • Each tool makes 1–3 sequential git calls

Deployment

graph LR
    subgraph Build ["Build Pipeline"]
        Code["Source Code"] --> Docker["Docker Build<br/>Multi-stage"]
        Docker --> Image["Container Image<br/>~120MB"]
    end

    subgraph Registry ["Container Registry"]
        Image --> ECR["AWS ECR"]
        Image --> ACR["Azure ACR"]
    end

    subgraph Orchestration ["Orchestration"]
        ECR --> EKS["AWS EKS<br/>Kubernetes"]
        ACR --> AKS["Azure AKS<br/>Kubernetes"]
    end

    subgraph IaC ["Infrastructure as Code"]
        TF["Terraform"] --> EKS
        TF --> AKS
    end
        

The project includes production-ready infrastructure code for both AWS and Azure. See the Infrastructure section for details.

Docker

bash
# Build the image
docker build -t mcp-git-intel:latest .

# Run against a local repo
docker run --rm -v /path/to/repo:/repo mcp-git-intel:latest

# Docker Compose (with health checks, resource limits)
docker compose up

Infrastructure

graph TD
    subgraph TerraformRoot ["Terraform"]
        TFMain["terraform/<br/>main.tf, variables.tf"]
        TFModules["modules/<br/>networking, eks, aks"]
    end

    subgraph AWS ["AWS Stack"]
        VPC["VPC + Subnets"]
        ECR["ECR Repository"]
        EKS["EKS Cluster"]
        IAM["IAM Roles"]
        VPC --> EKS
        ECR --> EKS
        IAM --> EKS
    end

    subgraph Azure ["Azure Stack"]
        RG["Resource Group"]
        VNET["VNet + Subnets"]
        ACR["ACR Registry"]
        AKS["AKS Cluster"]
        RG --> VNET --> AKS
        RG --> ACR --> AKS
    end

    subgraph K8s ["Kubernetes Manifests"]
        NS["Namespace"]
        Deploy["Deployment"]
        SVC["Service"]
        HPA["HPA"]
        PDB["PodDisruptionBudget"]
        NP["NetworkPolicy"]
        SA["ServiceAccount"]
    end

    TFMain --> AWS
    TFMain --> Azure
    AWS --> K8s
    Azure --> K8s
        

Directory Structure

text
aws/
  cloudformation/
    ecr.yaml              # ECR repository
    vpc.yaml              # VPC, subnets, NAT
    eks.yaml              # EKS cluster + node group
azure/
  arm/
    acr.json              # Container registry
    vnet.json             # Virtual network
    aks.json              # AKS cluster
terraform/
  main.tf                 # Root module
  variables.tf            # Input variables
  outputs.tf              # Outputs
  providers.tf            # Provider config
  environments/
    dev.tfvars            # Dev overrides
    staging.tfvars        # Staging overrides
    prod.tfvars           # Production config
  modules/
    networking/           # VPC/VNet
    eks/                  # AWS EKS
    aks/                  # Azure AKS
k8s/
  base/
    namespace.yaml
    deployment.yaml
    service.yaml
    hpa.yaml
    pdb.yaml
    networkpolicy.yaml
    serviceaccount.yaml
    configmap.yaml
  overlays/
    dev/
    staging/
    prod/

Development Workflow

bash
npm run dev          # Run server with tsx (auto-reload)
npm run cli          # Interactive REPL for testing
npm run smoke        # Smoke test -- every tool and resource
npm test             # Unit tests (vitest)
npm run test:watch   # Watch mode
npm run lint         # Type check (tsc --noEmit)
npm run build        # Compile TypeScript to dist/

CLI REPL

The CLI features a color-coded interface with categorized tool display, boxed banners, and timing indicators for each tool call.

text
git-intel> tools                          # List all 12 tools by category
git-intel> resources                      # List all resources
git-intel> call hotspots {"days": 60}     # Call a tool
git-intel> call code_age {"sort":"oldest"} # Code archaeology
git-intel> call commit_patterns           # Work patterns
git-intel> call branch_risk               # Branch analysis
git-intel> read git://repo/summary        # Read a resource
git-intel> help                           # Show all commands and examples
git-intel> exit

Testing

graph LR
    Unit["Unit Tests<br/>vitest"] --> Scoring["scoring.ts"]
    Unit --> Parsing["parser.ts"]
    Unit --> Formatting["formatting.ts"]
    Smoke["Smoke Test"] --> AllTools["All 12 tools"]
    Smoke --> AllResources["All 2 resources"]
    CLI["CLI REPL"] --> AdHoc["Ad-hoc testing"]
        
  • Unit tests (npm test): Vitest tests for scoring, parsing, formatting
  • Smoke test (npm run smoke): Connects a real MCP client, calls every tool and resource
  • CLI REPL (npm run cli): Interactive testing during development

Design Decisions

Formatted Text Output (not JSON)

Tools return pre-formatted text with markdown tables, score bars, and interpretation sections. AI clients can present output directly without additional formatting. JSON would require the AI to format it, adding latency and errors.

Per-Tool Git Commands (not shared cache)

Each tool makes its own git calls. Tools need different git output formats. Caching would add complexity and memory pressure. Git's pack format makes re-reading fast.

Zod for Input Validation

The MCP SDK uses Zod for schema definition, providing runtime validation and TypeScript type inference. Every parameter has a default value, so tools work with zero arguments.

Coupling Uses min() Denominator

coupling = shared / min(commitsA, commitsB) captures the "B depends on A" relationship. If B changed 5 times and always with A, coupling is 1.0 even if A changes independently.

Knowledge Score Weights Recency at 40%

30% volume + 30% frequency + 40% recency. Deliberately over-weights recency because code understanding decays. Recent contributors know the current state better.

Project Structure

text
src/
  index.ts              Entry point, server setup, tool/resource registration
  cli.ts                Interactive REPL for testing
  smoke-test.ts         Automated smoke test
  git/
    executor.ts         Safe git command runner (execFile, timeouts, env)
    parser.ts           Git output parsers (log, numstat, conventional commits)
    repo.ts             Repo validation, path/ref sanitization
  tools/
    hotspots.ts         Change frequency analysis
    churn.ts            Code churn (additions vs deletions)
    coupling.ts         Temporal coupling detection
    knowledge-map.ts    Knowledge scoring per author
    complexity.ts       Complexity trend over time
    risk.ts             Multi-factor risk assessment
    release-notes.ts    Changelog from conventional commits
    contributors.ts     Contributor analytics and collaboration
    file-history.ts     Single-file commit history with rename tracking
    code-age.ts         File staleness and age map analysis
    commit-patterns.ts  Work pattern analytics (day, time, size, velocity)
    branch-risk.ts      Branch staleness and divergence analysis
  resources/
    summary.ts          Repository snapshot resource
    activity.ts         Recent commit activity feed
  util/
    scoring.ts          Normalization, recency decay, coupling, risk scoring
    formatting.ts       Tables, score bars, text output helpers
    colors.ts           ANSI color utilities for CLI output