🤖 Agentic Multi‑Stage Bot

A Research & Outreach Agent That Thinks, Plans, and Acts

A full-fledged, multi-stage agentic chatbot with autonomous reasoning, tool chaining, and persistent memory. Built on LangGraph, FastAPI, and ChromaDB for production-ready deployments.

🐍 Python 3.10+ ⚑ FastAPI πŸ”— LangGraph 🦜 LangChain 🧠 OpenAI/Anthropic πŸ” ChromaDB πŸ’Ύ SQLite πŸ”„ SSE Streaming 🐳 Docker Ready ☸️ Kubernetes πŸ” Production Ready πŸ“– Open Source

✨ Key Features

Enterprise-grade agentic system with multi-stage reasoning, autonomous tool use, and production-ready infrastructure

🧠

Multi-Stage Reasoning

LangGraph-powered state machine with a plan → decide → act → reflect → finalize workflow. A single atomic action per step keeps traces deterministic.

🔧

Autonomous Tool Use

Web search, URL fetch, calculator, file write, email drafting, and KB search/add. Tools chain automatically based on agent decisions.

💾

Dual Memory Systems

SQLite for conversation history and feedback. ChromaDB vector store for RAG and persistent knowledge with semantic search.

🌐

Web Experience Layer

FastAPI backend with SSE streaming. Zero-build Vue.js UI with real-time chat, KB ingestion, and feedback collection.

🔒

Production Security

Rate limiting, rotating logs, secrets management, and audit trails. Docker containerization with Kubernetes manifests included.

🧪

Batteries Included

Comprehensive test suite (pytest), code quality tools (Ruff), CI/CD pipelines, and extensive documentation for rapid development.

📱

Social Media Automation

AI-powered content generation for Twitter, LinkedIn, Instagram, and Facebook. Smart scheduling, analytics, and multi-platform campaigns.

🚀

Advanced Deployments

Blue/green and canary deployments. GitOps with ArgoCD/Flux. Automated health checks and rollback capabilities.

🔌

Extensible Architecture

7-layer modular design. Add new tools, nodes, or agent profiles without breaking core functionality. Client SDKs for Python and TypeScript.

🏗️ 7-Layer Architecture

Clear separation of concerns with well-defined interfaces between layers

1️⃣ Experience Layer
FastAPI + SSE streaming → Web UI, CLI, REST API endpoints
2️⃣ Discovery Layer
WebSearch (DuckDuckGo), WebFetch (URL extraction), KB semantic search (RAG)
3️⃣ Agent Composition
Agent profiles with persona, objectives, capabilities, and safety constraints
4️⃣ Reasoning & Planning
LangGraph state machine: plan → decide → act → reflect → finalize
5️⃣ Tool & API Layer
Calculator, FileWrite, Emailer, Web tools, KB add/search with structured I/O
6️⃣ Memory & Feedback
SQLite (turns, feedback), ChromaDB (vector KB), Citations store
7️⃣ Infrastructure
Logging, rate limiting, config management, packaging, observability hooks

Each layer has a specific role and communicates through well-defined interfaces, allowing for independent evolution while maintaining clear contracts.
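
As an illustration of what such a contract can look like, here is a minimal Python sketch of hypothetical layer interfaces; these Protocols are purely illustrative and are not the repository's actual classes.

python
# Illustrative layer contracts only; the real project defines its own interfaces.
from typing import Any, Protocol


class Tool(Protocol):
    """Tool & API layer: an atomic capability with structured input/output."""
    name: str

    def run(self, payload: dict[str, Any]) -> str: ...


class Memory(Protocol):
    """Memory & Feedback layer: persistence the reasoning layer relies on."""

    def save_turn(self, chat_id: str, role: str, text: str) -> None: ...
    def search_kb(self, query: str, k: int = 4) -> list[str]: ...


class Agent(Protocol):
    """Reasoning & Planning layer: the surface the experience layer calls into."""

    def run(self, chat_id: str, message: str) -> str: ...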

🔄 Agent Graph (LangGraph)

Typed state machine driving LLM and tool orchestration, with a single atomic action per step

mermaid
flowchart LR
    subgraph EXP[Experience Layer]
        IN[User prompt]
        RL{Rate limit}
        STREAM[SSE stream]
        FB[Feedback API]
    end
    subgraph MEM[Memory Layer]
        SQL[(SQLite)]
        VEC[(ChromaDB)]
        CIT[Citations]
    end
    subgraph GRAPH[Reasoning & Planning]
        START((START))
        PLAN[PLAN]
        DECIDE{DECIDE}
        ACT[ACT]
        TOOLS((ToolNode))
        REFLECT[REFLECT]
        FINALIZE([FINALIZE])
    end
    subgraph TOOLS_LAYER[Tool Layer]
        TSEARCH[web_search]
        TFETCH[web_fetch]
        TKBS[kb_search]
        TKBADD[kb_add]
        TCALC[calculator]
        TWRITE[file_write]
        TEMAIL[emailer]
    end
    IN --> RL
    RL -->|ok| START
    RL -->|429| HALT[Backoff]
    START --> PLAN
    PLAN --> DECIDE
    DECIDE -->|search/fetch/kb/calc/write/email| ACT
    DECIDE -->|finalize| FINALIZE
    ACT --> TOOLS
    TOOLS --> REFLECT
    REFLECT -->|done| FINALIZE
    REFLECT -->|continue| DECIDE
    TOOLS --> TSEARCH & TFETCH & TKBS & TKBADD & TCALC & TWRITE & TEMAIL
    PLAN -.-> VEC
    FINALIZE --> SQL
    STREAM --> FB

Key Nodes

PLAN

Creates 3-6 step action plan with KB context pre-retrieval

DECIDE

Selects next atomic action: search, fetch, calculate, write, etc.

ACT

Binds LLM to tool schema and forces single precise call

TOOLS

Executes tool calls via LangGraph ToolNode

REFLECT

Emits BRIEFING (final) or NEXT action token

FINALIZE

Completes execution and persists results
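
To make the node and edge wiring concrete, here is a minimal LangGraph sketch of the graph above. The node bodies, routing lambdas, and placeholder web_search tool are stand-ins; the repository's actual node logic, tool bindings, and module names may differ.

python
# A minimal wiring sketch; node bodies and routers are placeholders.
from typing import Annotated, TypedDict

from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode


class AgentState(TypedDict):
    messages: Annotated[list, add_messages]  # running LLM/tool message history
    plan: list[str]                          # action plan produced by PLAN
    briefing: str                            # final answer assembled by FINALIZE


@tool
def web_search(query: str) -> str:
    """Placeholder tool; the real graph binds DuckDuckGo, KB search, etc."""
    return f"results for: {query}"


def plan_node(state: AgentState) -> dict:
    return {"plan": ["research the target", "summarize findings"]}

def decide_node(state: AgentState) -> dict:
    return {"plan": state["plan"]}   # real node: LLM picks the next atomic action

def act_node(state: AgentState) -> dict:
    return {"messages": []}          # real node: bind tool schemas, force one call

def reflect_node(state: AgentState) -> dict:
    return {"messages": []}          # real node: emit BRIEFING (final) or NEXT

def finalize_node(state: AgentState) -> dict:
    return {"briefing": "stub briefing"}


builder = StateGraph(AgentState)
builder.add_node("plan", plan_node)
builder.add_node("decide", decide_node)
builder.add_node("act", act_node)
builder.add_node("tools", ToolNode([web_search]))
builder.add_node("reflect", reflect_node)
builder.add_node("finalize", finalize_node)

builder.add_edge(START, "plan")
builder.add_edge("plan", "decide")
# Stub routers; the real ones inspect the LLM's chosen action / reflection token.
builder.add_conditional_edges("decide", lambda s: "finalize", {"act": "act", "finalize": "finalize"})
builder.add_edge("act", "tools")
builder.add_edge("tools", "reflect")
builder.add_conditional_edges("reflect", lambda s: "done", {"continue": "decide", "done": "finalize"})
builder.add_edge("finalize", END)

graph = builder.compile()
print(graph.invoke({"messages": [], "plan": [], "briefing": ""})["briefing"])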

🎁 Bonus: Additional Pipelines

Two powerful companion pipelines for RAG and autonomous coding

📚

Agentic RAG Pipeline

Full agentic RAG with a multi-step plan → retrieve → reflect → verify → answer loop. Includes memory, tool use, quality checks, and dual retrieval (vector + web); a retrieval sketch follows the diagram.

mermaid
flowchart TD
    U[User] --> IR[Intent Router]
    IR --> PL[Planner]
    PL --> RP[Retrieval Planner]
    RP --> R1[Vector Retriever]
    RP --> R2[Web Retriever]
    R1 --> W[Writer]
    R2 --> W
    W --> C[Critic]
    C -->|follow-ups| RP
    W --> G[Guardrails]
    G --> A[Answer + Evidence]
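
As a rough sketch of the dual-retrieval step (vector + web), the snippet below queries a persistent ChromaDB collection and DuckDuckGo; the collection name, the duckduckgo_search package choice, and the result handling are assumptions rather than the pipeline's actual code.

python
# Assumptions: a persistent Chroma collection named "kb" under .chroma, and the
# duckduckgo_search package for web results; the real retrievers may differ.
import chromadb
from duckduckgo_search import DDGS


def retrieve(question: str, k: int = 4) -> list[str]:
    # Vector retrieval from the persistent ChromaDB knowledge base.
    collection = chromadb.PersistentClient(path=".chroma").get_or_create_collection("kb")
    vector_hits = collection.query(query_texts=[question], n_results=k)["documents"][0]

    # Web retrieval as the second evidence source.
    with DDGS() as ddgs:
        web_hits = [r["body"] for r in ddgs.text(question, max_results=k)]

    return vector_hits + web_hits  # evidence handed to the Writer, then the Critic
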
💻

Agentic Coding Pipeline

Autonomous coding assistant that drafts patches, formats code, synthesizes tests, and iterates until quality gates pass. Multi-LLM coordination (GPT and Claude coders, with Gemini QA); a gate-loop sketch follows the diagram.

mermaid
flowchart TD
    T[Task] --> OR[Pipeline]
    OR --> C1[GPT Coder]
    OR --> C2[Claude Coder]
    C1 --> OR
    C2 --> OR
    OR --> F[Ruff Format]
    F --> OR
    OR --> TS[Test + Pytest]
    TS --> OR
    OR --> QA[Gemini QA]
    QA --> OUT[Patch]
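
The quality-gate loop can be pictured as below: format and lint with Ruff, run pytest, and iterate until the gates pass. generate_patch is a placeholder for the GPT/Claude coder step, and the real pipeline's orchestration is more involved than this sketch.

python
# Illustrative gate loop; assumes ruff and pytest are installed on PATH.
import subprocess


def generate_patch(task: str) -> None:
    """Placeholder for the GPT/Claude coder step that drafts or revises code."""


def gates_pass(path: str = ".") -> bool:
    """Run the formatting, lint, and test gates; True only if all succeed."""
    fmt = subprocess.run(["ruff", "format", path], capture_output=True)
    lint = subprocess.run(["ruff", "check", path, "--fix"], capture_output=True)
    tests = subprocess.run(["pytest", "-q"], capture_output=True, cwd=path)
    return fmt.returncode == lint.returncode == tests.returncode == 0


def run_pipeline(task: str, max_rounds: int = 3) -> None:
    for _ in range(max_rounds):
        generate_patch(task)   # coders draft or revise the patch
        if gates_pass():
            break              # gates passed; hand off to the QA reviewer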

🚀 Advanced Deployment Strategies

Production-grade deployment with zero-downtime, progressive rollouts, and GitOps

| Strategy | Use Case | Rollback Speed | Traffic Control | Automation |
| --- | --- | --- | --- | --- |
| Blue/Green | Major releases, DB migrations | ⚡ Instant | All-or-nothing | Script-based |
| Canary (Manual) | Gradual validation | ⚡ Fast | Progressive | Interactive |
| Canary (Flagger) | Automated delivery | ⚡⚡ Very Fast | Progressive | Fully automated |
| GitOps (ArgoCD) | Declarative, auditable | ⚡ Fast | Configurable | Git-driven |
| GitOps (Flux) | Image-driven, automated | ⚡ Fast | Configurable | Fully automated |

✅ Zero-Downtime

Traffic switching with no service interruption

✅ Health Validation

Automated checks before traffic switch

✅ Progressive Rollouts

Gradual traffic increase for risk mitigation

✅ Quick Rollback

One-command rollback procedures

✅ Infrastructure as Code

Terraform modules for AWS resources

✅ Auto-Monitoring

Continuous health checks with rollback

⚑ Quick Start

Get up and running in minutes with our streamlined setup process

bash
# 1) Create virtual environment & install dependencies
python -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r requirements.txt

# 2) Configure API keys
cp .env.example .env
# Edit .env with your OPENAI_API_KEY or ANTHROPIC_API_KEY

# 3) Ingest seed knowledge into vector store
make ingest

# 4) Run the server
make run

# Open http://localhost:8000 and start chatting!
# Try: "Build a competitive briefing on ACME Robotics"

CLI Demo

bash
source .venv/bin/activate
python -m agentic_ai.cli demo "Summarize top AMR vendors with citations"

7 Architecture Layers
15+ Built-in Tools
100% Type Safe
3 Deployment Strategies

🔧 Tools & Capabilities

Atomic tools designed for specific purposes, chainable for complex tasks

| Tool | Purpose | Input | Output |
| --- | --- | --- | --- |
| web_search | General discovery via DuckDuckGo | Natural language query | JSON list of results |
| web_fetch | Extract readable text from a URL | URL string | Clean text content |
| kb_search | Vector KB semantic search | Natural language query | JSON: {id, text, metadata} |
| kb_add | Add document to knowledge base | JSON: {id, text, metadata} | "ok" confirmation |
| calculator | Safe math via Python math module | Math expression | Result string |
| file_write | Persist artifacts to disk | JSON: {path, content} | Absolute file path |
| emailer | Draft/queue email as .eml | JSON: {to, subject, body} | .eml file path |
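
As a sketch of how one of these atomic tools could expose a structured schema to the ACT node, here is a hypothetical calculator built with LangChain's @tool decorator; the repository's actual tool implementations may differ.

python
# Illustrative only; restricted to names from Python's math module.
import math

from langchain_core.tools import tool


@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression using only names from Python's math module."""
    allowed = {name: getattr(math, name) for name in dir(math) if not name.startswith("_")}
    return str(eval(expression, {"__builtins__": {}}, allowed))


# The generated schema (name, description, typed args) is what the ACT node binds to.
print(calculator.invoke({"expression": "sqrt(2) * pi"}))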

📖 HTTP API Reference

RESTful API with SSE streaming for real-time interactions

GET /api/new_chat

Create a new chat session

Returns: {"chat_id": "uuid"}

POST /api/chat

Send message with SSE streaming

Body: {"chat_id", "message"}

POST /api/ingest

Add document to KB

Body: {"id", "text", "metadata"}

POST /api/ingest_url

Ingest content from URL

Body: {"url", "metadata"}

POST /api/ingest_file

Upload file to KB

Form: file, id, tags

POST /api/feedback

Submit rating & comment

Body: {"chat_id", "rating"}
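
A hedged example of driving these endpoints from Python with httpx is shown below; the paths and body fields follow the reference above, while the SSE payload parsing is an assumption about the stream format.

python
# Assumes the server from the Quick Start is running on localhost:8000.
import httpx

with httpx.Client(base_url="http://localhost:8000", timeout=None) as client:
    # 1) Create a session.
    chat_id = client.get("/api/new_chat").json()["chat_id"]

    # 2) Stream the agent's answer over SSE (data-line framing is an assumption).
    body = {"chat_id": chat_id, "message": "Brief ACME Robotics"}
    with client.stream("POST", "/api/chat", json=body) as resp:
        for line in resp.iter_lines():
            if line.startswith("data:"):
                print(line[len("data:"):].strip())

    # 3) Add a document to the knowledge base.
    client.post(
        "/api/ingest",
        json={"id": "doc-1", "text": "ACME builds warehouse robots.", "metadata": {"tags": ["kb"]}},
    )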

🔌 Client SDKs

TypeScript and Python SDKs for seamless integration

TypeScript/Node.js

typescript
import { AgenticAIClient } from "./clients/ts";

const client = new AgenticAIClient({
  baseUrl: "http://localhost:8000"
});

// Start chat
const { chat_id } = await client.newChat();
await client.chatStream({
  chat_id,
  message: "Brief ACME Robotics",
  onToken: t => process.stdout.write(t)
});

// Ingest content
await client.ingest("text", { tags: ["kb"] });
await client.ingestUrl("https://example.com");

Python (Async)

python
from clients.python import AgenticAIClient

async with AgenticAIClient("http://localhost:8000") as c:
    # Start chat
    meta = await c.new_chat()
    await c.chat_stream(
        "Brief ACME Robotics",
        chat_id=meta["chat_id"],
        on_token=lambda t: print(t, end="")
    )

    # Ingest content
    await c.ingest("text", {"tags": ["kb"]})
    await c.ingest_url("https://example.com")

⚙️ Configuration

Pydantic-based settings with .env support for easy configuration

bash
# Model Configuration
MODEL_PROVIDER=openai          # openai | anthropic
OPENAI_API_KEY=sk-...
OPENAI_MODEL_CHAT=gpt-4o-mini
OPENAI_MODEL_EMBED=text-embedding-3-small

ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL_CHAT=claude-3-5-sonnet-latest

# Storage
CHROMA_DIR=.chroma
SQLITE_PATH=.sqlite/agent.db

# Server
APP_HOST=0.0.0.0
APP_PORT=8000

# Rate Limiting
RATE_LIMIT_TOKENS=100
RATE_LIMIT_REFILL_RATE=10
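
A minimal sketch of how these variables might be loaded with pydantic-settings is shown below; the field names mirror the .env keys above (matched case-insensitively), and the defaults are illustrative rather than the project's actual Settings class.

python
# Field names mirror the .env keys above; defaults here are illustrative.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(
        env_file=".env",
        extra="ignore",
        protected_namespaces=(),  # allow the model_provider field name
    )

    model_provider: str = "openai"  # openai | anthropic
    openai_api_key: str = ""
    openai_model_chat: str = "gpt-4o-mini"
    openai_model_embed: str = "text-embedding-3-small"
    anthropic_api_key: str = ""
    anthropic_model_chat: str = "claude-3-5-sonnet-latest"
    chroma_dir: str = ".chroma"
    sqlite_path: str = ".sqlite/agent.db"
    app_host: str = "0.0.0.0"
    app_port: int = 8000
    rate_limit_tokens: int = 100
    rate_limit_refill_rate: int = 10


settings = Settings()  # environment variables and .env values override the defaults
print(settings.app_port)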