End-to-end reference for the automated daily pipeline and on-demand deep research: scheduler triggers, multi-engine CLI execution (Claude, Codex, Gemini, Copilot), multi-agent research, Notion publishing, Obsidian vault integration with graph visualization, Teams and Slack delivery, custom topic briefs, and maintenance workflows.
From scheduler trigger to Notion, Obsidian vault, and optional multi-channel webhook delivery (Teams + Slack).
The system operates across five primary layers: a platform-native scheduler (launchd or Task Scheduler), a scripted entry point, an AI CLI engine registry (Claude, Codex, Gemini, or Copilot), the AI execution layer, and the output destinations. This modular design ensures that the core prompting and research logic remains decoupled from the specific OS or AI model being used.
```mermaid
flowchart LR
subgraph Schedulers
A1["macOS launchd"]
A2["Windows Task Scheduler"]
A3["Manual / make run"]
end
A1 --> B["briefing.sh / .ps1"]
A2 --> B
A3 --> B
B --> C["prompt.md"]
C --> D["AI Engine (Claude/Codex/Gemini/Copilot)"]
D --> E["WebSearch (9 topics)"]
D --> F["Notion MCP"]
F --> G["Notion Page"]
D --> H["card.json"]
D --> OBS["obsidian.md"]
B --> I{"Teams webhook?"}
I -->|Yes| J["notify-teams"]
J --> K["Teams Channel"]
B --> L{"Slack webhook?"}
L -->|Yes| M["notify-slack"]
M --> N["Slack Channel"]
B --> OC{"Obsidian vault?"}
OC -->|Yes| OP["publish-obsidian"]
OP --> OV["Obsidian Vault +\nGraph View"]
```
```mermaid
flowchart TD
subgraph Triggers
A1["macOS launchd"]
A2["Windows Task Scheduler"]
A3["Manual make run"]
end
A1 --> B1["briefing.sh"]
A2 --> B2["briefing.ps1"]
A3 --> B1
A3 --> B2
B1 --> C["prompt.md"]
B2 --> C
C --> D["AI Engine"]
D --> E["WebSearch"]
D --> F["Notion MCP"]
F --> G["Notion briefing page"]
D --> CS["logs/covered-stories.txt"]
D --> H["logs/YYYY-MM-DD.log"]
D --> I["logs/YYYY-MM-DD-card.json"]
D --> OI["logs/YYYY-MM-DD-obsidian.md"]
B1 --> JT{"AI_BRIEFING_TEAMS_WEBHOOK set?"}
B2 --> JT
JT -->|"No"| KT["Skip Teams"]
JT -->|"Yes"| LT["notify-teams script"]
LT --> MT{"card JSON exists and is valid?"}
MT -->|"Yes"| NT["POST Adaptive Card to Teams webhooks"]
MT -->|"No"| OT["Teams notify failed"]
B1 --> JS{"AI_BRIEFING_SLACK_WEBHOOK set?"}
B2 --> JS
JS -->|"No"| KS["Skip Slack"]
JS -->|"Yes"| LS["notify-slack script"]
LS --> MS{"card JSON exists and conversion succeeds?"}
MS -->|"Yes"| NS["Convert to Block Kit and POST to Slack webhooks"]
MS -->|"No"| OS["Slack notify failed"]
B1 --> JO{"AI_BRIEFING_OBSIDIAN_VAULT set?"}
B2 --> JO
JO -->|"No"| KO["Skip Obsidian"]
JO -->|"Yes"| LO["publish-obsidian script"]
LO --> MO{"obsidian.md exists and vault writable?"}
MO -->|"Yes"| NO["Copy to vault + create topic stubs"]
MO -->|"No"| OO["Obsidian publish skipped"]
```
```mermaid
sequenceDiagram
participant S as Scheduler/Manual
participant E as Entry Script
participant C as AI Engine
participant W as WebSearch
participant N as Notion MCP
participant T as notify-teams
participant TW as Teams Webhook(s)
participant SL as notify-slack
participant SW as Slack Webhook(s)
participant OB as publish-obsidian
participant OV as Obsidian Vault
S->>E: Start briefing script
E->>E: Setup date, logs, env
E->>E: Refresh webhook env (Windows)
E->>C: invoke selected engine (AI_BRIEFING_CLI or daily fallback)
C->>C: Step 0a read logs/covered-stories.txt
C->>N: Step 0b fetch most recent briefing
N-->>C: prior coverage
loop 9 topic areas
C->>W: search recent news
W-->>C: results
end
C->>N: Step 3 create today's page
N-->>C: page URL
C->>C: Step 5 append to covered-stories.txt
C-->>E: exit 0 + output
E->>E: append logs/YYYY-MM-DD.log
opt Teams webhook configured
E->>T: call notify-teams --all
T->>T: validate card.json
T->>TW: POST payload
TW-->>T: 2xx
end
opt Slack webhook configured
E->>SL: call notify-slack --all
SL->>SL: convert card via teams-to-slack.py
SL->>SW: POST Block Kit payload
SW-->>SL: 2xx
end
opt Obsidian vault configured
E->>OB: call publish-obsidian
OB->>OB: extract [[wikilinks]]
OB->>OV: copy briefing + create topic stubs
OV-->>OB: success
end
```
What each stage expects, produces, and where failures usually happen.
The end-to-end flow is fully headless: from the moment the scheduler fires, the entry scripts handle environment sanitization, log rotation, and secure API execution. When no engine is set explicitly, the daily briefing falls back through the chain `claude → codex → gemini → copilot`. The custom brief supports the same engines but resolves by explicit selection: `--cli` / `-Cli` first, then `AI_BRIEFING_CLI`, then the `claude` default (no fallback chain).
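The daily fallback resolution can be sketched as below. This is an illustrative sketch, not the real entry-script code: it assumes each engine exposes a binary with the same name on `PATH`.

```shell
# Resolve the engine: an explicit AI_BRIEFING_CLI wins; otherwise walk the
# daily fallback chain and take the first binary found on PATH.
resolve_engine() {
  if [ -n "${AI_BRIEFING_CLI:-}" ]; then
    echo "$AI_BRIEFING_CLI"
    return 0
  fi
  for engine in claude codex gemini copilot; do
    if command -v "$engine" >/dev/null 2>&1; then
      echo "$engine"
      return 0
    fi
  done
  return 1  # no engine installed
}

# An explicit selection always short-circuits the chain:
AI_BRIEFING_CLI=codex resolve_engine   # prints "codex"
```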
| Stage | Input | Output | Primary File/Tool |
|---|---|---|---|
| Trigger | 08:00 schedule or manual command | Script execution starts | com.ainews.briefing.plist, install-task.ps1, Makefile |
| Bootstrap | Date + environment | Prompt text + runtime context | briefing.sh, briefing.ps1 |
| Dedup | logs/covered-stories.txt + latest Notion page | Exclusion list for this run | Step 0a (file) + Step 0b (Notion MCP) |
| AI Run | prompt.md | Notion briefing + card.json + obsidian.md + covered-stories update | AI engine (Claude/Codex/Gemini/Copilot), WebSearch, Notion MCP |
| Teams Notify | logs/YYYY-MM-DD-card.json + AI_BRIEFING_TEAMS_WEBHOOK | Adaptive Card post(s) | notify-teams.sh, notify-teams.ps1 |
| Slack Notify | logs/YYYY-MM-DD-card.json + AI_BRIEFING_SLACK_WEBHOOK | Block Kit post(s) | notify-slack.sh, notify-slack.ps1, teams-to-slack.py |
| Obsidian Publish | logs/YYYY-MM-DD-obsidian.md + AI_BRIEFING_OBSIDIAN_VAULT | Vault briefing page + topic stub pages | publish-obsidian.sh, publish-obsidian.ps1 |
| Multi-URL Routing | Semicolon-separated webhook URLs | First URL by default, all URLs with --all/-All | Teams + Slack notify scripts |
| Cleanup | Historic log files | Old *.log removed | Entry script cleanup step |
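The multi-URL routing row above reduces to a split on semicolons plus a first-vs-all choice. A minimal sketch with hypothetical URLs (the real notify scripts add validation and POST logic):

```shell
# Hypothetical URLs for illustration only.
AI_BRIEFING_TEAMS_WEBHOOK="https://example.com/hook-one;https://example.com/hook-two"

# Split the semicolon-separated list into an array.
IFS=';' read -r -a urls <<< "$AI_BRIEFING_TEAMS_WEBHOOK"

# First URL by default; every URL when --all is passed.
if [ "${1:-}" = "--all" ]; then
  targets=("${urls[@]}")
else
  targets=("${urls[0]}")
fi

for url in "${targets[@]}"; do
  echo "would POST card to: $url"
done
```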
Resolved: `prompt.md` Step 4 generates `logs/YYYY-MM-DD-card.json` directly. Step 5 generates `logs/YYYY-MM-DD-obsidian.md` with [[wikilinks]]. Teams notifier posts the card as-is, Slack notifier converts it to Block Kit, and Obsidian publisher copies the markdown to the vault with topic stub pages.
```mermaid
flowchart TD
A[Entry script exits success] --> B{card.json exists?}
B -->|No| B1[Notify paths fail fast]
B -->|Yes| T{Teams webhook env set?}
B -->|Yes| S{Slack webhook env set?}
T -->|No| T0[Skip Teams]
T -->|Yes| T1[notify-teams]
T1 --> T2{JSON valid + URL list valid?}
T2 -->|No| T3[Teams notify failed]
T2 -->|Yes| T4[POST to first URL or all URLs]
S -->|No| S0[Skip Slack]
S -->|Yes| S1[notify-slack]
S1 --> S2{teams-to-slack conversion OK?}
S2 -->|No| S3[Slack notify failed]
S2 -->|Yes| S4[POST Block Kit to first URL or all URLs]
A --> OB{obsidian.md exists?}
OB -->|No| OB0[Skip Obsidian]
OB -->|Yes| OC{Vault env set + writable?}
OC -->|No| OC0[Obsidian publish skipped]
OC -->|Yes| OC1[publish-obsidian]
OC1 --> OC2[Copy to AI-News-Briefings/]
OC1 --> OC3["Extract [[wikilinks]]"]
OC3 --> OC4[Create Topics/ stub pages]
```
Our automated briefing engine targets ultra-short-horizon AI news (strictly the past 24 hours). By segmenting the web search across these 9 highly specific domains, we bypass algorithmic noise and surface the exact technical, financial, and strategic updates you actually care about.
Real-time monitoring of the Anthropic ecosystem. This covers unannounced model deprecations, API rate limit changes, new system prompt capabilities, Claude Code CLI updates, and official blog announcements regarding safety benchmarks or enterprise tier features.
Comprehensive tracking of OpenAI's rapidly shifting product lines. We monitor silent API parameter updates, GPT model rollouts, OpenAI Codex integration news, ChatGPT Plus feature drops, and strategic partnership announcements (e.g., Apple, Microsoft).
The developer tooling war is moving fast. We track paradigm shifts in AI-native editors like Cursor and Windsurf, alongside major extension updates for GitHub Copilot, JetBrains AI Assistant, and the integration of local SLMs directly into Xcode and VS Code.
Beyond single-turn chat, we track the execution layer. This includes the latest advancements in LangChain, CrewAI, and Microsoft AutoGen, alongside new integrations for the Model Context Protocol (MCP), browser-automation agents, and multi-agent consensus frameworks.
Macro-level shifts from the tech giants. We monitor Google DeepMind, Meta, and xAI for major lab announcements, while simultaneously tracking NVIDIA, AMD, and Groq for semiconductor news, cluster deployments, and inference hardware breakthroughs.
The bleeding edge of the open-weights community. We track Hugging Face leaderboards, new quantization methods (GGUF, AWQ), and massive drops like Llama 3/4 iterations, Mistral architectural shifts, and DeepSeek's latest parameter-efficient milestones.
Following the money in the AI boom. We scrape news of unannounced Seed and Series A funding rounds for stealth startups, major acqui-hires (talent poaching) between leading AI labs, and massive compute-infrastructure capital expenditures.
The legal landscape defining AI's future. We track sweeping government policy changes (EU AI Act, US Executive Orders), copyright infringement lawsuits from major publishers against AI labs, and developments in cryptographic provenance and watermarking.
How AI is fundamentally altering web development. We track Vercel's v0, Next.js AI integrations, React Native copilot generation, and the overall shift toward AI-generated UIs, dynamic rendering, and automated TypeScript type-safety tooling.
Pick a topic, choose your destinations, and get a comprehensive news-focused research briefing with linked citations and dates.
```mermaid
flowchart LR
A["Topic Input\n(CLI flags or REPL)"] --> B["custom-brief.sh / .ps1"]
B --> C["prompt-custom-brief.md"]
C --> D["AI Engine"]
subgraph "Phase 1: Parallel Discovery"
D --> A1["Agent 1: Breaking News"]
D --> A2["Agent 2: Technical Analysis"]
D --> A3["Agent 3: Industry Impact"]
D --> A4["Agent 4: Trends"]
D --> A5["Agent 5: Policy"]
D --> A6["Agent 6+: Topic-specific"]
end
A1 --> DD["Phase 2: Deep Dive"]
A2 --> DD
A3 --> DD
A4 --> DD
A5 --> DD
A6 --> DD
DD --> S["Phase 3: Synthesis"]
S --> OUT["Terminal Output"]
S -->|"--notion"| N["Notion Page"]
S -->|"--teams/--slack"| CARD["Card JSON"]
S -->|"--obsidian"| OBS["Obsidian Vault +\nGraph View"]
CARD --> T["Teams"]
CARD --> SL["Slack"]
```
```mermaid
flowchart TD
subgraph "Entry Points"
SH["custom-brief.sh"]
PS["custom-brief.ps1"]
SK["/custom-brief skill"]
end
SH --> PT["prompt-custom-brief.md"]
PS --> PT
SK --> CC["AI Engine"]
PT --> CC
subgraph "Phase 1: Parallel Discovery"
CC --> A1["Agent 1: Breaking News"]
CC --> A2["Agent 2: Technical Analysis"]
CC --> A3["Agent 3: Industry Impact"]
CC --> A4["Agent 4: Trend Trajectory"]
CC --> A5["Agent 5: Policy and Ethics"]
end
subgraph "Phase 2-3: Synthesis"
A1 --> DD["Deep Dive Follow-ups"]
A2 --> DD
A3 --> DD
A4 --> DD
A5 --> DD
DD --> SYNTH["Thematic Synthesis"]
end
subgraph "Output"
SYNTH --> STDOUT["Terminal"]
SYNTH -->|"--notion"| NOTION["Notion Page"]
SYNTH -->|"--teams/--slack"| CARD["Card JSON"]
SYNTH -->|"--obsidian"| OBSMD["Obsidian Markdown"]
CARD --> NT["notify-teams"]
CARD --> NS["notify-slack"]
OBSMD --> PO["publish-obsidian"]
PO --> OV["Vault +\nTopic Stubs"]
end
```
```shell
# Full research with all destinations
./custom-brief.sh --topic "AI in healthcare" --notion --teams --slack --obsidian

# Terminal + Notion only
./custom-brief.sh -t "quantum computing" -n

# Terminal + Obsidian vault (graph-enabled)
./custom-brief.sh -t "AI coding assistants" --obsidian

# Interactive mode (prompts for topic and destinations)
./custom-brief.sh

# Pick a specific engine
./custom-brief.sh --cli codex -t "AI safety" -n

# PowerShell
.\custom-brief.ps1 -Topic "AI regulation EU" -Notion -Teams -Obsidian

# Make target
make custom-brief T="open source LLMs" NOTION=1 TEAMS=1 OBSIDIAN=1
```
| Aspect | Daily Briefing | Custom Brief |
|---|---|---|
| Trigger | Scheduled (8:00 AM daily) | On-demand |
| Scope | 9 fixed AI topics | Any user-defined topic |
| Depth | Broad scan (search per topic) | Deep (5 parallel agents + follow-ups) |
| Deduplication | Yes (covered-stories.txt) | No (standalone) |
| Notion title | YYYY-MM-DD - AI Daily Briefing | YYYY-MM-DD - Custom Brief: [Topic] |
| Engine selection | AI_BRIEFING_CLI or fallback chain | --cli/-Cli → AI_BRIEFING_CLI → claude |
| Obsidian vault file | AI-News-Briefings/YYYY-MM-DD.md | AI-News-Briefings/YYYY-MM-DD - [Topic].md |
| CLI output | Logged to file only | Printed to terminal + logged |
| Card filename | logs/YYYY-MM-DD-card.json | logs/custom-TIMESTAMP-card.json |
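The custom brief's engine-selection precedence (`--cli` flag, then `AI_BRIEFING_CLI`, then `claude`) amounts to a nested default expansion. A sketch with a hypothetical helper name, not the actual script internals:

```shell
# engine_for <flag-value> <env-value>: flag wins, then env var, then claude.
# The `:-` expansion treats an empty string the same as unset.
engine_for() {
  local flag="$1" env="$2"
  echo "${flag:-${env:-claude}}"
}

engine_for ""      ""        # -> claude  (default)
engine_for ""      "gemini"  # -> gemini  (AI_BRIEFING_CLI)
engine_for "codex" "gemini"  # -> codex   (--cli flag wins)
```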
Daily operation commands for both standard runs and historical date backfills.
Operations are standardized around a cross-platform Makefile, giving you a unified surface to install schedulers, trigger runs, inspect logs, and validate the environment. All outputs are captured in date-stamped logs with an automatic 30-day rotation policy to keep your disk clean.
```bash
# Clone + install scheduler
git clone https://github.com/hoangsonww/AI-News-Briefing
cd AI-News-Briefing
make install

# Run now
make run

# Backfill a specific date
make run D=2026-03-15

# Watch logs
make tail
```
```bash
# macOS install (launchd)
chmod +x briefing.sh
cp com.ainews.briefing.plist ~/Library/LaunchAgents/
launchctl load ~/Library/LaunchAgents/com.ainews.briefing.plist
```

```powershell
# Windows install (Task Scheduler)
.\install-task.ps1

# Windows custom schedule example
.\install-task.ps1 -Hour 7 -Minute 30
```
| Action | macOS/Linux | Windows |
|---|---|---|
| Run now | bash briefing.sh | .\briefing.ps1 |
| Backfill date | bash briefing.sh 2026-03-15 | .\briefing.ps1 -BriefingDate 2026-03-15 |
| Trigger scheduler | launchctl kickstart "gui/$(id -u)/com.ainews.briefing" | schtasks /run /tn AiNewsBriefing |
| Tail today log | tail -f logs/$(date +%Y-%m-%d).log | Get-Content .\logs\$(Get-Date -Format yyyy-MM-dd).log -Wait |
| Scheduler status | launchctl list | grep ainews | schtasks /query /tn AiNewsBriefing |
A unified suite of 10 intelligence agents built natively for Claude Code, OpenAI Codex, and Gemini CLI.
We provide ten highly specialized research tools capable of scanning GitHub trends, parsing SEC filings, summarizing ArXiv papers, tracking crypto tokenomics, and performing deep social-intelligence scraping across Reddit and X.
```mermaid
graph TD
subgraph "CLI Environments"
CC[Claude Code]
CX[OpenAI Codex]
GM[Gemini CLI]
end
subgraph "Marketplace Catalogs"
CMP[.claude-plugin/marketplace.json]
CXMP[.agents/plugins/marketplace.json]
end
subgraph "Plugin Types (10 Available)"
News[News & Media: ai-news, last30days, podcast-summarizer]
Dev[Tech & Dev: trend-spotter, paper-reader, repo-auditor]
Biz[Business: earnings, competitor-intel, startup-scout, crypto]
end
CC -->|/plugin install| CMP
CX -->|Local UI| CXMP
GM -->|extensions link| News
GM -->|extensions link| Dev
GM -->|extensions link| Biz
CMP --> News
CMP --> Dev
CMP --> Biz
CXMP --> News
CXMP --> Dev
CXMP --> Biz
```
Automates daily AI news research, Notion publishing, and Teams/Slack webhooks. Deep multi-angle research.
AI agent search engine scored by upvotes and real money. Searches Reddit, X, HN, YouTube, and Polymarket.
Analyzes GitHub trending repos, package downloads, and tech Twitter to identify emerging developer tools.
Fetches SEC filings and earnings call transcripts to synthesize an objective business briefing.
Connects to ArXiv and Semantic Scholar to explain complex methodology in plain English (ELI5).
Finds top competitors, compares feature releases, pricing changes, and customer sentiment on Reddit/G2.
Analyzes GitHub repos for security vulnerabilities, stale dependencies, code quality, and bus factor.
Extracts transcripts from YouTube/podcasts and synthesizes actionable takeaways in a scannable format.
Tracks Y Combinator, Product Hunt, and VC funding rounds to identify promising early-stage startups.
Analyzes tokenomics, whitepapers, on-chain activity, and crypto Twitter sentiment for Web3 projects.
Single interface for execution, logs, scheduler management, and validation.
| Command | Category | Description |
|---|---|---|
make run | Execution | Run briefing in foreground |
make run-bg | Execution | Run briefing in background |
make run D=YYYY-MM-DD | Execution | Backfill for a target date |
make run-scheduled | Execution | Trigger via scheduler service |
make custom-brief T="topic" | Execution | Deep-research a specific topic |
make custom-brief T="..." NOTION=1 TEAMS=1 | Execution | Research + publish to Notion/Teams |
make custom-brief T="..." OBSIDIAN=1 | Execution | Research + publish to Obsidian vault |
make custom-brief-bg T="..." | Execution | Custom brief in background |
make tail | Logs | Tail today log |
make logs | Logs | List all logs |
make log-date D=YYYY-MM-DD | Logs | Print date-specific log |
make clean-logs | Logs | Delete logs older than 30 days |
make install | Scheduler | Install scheduler for current platform |
make uninstall | Scheduler | Remove scheduler |
make status | Scheduler | Show scheduler status |
make check | Validate | Verify Claude binary path |
make validate | Validate | Validate key project files and prompt steps |
make info | Info | Show platform/model/topic summary |
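The `make clean-logs` rotation in the table above boils down to a single `find` invocation. A sketch assuming a `find` that supports `-delete` (GNU and BSD both do); the actual Makefile target may differ:

```shell
# Delete *.log files older than 30 days from the logs directory.
LOG_DIR="${LOG_DIR:-logs}"
mkdir -p "$LOG_DIR"   # no-op when the directory already exists
find "$LOG_DIR" -type f -name '*.log' -mtime +30 -print -delete
```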
Script pairs exist for both `.sh` and `.ps1` variants.
| Script | Purpose | Example |
|---|---|---|
health-check | Verify install, prompt structure, scheduler, CLI | bash scripts/health-check.sh |
dry-run | Run pipeline without writing to Notion | bash scripts/dry-run.sh --model haiku --budget 1.00 |
test-notion | Test Notion MCP connectivity | bash scripts/test-notion.sh |
log-summary | Summarize recent run outcomes | bash scripts/log-summary.sh 14 |
log-search | Search logs with optional context | bash scripts/log-search.sh --search "Anthropic" --context 2 |
cost-report | Estimate spend by period | bash scripts/cost-report.sh --month 2026-03 |
export-logs | Archive logs to tar.gz/zip | bash scripts/export-logs.sh --from 2026-03-01 --to 2026-03-09 |
backup-prompt | Version and restore prompt.md | bash scripts/backup-prompt.sh --list |
topic-edit | Add/remove/list topics in prompt.md | bash scripts/topic-edit.sh --list |
update-schedule | Adjust scheduler time | bash scripts/update-schedule.sh --hour 7 --minute 30 |
notify | Native OS status notification | bash scripts/notify.sh |
notify-teams | Validate and POST Adaptive Card JSON to Teams webhook(s) | bash scripts/notify-teams.sh --all --card-file logs/2026-03-24-card.json |
notify-slack | Convert Teams card JSON to Block Kit and POST to Slack webhook(s) | bash scripts/notify-slack.sh --all --card-file logs/2026-03-24-card.json |
publish-obsidian | Copy Obsidian markdown to vault and create [[wikilink]] topic stubs | bash scripts/publish-obsidian.sh --file logs/2026-04-11-obsidian.md |
test-obsidian | Test Obsidian vault connectivity, permissions, and .obsidian config | bash scripts/test-obsidian.sh |
uninstall | Remove scheduler and optional artifacts | bash scripts/uninstall.sh --all |
Teams and Slack reuse the same logs/YYYY-MM-DD-card.json. Obsidian uses the dedicated logs/YYYY-MM-DD-obsidian.md with [[wikilinks]] for graph visualization.
The notification pipeline uses a fan-out architecture: a single Adaptive Card JSON file is generated during the AI run, converted into platform-specific formats (such as Slack Block Kit), and dispatched to multiple webhook URLs without any intermediate service.
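The validate-then-fan-out step can be sketched as below. These are hypothetical helper names, not the real notify scripts, and the sketch assumes the card file is already a complete webhook payload:

```shell
# card_is_valid <file>: fail fast on malformed JSON before any POST.
card_is_valid() {
  python3 -c 'import json, sys; json.load(open(sys.argv[1]))' "$1" 2>/dev/null
}

# post_card <card-file> <url...>: same payload, fanned out to each URL.
post_card() {
  local card="$1"; shift
  card_is_valid "$card" || { echo "invalid card JSON: $card" >&2; return 1; }
  local url
  for url in "$@"; do
    curl -sS -o /dev/null -H 'Content-Type: application/json' \
      --data-binary "@$card" "$url" && echo "posted to $url"
  done
}
```

Called as, e.g., `post_card logs/2026-03-24-card.json https://teams-one https://teams-two`.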
```mermaid
flowchart LR
A["Claude writes logs/YYYY-MM-DD-card.json"] --> B{"Channel notifier"}
B --> C["notify-teams.sh / notify-teams.ps1"]
B --> D["notify-slack.sh / notify-slack.ps1"]
C --> E["POST Adaptive Card JSON to Teams webhooks"]
D --> F["teams-to-slack.py conversion"]
F --> G["POST Block Kit JSON to Slack webhooks"]
H["Claude writes logs/YYYY-MM-DD-obsidian.md"] --> I["publish-obsidian.sh / .ps1"]
I --> J["Copy to vault/AI-News-Briefings/"]
I --> K["Extract [[wikilinks]]"]
K --> L["Create Topics/ stub pages"]
L --> M["Obsidian Graph View\nvisualizes topic connections"]
```
```bash
# Configure multiple webhook URLs (semicolon-separated)
export AI_BRIEFING_TEAMS_WEBHOOK="https://teams-one;https://teams-two"
export AI_BRIEFING_SLACK_WEBHOOK="https://hooks.slack.com/services/T.../B.../one;https://hooks.slack.com/services/T.../B.../two"

# Default: first URL only
bash scripts/notify-teams.sh
bash scripts/notify-slack.sh

# Fan out to all configured URLs
bash scripts/notify-teams.sh --all
bash scripts/notify-slack.sh --all

# Explicit card file for replay/testing
bash scripts/notify-teams.sh --all --card-file logs/2026-03-24-card.json
bash scripts/notify-slack.sh --all --card-file logs/2026-03-24-card.json
```
```powershell
# Persist multiple webhook URLs (semicolon-separated)
[Environment]::SetEnvironmentVariable("AI_BRIEFING_TEAMS_WEBHOOK", "https://teams-one;https://teams-two", "User")
[Environment]::SetEnvironmentVariable("AI_BRIEFING_SLACK_WEBHOOK", "https://hooks.slack.com/services/T.../B.../one;https://hooks.slack.com/services/T.../B.../two", "User")

# Default: first URL only
.\scripts\notify-teams.ps1
.\scripts\notify-slack.ps1

# Fan out to all configured URLs
.\scripts\notify-teams.ps1 -All
.\scripts\notify-slack.ps1 -All

# Explicit card file for replay/testing
.\scripts\notify-teams.ps1 -All -CardFile ".\logs\2026-03-24-card.json"
.\scripts\notify-slack.ps1 -All -CardFile ".\logs\2026-03-24-card.json"
```
```bash
# Set vault path (absolute path to your Obsidian vault root)
export AI_BRIEFING_OBSIDIAN_VAULT="/Users/you/Documents/ObsidianVault"

# Test vault connectivity
bash scripts/test-obsidian.sh

# Manual publish (auto-runs after daily briefing when vault is configured)
bash scripts/publish-obsidian.sh --file logs/2026-04-11-obsidian.md
```
```powershell
# Persist vault path
[Environment]::SetEnvironmentVariable("AI_BRIEFING_OBSIDIAN_VAULT", "C:\Users\you\Documents\ObsidianVault", "User")

# Test vault connectivity
.\scripts\test-obsidian.ps1

# Manual publish
.\scripts\publish-obsidian.ps1 -File ".\logs\2026-04-11-obsidian.md"
```
Legacy note: `scripts/build-teams-card.py` remains for historical parser flow, but current runtime uses direct card generation in `prompt.md` Step 4. Step 5 generates Obsidian-formatted markdown. Slack reuses the card through `scripts/teams-to-slack.py`, and Obsidian publishing writes directly to the local vault.
Briefings are written as markdown files with [[wikilinks]] and YAML frontmatter. Obsidian's graph view automatically maps topic connections across all briefings.
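Extracting the `[[wikilinks]]` that drive the graph is essentially a grep-and-dedup pass. A sketch with a hypothetical helper name (the real publish script may differ):

```shell
# extract_wikilinks <file>: unique [[Topic]] names with brackets stripped.
extract_wikilinks() {
  grep -o '\[\[[^]]*\]\]' "$1" | sed 's/^\[\[//; s/\]\]$//' | sort -u
}

f="$(mktemp)"
printf '## [[Claude Code]]\nSee [[OpenAI]] and [[Claude Code]].\n' > "$f"
extract_wikilinks "$f"   # prints "Claude Code" and "OpenAI", deduplicated
```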
```mermaid
flowchart TD
subgraph "Claude CLI Output"
A["logs/YYYY-MM-DD-obsidian.md"]
end
subgraph "publish-obsidian.sh"
B["Copy briefing to vault"]
C["Extract all [[wikilinks]]"]
D{"Topic stub exists?"}
D -->|No| E["Create Topics/Name.md\nwith YAML frontmatter"]
D -->|Yes| F["Skip (idempotent)"]
end
subgraph "Obsidian Vault"
G["AI-News-Briefings/\n2026-04-11.md"]
H["Topics/Claude Code.md"]
I["Topics/OpenAI.md"]
J["Topics/AI Coding IDEs.md"]
K["Topics/..."]
end
A --> B --> G
A --> C --> D
G -.->|"[[Claude Code]]"| H
G -.->|"[[OpenAI]]"| I
G -.->|"[[AI Coding IDEs]]"| J
H -.->|"graph edges"| I
I -.->|"graph edges"| J
```
```mermaid
graph LR
B1["2026-04-11\nDaily Briefing"] --> CC["Claude Code"]
B1 --> OA["OpenAI"]
B1 --> ACI["AI Coding IDEs"]
B1 --> AG["Agentic AI"]
B2["2026-04-10\nDaily Briefing"] --> CC
B2 --> OA
B2 --> MS["Mistral"]
B2 --> HF["Hugging Face"]
B3["2026-04-11\nCustom: AI Regulation"] --> POL["AI Policy"]
B3 --> OA
B3 --> EU["EU AI Act"]
CC --> ANT["Anthropic"]
ACI --> CUR["Cursor"]
ACI --> WS["Windsurf"]
style CC fill:#7c3aed,stroke:#a78bfa,color:#fff
style OA fill:#7c3aed,stroke:#a78bfa,color:#fff
style ACI fill:#7c3aed,stroke:#a78bfa,color:#fff
style AG fill:#7c3aed,stroke:#a78bfa,color:#fff
style MS fill:#7c3aed,stroke:#a78bfa,color:#fff
style HF fill:#7c3aed,stroke:#a78bfa,color:#fff
style POL fill:#7c3aed,stroke:#a78bfa,color:#fff
style EU fill:#7c3aed,stroke:#a78bfa,color:#fff
style ANT fill:#7c3aed,stroke:#a78bfa,color:#fff
style CUR fill:#7c3aed,stroke:#a78bfa,color:#fff
style WS fill:#7c3aed,stroke:#a78bfa,color:#fff
style B1 fill:#1e3a5f,stroke:#3b82f6,color:#fff
style B2 fill:#1e3a5f,stroke:#3b82f6,color:#fff
style B3 fill:#1e3a5f,stroke:#3b82f6,color:#fff
```
| Feature | Implementation |
|---|---|
| Wikilinks | Section headings use ## [[Topic Name]], inline mentions wrapped in [[brackets]] |
| YAML Frontmatter | Briefings: type: briefing, date, topics list. Stubs: type: topic, created |
| Topic Hub Pages | Auto-created in Topics/ on first reference; idempotent on subsequent runs |
| Graph Edges | Each [[wikilink]] creates a bidirectional edge in Obsidian's graph view |
| Cross-Briefing Links | A "Related Topics" section at the top of each briefing links every topic, maximizing connectivity |
| Tags | YAML tags array enables tag-based filtering in Obsidian search |
| Env Variable | AI_BRIEFING_OBSIDIAN_VAULT — absolute path to vault root directory |
| Local-First | No API, no cloud sync — files written directly to disk. Obsidian Sync is optional. |
How the graph grows: Each daily run adds one briefing page linking to ~9 topic nodes. Each custom brief adds another page with its own topic nodes. Over time, Obsidian's graph view reveals which topics cluster together, which are trending, and how AI news themes evolve. Topic hub pages serve as the gravitational centers of the graph.
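Idempotent stub creation is what keeps repeated runs safe. A sketch with a hypothetical helper name, using the frontmatter fields (`type: topic`, `created`) from the table above; the real script may write more:

```shell
# create_stub <vault> <topic>: write Topics/<topic>.md once; later runs
# leave an existing stub untouched (idempotent).
create_stub() {
  local stub="$1/Topics/$2.md"
  [ -f "$stub" ] && return 0
  mkdir -p "$1/Topics"
  printf -- '---\ntype: topic\ncreated: %s\n---\n\n# %s\n' \
    "$(date +%Y-%m-%d)" "$2" > "$stub"
}
```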
Verify syntax, structure, arg handling, template substitution, card JSON, notification error paths, Obsidian vault publishing, and cross-platform portability. No external services called.
Reliability is paramount. We maintain 201 non-blocking Bash tests plus a 91-test PowerShell suite. These validate syntax, argument handling, prompt templates, and notification schemas without requiring active API connections, ensuring the pipeline remains robust across all platforms.
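"Non-blocking" here means a failed check is recorded rather than aborting the suite. A minimal sketch of such an assertion helper (hypothetical names; the real suites are more elaborate):

```shell
# check <description> <command...>: count the result instead of aborting,
# so one failed check never blocks the rest of the suite.
PASS=0; FAIL=0
check() {
  local desc="$1"; shift
  if "$@" >/dev/null 2>&1; then
    PASS=$((PASS + 1))
  else
    FAIL=$((FAIL + 1)); echo "FAIL: $desc" >&2
  fi
}

check "true is true"        true
check "missing file caught" test -f /no/such/file
echo "passed $PASS, failed $FAIL"   # prints "passed 1, failed 1"
```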
```mermaid
flowchart TD
subgraph "Bash (macOS / Linux / Git Bash)"
R["tests/run-all.sh"] --> T1["test-custom-brief.sh\n48 tests"]
R --> T2["test-daily-brief.sh\n80 tests"]
R --> T3["test-notifications.sh\n17 tests"]
R --> T4["test-portability.sh\n26 tests"]
R --> T5["test-obsidian.sh\n30 tests"]
end
subgraph "PowerShell (Windows)"
PS["tests/test-all.ps1\n91 tests"]
end
T1 --> X["Args, templates,\nprompts, skills, Obsidian flag"]
T2 --> Y["Steps, topics,\nchangelogs, scripts, Obsidian"]
T3 --> Z["Card JSON, converter,\nerror handling"]
T4 --> W["Bash 3.2, awk, date,\nANSI safety"]
T5 --> V["Vault structure, topic stubs,\nfrontmatter, idempotency"]
PS --> X
PS --> Y
PS --> Z
```
```bash
# All bash tests (macOS / Linux / Git Bash)
bash tests/run-all.sh

# Individual suites
bash tests/test-custom-brief.sh
bash tests/test-daily-brief.sh
bash tests/test-notifications.sh
bash tests/test-portability.sh
bash tests/test-obsidian.sh
```

```powershell
# PowerShell (Windows)
powershell -ExecutionPolicy Bypass -File tests\test-all.ps1
```
| Suite | Tests | Coverage |
|---|---|---|
test-custom-brief.sh | 48 | Args, template substitution, prompt structure, skill, Obsidian flag |
test-daily-brief.sh | 80 | Prompt steps, 9 topics, 8 changelog URLs, entry scripts, dedup, Obsidian integration |
test-notifications.sh | 17 | Card JSON validity, Adaptive Card structure, converter, errors |
test-portability.sh | 26 | Bash 3.2 compat, awk, date, -f not -x, ANSI safety |
test-obsidian.sh | 30 | Vault simulation, publish flow, topic stubs, frontmatter, idempotency |
test-all.ps1 | 91 | PowerShell syntax, all prompts, cards, converter, docs |
Baseline estimates with Opus model, 9 topics, no hard budget cap.
Primary docs to keep this wiki and runtime behavior aligned.
User-facing setup, topic scope, scheduler installation, Makefile commands, Teams, Slack, and Obsidian delivery overview.
Deep architecture walkthrough with component-level design decisions and operational constraints.
Detailed end-to-end execution, sequence, failure paths, artifact contracts, and alignment options.
Teams webhook setup, multi-webhook configuration, Adaptive Card behavior, and troubleshooting delivery issues.
Converter that transforms Teams Adaptive Card JSON into Slack Block Kit payloads for Slack webhook delivery.
Agent instructions for search scope, formatting requirements, Notion write, and Obsidian markdown with [[wikilinks]].
Custom topic deep research: CLI usage, agent architecture, output formats, Obsidian graph support, and comparison with daily briefing.
Full setup guide: AI CLI engines, Notion MCP, Obsidian vault, webhook configuration, scheduler install, and verification.
Slack webhook setup, Block Kit conversion, multi-webhook configuration, and troubleshooting.
Test suite documentation: architecture, per-suite coverage tables (including Obsidian suite), run commands, design principles.
Log tailing and management: live tail, read, search, summarize, export, and cleanup for daily and custom briefs.
Cross-platform operator interface for run, schedule, custom-brief, log, validation, and status workflows.