Gloo360 OS — What We're Building and Why
Author: Vince Vigil · Director, Platform Operations, Gloo360
Date: April 2026
Status: Active Development — POC in Progress
The Problem
Gloo360 manages 11+ client engagements generating $15M+ ARR. The operational data that drives staffing decisions, portfolio health assessments, and vendor management is scattered across people, spreadsheets, and tribal knowledge.
Workforce data lives in Monday.com, Rippling, compensation spreadsheets, and transition tracking documents spread across drives and people's heads.
Portfolio health lives in a manually maintained spreadsheet, an account hub maintained by two separate teams, HubSpot (AG alone has 79 CRM records), and individual CSM knowledge that doesn't transfer when people move.
Vendor management is barely tracked at all. We know Splunk costs Gloo360 $90K/month. We know clients bear Workday and Salesforce costs directly. But there is no single system that captures all vendor spend, contract terms, renewal dates, or optimization opportunities across the portfolio.
The consequence is real: leadership decisions about staffing, capacity, client health, and cost optimization rely on manual assembly by people who are already at capacity. Critically, information doesn't flow between roles — a sales rep or CSM may communicate something to a client, but that context never reaches the capability lead running the implementation. Decisions get made without the full picture. Not because people are withholding information, but because there's no system to carry it.
The Opportunity
We don't need another dashboard or another database. We need agents that do the work of consolidating information — agents that ingest data from source systems, normalize it into a shared knowledge layer, detect patterns, and produce actionable summaries on a schedule.
A local agent system (v0) already exists. Developed using Claude's Cowork environment, it manages tasks, drafts communications, tracks clients, handles compensation analysis, and coordinates across 12 specialized agents. The architecture is proven. What we haven't done yet is point agents at Gloo360's operational data and let them work.
Gloo360 OS (v1) extends this foundation to solve three core operational problems — the three data buckets leadership has identified as highest priority — as a proof of concept.
The core insight: context is the prerequisite for any effective AI workflow. Agents fail not because models lack capability, but because they lack access to the right operational context. This entire system is fundamentally about building a transparent infrastructure for gathering, normalizing, and surfacing the context that Gloo360 staff and AI agents both need to make good decisions.
The Three Data Buckets
Gloo360 OS targets three data sets that, when stitched together, enable a resource management plan that leadership has never had in one place.
1. Portfolio Health
Is each client engagement healthy? Are SLAs being met? Is scope creeping? Are we at risk of losing anyone?
This data currently lives across four separate sources: Baca's spreadsheet, Smay and Chesnut's account hub, HubSpot deal records, and individual CSM knowledge. None of them talk to each other.
The target: one document, updated weekly, showing the health status of all 11+ clients — RAG ratings, ARR, key risks, next actions — without asking anyone to assemble it. This is the first bucket because it has the most existing infrastructure. HubSpot is already connected via MCP. Client profile files already exist. The stabilization agent already tracks health for four active clients. We're extending what works, not building from scratch.
2. Workforce Intelligence
Who works on what? How many people came over in each acquihire? What's the attrition rate? What's the comp structure by engagement?
This data lives in Monday.com, Rippling, compensation spreadsheets, and people's heads. The manual assembly required to answer these questions took an 8-agent validation sweep and still had open discrepancies. That's not sustainable.
The target: automated weekly collection of workforce data per client, with monthly cross-portfolio synthesis — headcount, attrition trends, open roles, acquihire pipeline status, revenue per employee.
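To make the synthesis concrete, here is a minimal sketch of two of the metrics named above. The field names and the sample figures are illustrative, not real Gloo360 data; the real inputs would come from the normalized workforce files.

```python
from dataclasses import dataclass

@dataclass
class ClientWorkforce:
    """One client's normalized workforce snapshot (illustrative fields only)."""
    client: str
    headcount_start: int
    headcount_end: int
    departures: int
    arr: float  # annual recurring revenue for this engagement

def attrition_rate(w: ClientWorkforce) -> float:
    """Departures over average headcount for the period."""
    avg = (w.headcount_start + w.headcount_end) / 2
    return w.departures / avg if avg else 0.0

def revenue_per_employee(w: ClientWorkforce) -> float:
    return w.arr / w.headcount_end if w.headcount_end else 0.0

# Hypothetical client, for illustration only.
snapshot = ClientWorkforce("Acme", headcount_start=40, headcount_end=38,
                           departures=4, arr=1_900_000)
print(f"attrition {attrition_rate(snapshot):.1%}, "
      f"rev/employee ${revenue_per_employee(snapshot):,.0f}")
```

The monthly cross-portfolio pass would compute these per client and then compare across clients, which is exactly the kind of arithmetic that should never again require a manual validation sweep.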
3. Vendor Management
What vendor spend is Gloo360 bearing on behalf of clients versus what clients bear directly? When do contracts renew? Are we paying for duplicated capabilities across the portfolio?
This is the least automated bucket today — vendor data lives in contracts, portals, and tribal knowledge. But the value is significant: a single view of all vendor relationships, costs, and renewal calendars would transform how Gloo360 thinks about cost optimization and client pricing.
The target: per-client vendor logs with a monthly cross-portfolio synthesis — total Gloo360-borne vs. client-borne spend, a 90-day renewal calendar, and consolidation opportunities.
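The renewal calendar and the spend split are both simple queries once the vendor data is normalized. A sketch, with illustrative records (the Splunk figure comes from the problem statement above; Workday's is invented for the example):

```python
from datetime import date, timedelta

# Illustrative vendor records, mirroring what a normalized vendor log might hold.
vendors = [
    {"vendor": "Splunk",  "payer": "Gloo360", "monthly_cost": 90_000, "renewal": date(2026, 6, 15)},
    {"vendor": "Workday", "payer": "client",  "monthly_cost": 25_000, "renewal": date(2026, 11, 1)},
]

def renewal_calendar(records, today, window_days=90):
    """Vendors renewing within the window, soonest first."""
    horizon = today + timedelta(days=window_days)
    due = [r for r in records if today <= r["renewal"] <= horizon]
    return sorted(due, key=lambda r: r["renewal"])

def spend_split(records):
    """Total monthly spend keyed by who bears the cost."""
    split = {}
    for r in records:
        split[r["payer"]] = split.get(r["payer"], 0) + r["monthly_cost"]
    return split

today = date(2026, 4, 1)
print([r["vendor"] for r in renewal_calendar(vendors, today)])
print(spend_split(vendors))
```

In the POC the "records" would be parsed out of the per-client markdown vendor logs rather than hard-coded, but the synthesis logic stays this small.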
The Architecture: A Four-Stage Pipeline
Every data flow in the Gloo360 OS follows the same four-stage pattern:
INGEST → NORMALIZE → REASON → OUTPUT
Stage 1 — Ingest. A scheduled task's only job is to read from a source system and write what it finds into a markdown file. It runs on a schedule (daily, weekly, monthly) and can be triggered on demand. It doesn't analyze — it captures. Source access uses MCPs where available (HubSpot, Google Drive, Gmail, Slack) and manual seeding where MCPs don't yet exist.
Stage 2 — Normalize. The ingestion task writes to a standard file format specific to its domain — a vendor log, a staff roster, a client health snapshot. Every file follows a consistent template so downstream reasoning can parse it reliably regardless of where the data came from.
Stage 3 — Reason. An orchestration step reads across all normalized files using a two-pass pattern: the first pass extracts a concise summary per client file, the second pass reasons across those summaries to detect patterns, anomalies, gaps, and opportunities. This keeps the context window lean regardless of portfolio size.
Stage 4 — Output. The orchestration produces a summary document with findings, recommendations, and breadcrumb links back to the source files for anyone who wants to drill deeper. Every number traces to its source. No black boxes.
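The four stages can be sketched end to end in a few functions. This is a toy rendering of the pattern, not the production tasks: the source data is stubbed, and in the real system an agent does the reasoning rather than a string match. The breadcrumb convention (every finding names its source file) is the part worth noting.

```python
import tempfile
from pathlib import Path

def ingest(source_name: str, raw_records: list[dict], out_dir: Path) -> Path:
    """Stages 1-2: capture source data and write it as a normalized markdown file."""
    out = out_dir / f"{source_name}.md"
    lines = [f"# {source_name} snapshot", ""]
    for r in raw_records:
        lines.append(f"- **{r['client']}**: status={r['status']} (source: {source_name})")
    out.write_text("\n".join(lines))
    return out

def reason(files: list[Path]) -> list[str]:
    """Stage 3: an agent reads each file; a keyword check stands in for it here."""
    findings = []
    for f in files:
        for line in f.read_text().splitlines():
            if "status=red" in line:
                client = line.split("**")[1]
                findings.append(f"{client} at risk (see {f.name})")  # breadcrumb to source
    return findings

def output(findings: list[str], out_dir: Path) -> Path:
    """Stage 4: summary document with breadcrumbs back to the source files."""
    out = out_dir / "summary.md"
    body = "# Weekly summary\n" + "\n".join(f"- {x}" for x in findings or ["No risks flagged."])
    out.write_text(body)
    return out

work = Path(tempfile.mkdtemp())
snap = ingest("hubspot", [{"client": "Acme", "status": "red"},
                          {"client": "Beta", "status": "green"}], work)
print(output(reason([snap]), work).read_text())
```

Everything flows through files on disk, which is what keeps each stage independently inspectable.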
Why Markdown
The knowledge layer is built on plain markdown files. This is intentional:
- Zero infrastructure cost — runs on existing architecture, no new databases to stand up
- Human-readable — any leader can open any file and understand it without training
- Version-controlled — every change is tracked in git
- Agent-native — Claude reads and writes markdown natively, no serialization layer needed
- Source-agnostic — collectors write normalized markdown regardless of whether the data came from an MCP, a Google Sheet export, or a manual paste
Data sources are swappable without touching the reasoning layer. Today HubSpot data comes via MCP. When a Monday.com MCP becomes available, swap the collector — the orchestrator that reads the normalized files doesn't change.
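The swappability claim rests on collectors sharing one narrow interface. A sketch, with stubbed collectors standing in for the real MCP calls and email parsing:

```python
from typing import Protocol

class Collector(Protocol):
    """Any source collector returns normalized rows; the orchestrator never sees the source."""
    def collect(self) -> list[dict]: ...

class HubSpotMCPCollector:
    def collect(self) -> list[dict]:
        # The real collector would call the HubSpot MCP; stubbed for illustration.
        return [{"client": "Acme", "arr": 1_200_000, "source": "hubspot-mcp"}]

class MondayEmailCollector:
    def collect(self) -> list[dict]:
        # Today: parse the weekly Monday.com report email. When a Monday.com MCP
        # exists, replace this class; nothing downstream changes.
        return [{"client": "Acme", "headcount": 38, "source": "monday-email"}]

def run_collectors(collectors: list[Collector]) -> list[dict]:
    rows: list[dict] = []
    for c in collectors:
        rows.extend(c.collect())
    return rows
```

Swapping the email collector for a future MCP collector means changing one class, not the reasoning layer.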
The Two-Pass Orchestration Pattern
With 11+ clients and multiple data files per client, a naive "load everything" approach would consume 60–90K tokens of source data before reasoning begins. Instead:
Pass 1 — Extract. For each client, read the relevant normalized file and produce a 5–10 line structured summary.
Pass 2 — Synthesize. Read only the summaries. Detect cross-portfolio patterns, flag anomalies, produce the output document with breadcrumbs back to the full source files.
This is the same pattern the existing triage skill uses (scan → synthesize). It scales with portfolio size.
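The two passes reduce to a map step and a fold step. In this sketch, truncation stands in for the agent's pass-1 summarization, and a RAG-status check stands in for pass-2 reasoning; client names and file paths are illustrative.

```python
def pass1_extract(client_file_text: str, max_lines: int = 10) -> str:
    """Pass 1: per-client structured summary, kept to a few lines."""
    lines = [ln for ln in client_file_text.splitlines() if ln.strip()]
    return "\n".join(lines[:max_lines])

def pass2_synthesize(summaries: dict[str, str]) -> str:
    """Pass 2: reason only over summaries, with breadcrumbs to the full files."""
    flagged = [name for name, s in summaries.items() if "RAG: red" in s]
    report = [f"Clients reviewed: {len(summaries)}"]
    for name in flagged:
        report.append(f"- {name} flagged red (see clients/{name}.md)")
    return "\n".join(report)

client_files = {
    "acme": "RAG: red\nARR: $1.2M\nRisk: SLA misses",
    "beta": "RAG: green\nARR: $0.8M",
}
summaries = {c: pass1_extract(text) for c, text in client_files.items()}
print(pass2_synthesize(summaries))
```

Pass 2's context cost grows with the number of clients times a few summary lines each, not with the size of the underlying files.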
Data Sources: What's Connected Today
| Source | Status | What It Gives Us |
|---|---|---|
| HubSpot | ✅ MCP connected | Deal records, company records, revenue data, client contacts |
| Google Drive | ✅ MCP connected | Baca's spreadsheet, contract docs, comp sheets, SOWs |
| Gmail | ✅ MCP connected | Xander's Monday census emails, vendor renewal notices |
| Google Calendar | ✅ MCP connected | Client meeting cadence as a health signal |
| Slack | ✅ MCP connected | Vendor discussions, client escalations, team capacity signals |
| Monday.com | ⚠️ No MCP yet | Xander emails weekly report → captured via Gmail MCP |
| Rippling | ⚠️ No MCP yet | Manual export to Google Drive |
| Vendor portals | ❌ Not connected | Manual entry of contract terms and renewal dates |
The Google Workspace ecosystem — Drive, Gmail, Calendar — serves as the intermediary for data that doesn't have a direct MCP. Most of Gloo360's operational documents land somewhere in Workspace, which means we can reach them today.
The Scaling Path
v1 POC (now): Markdown files, Cowork-hosted scheduled tasks, three data buckets, pilot clients. Prove the architecture works and produces value before investing in infrastructure.
Phase 1 — Notion: When Bryce and Ben need direct access without going through Vince, a sync layer pushes agent-produced summaries into Notion databases. Agents continue to read and write markdown as their working format. Notion is the view layer. Human corrections in Notion flow back into the markdown layer on the next sync.
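The "corrections flow back" rule is last-writer-wins per field: a Notion edit overrides the markdown value only when it is newer than the markdown layer's last write. A minimal sketch of that rule, with the field and timestamp shapes assumed for illustration (the real sync would compare Notion's last-edited timestamps against the git history):

```python
from datetime import datetime

def merge_field(md_value: str, md_updated: datetime,
                notion_value: str, notion_edited: datetime) -> str:
    """Return the value the markdown layer should keep after a sync."""
    if notion_value != md_value and notion_edited > md_updated:
        return notion_value  # human correction in Notion is newer: pull it back
    return md_value          # otherwise the agents' markdown copy stands

# A leader downgrades a RAG rating in Notion after the agents' last write.
print(merge_field("green", datetime(2026, 4, 1),
                  "amber", datetime(2026, 4, 3)))
```

This keeps markdown authoritative by default while still honoring human overrides, which matches the "Notion is the view layer" framing above.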
Phase 2 — Forge / Organizational Scale: The patterns proven in the POC transfer directly. Collector tasks become Forge-hosted workflows. The markdown knowledge layer maps to Forge's data store. The two-pass orchestration pattern becomes Forge's reasoning architecture. The breadcrumb system becomes Forge's provenance layer. The POC validates the information architecture — not the platform.
What This Portal Is
This site — the Gloo360 OS Context Layer — is the human-readable window into the architecture. It serves two audiences:
For leadership: A place to read the strategic documents, architecture diagrams, and pipeline designs behind the Gloo360 OS. The diagrams in this portal are the same Mermaid source files that power the architecture — readable in Notion, GitHub, Slack canvases, and Cowork artifacts. Nothing gets out of sync.
For the agent system: The content in this portal is stored as plain markdown files in a Git repository. Every document here can be read directly by any agent in the system — the same way a human reads it. No API required, no parsing layer, no translation. This makes the portal a crawlable reference point for the agent network as it grows.
The architecture diagrams embedded here — the four-stage pipeline, the portfolio health use case, the client folder structure — are the canonical visual representations of how Gloo360 OS works. They live here so they're always findable, always current, and always linkable.
Success Criteria
The POC succeeds when:
- Bryce can open one document and see the health of all 11+ clients — RAG ratings, ARR, key risks, next actions — without asking anyone to assemble it.
- The portfolio summary updates weekly without manual effort beyond what's already happening (Baca's spreadsheet, Xander's Monday report, CSMs in HubSpot).
- Every number in the summary traces back to a source file. Click a breadcrumb, see the underlying data.
- At least one use case runs end-to-end on schedule — collection triggers, normalization writes, orchestration synthesizes, output is fresh by Monday morning.
The stretch goal: Exec Steering meeting prep time drops measurably. Vince or Baca can point to a specific meeting where the agent-produced summary replaced manual prep.
What We're Not Doing
Not building a dashboard. Dashboards are views. We're building the intelligence layer that a dashboard would read from.
Not replacing HubSpot, Rippling, or Monday. Source systems stay. Scheduled tasks read from them. The OS is the reasoning layer on top.
Not building Forge. Forge is an engineering platform. Gloo360 OS is an operational intelligence system. They're complementary, not competing.
Not boiling the ocean. Three use cases. Pilot clients. Prove value with the data we can actually reach today. Expand from there.
Gloo360 — Focus on Your Mission, Not Your Tech Stack.