Gloo360 OS: Context Pipeline — Executive Summary
Operational Intelligence for Gloo360 Leadership
Author: Vince Vigil | Date: April 7, 2026 | Status: POC In Progress
The Problem
Gloo360 manages a growing number of client engagements. The operational data that drives staffing, portfolio health, and vendor decisions is scattered across spreadsheets and disparate systems, and much of it reaches only a handful of people. Leadership decisions rely on manual assembly by people who are already at capacity.
Critically, information doesn't flow between roles. A sales rep or CSM may communicate something to a client, but that context never reaches the capability lead running the implementation. Decisions get made without the full picture — not because people are withholding information, but because there's no system to carry it.
Three critical data gaps exist:
- Workforce Intelligence — Teams, roles, levels, titles, comp, R&R, acquihires, and attrition are scattered across Monday.com, Rippling, spreadsheets, TAA, and people's heads.
- Portfolio Health — Client health, SLAs, scope, and revenue live in Ryan Baca's spreadsheet, the account hub, HubSpot, Google Drive, and individual CSM knowledge.
- Vendor Management — Gloo360 bears some vendor costs on behalf of clients and supports clients with their vendor spend, with no consolidated view of spend (internal or client-borne), system ownership, contracts, or renewal dates.
Proposed Solution
We don't need another dashboard. We need agents that do the work of consolidating information — agents that ingest data from source systems, normalize it into a shared knowledge layer, detect patterns, and produce actionable summaries on a schedule. Every insight traces back to its source data. No black boxes: every step is transparent and auditable.
Context is King
The core insight driving this work: context is the prerequisite for any effective AI workflow. Industry research consistently shows that AI agents fail not because the models lack capability, but because they lack access to the right operational context. When agents don't know what a CSM told a client last week, or what the capability lead is prioritizing, they produce generic outputs that no one trusts. This POC is fundamentally about solving the context problem — building a transparent system for gathering, normalizing, and surfacing the operational context that Gloo360 staff and AI agents both need to do their jobs well.
Foundational Start
This work builds on a local agent system developed using Claude's Cowork mode (v0). Over the past several weeks, 12 specialized agents, a layered file structure, and a lightweight operating model for managing tasks, communications, and client tracking have been stood up — all running locally against markdown files with minimal infrastructure. The Gloo360 OS (v1) uses this foundational system as its backbone, extending it to tackle operational data problems at the portfolio level.
POC Process
The approach is intentionally local-first: run small iterations, fine-tune the data models and agent patterns against real Gloo360 data, and validate what actually works before investing in long-term infrastructure. Part of this POC is connecting live data sources (HubSpot, Google Drive, Gmail, Slack) into the local system and producing outputs that Bryce (and eventually others) can begin reading and operating from directly — bridging from a single-operator system to a multi-operator tool.
The architecture uses a four-stage pipeline:
| 1. Ingest | 2. Normalize | 3. Reason | 4. Output |
|---|---|---|---|
| Scheduled tasks pull from source systems via MCPs | Write standardized markdown files per client | Two-pass pattern: summarize, then synthesize across clients | Summaries with "breadcrumbs" back to source data |
The clear lane for this proof of concept is the pipeline itself: ingest, normalize, reason, output. This system produces context — not action. Action may come from Forge agents, from staff operating with better information, or from future automation. The value of getting the context layer right first is that every downstream system makes better decisions because it has the full picture.
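To make the four-stage flow concrete, here is a minimal sketch of the pipeline shape. Everything in it is illustrative: the record fields, function names, and source identifiers (e.g. `hubspot:deal-12`) are hypothetical placeholders, not the actual implementation or real data.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    summary: str
    sources: list  # "breadcrumbs" back to the raw source records

def ingest(raw_records):
    # Stage 1: pull records from source systems (stubbed with static data here;
    # the real system would call MCP connectors on a schedule).
    return raw_records

def normalize(records):
    # Stage 2: group raw records into one standardized bucket per client.
    by_client = {}
    for rec in records:
        by_client.setdefault(rec["client"], []).append(rec)
    return by_client

def reason(by_client):
    # Stage 3, pass 1: summarize each client individually.
    per_client = {
        client: Insight(
            summary=f"{client}: {len(recs)} signal(s)",
            sources=[r["source"] for r in recs],
        )
        for client, recs in by_client.items()
    }
    # Stage 3, pass 2: synthesize across clients at the portfolio level.
    portfolio = Insight(
        summary=f"Portfolio: {len(per_client)} client(s) with activity",
        sources=[s for ins in per_client.values() for s in ins.sources],
    )
    return per_client, portfolio

def output(per_client, portfolio):
    # Stage 4: emit summaries with breadcrumbs back to source data.
    lines = [
        f"- {ins.summary} (sources: {', '.join(ins.sources)})"
        for ins in per_client.values()
    ]
    lines.append(f"- {portfolio.summary}")
    return "\n".join(lines)

records = [
    {"client": "acme", "source": "hubspot:deal-12", "note": "renewal risk"},
    {"client": "acme", "source": "slack:thread-9", "note": "SLA question"},
    {"client": "globex", "source": "drive:doc-3", "note": "QBR scheduled"},
]
print(output(*reason(normalize(ingest(records)))))
```

The point of the sketch is the two-pass reasoning step and the breadcrumbs: every summary line carries the identifiers of the records it was derived from, so nothing in the output is unsourced.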
Three Use Cases
We are standing up a proof of concept now, running on the v0 foundational system at zero infrastructure cost. Three use cases advance in parallel, ordered by data readiness:
| Use Case | Data Sources | Key Output | Status |
|---|---|---|---|
| 1. Portfolio Health | HubSpot MCP, Google Drive, client profiles | Cross-portfolio health summary (RAG + risks + pipeline) | Pilot data seeded |
| 2. Workforce Intelligence | Monday.com, Rippling, Gmail (finance reports from Tom Zander), Drive (comp sheets, R&R docs), HR agent | Headcount, attrition, levels, titles, comp benchmarking, R&R status, org origination tracking across all clients | Template ready |
| 3. Vendor Management | Google Drive (contracts, SOWs), Slack, finance reports, manual seed | Vendor spend (Gloo-borne + client-borne), system ownership map, renewal calendar, consolidation opps | Template ready |
Current Status: v0 Complete, POC Active
The local agent system is in place and the first use case is being seeded with live data. Work completed to date:
- Architecture proposal (v1) — Four-stage pipeline pattern, data source strategy, and success criteria defined.
- Client directory migrated — 14 clients restructured into per-client subdirectories with standardized file templates.
- Cross-portfolio intelligence layer created — gloo360_os/ directory with corrections feedback mechanism.
- Operational data store established — Normalized client data is being written to a structured knowledge layer. The tool for leadership to access this data is still to be determined.
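For a sense of what a normalized per-client file in the knowledge layer might look like, here is a hypothetical template. The front-matter fields, section names, and source identifiers are illustrative assumptions, not the actual file format in use.

```markdown
---
client: acme-corp
status: green
csm: jane-doe
last_updated: 2026-04-06
---

## Health
RAG: Green. SLA compliance at 98% this quarter.
Source: hubspot:ticket-report-2026-Q1

## Workforce
12 FTEs across 2 pods; 1 open backend role.
Source: monday:board-staffing

## Vendors
3 Gloo-borne contracts; next renewal 2026-06-30.
Source: drive:vendor-tracker
```

Whatever the final schema, the pattern holds: each section carries a Source line so any figure in a downstream summary can be traced back to where it came from.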
What Success Looks Like
Success for this POC is a transparent, traceable system for gathering operational context across the Gloo360 client base. Bryce can consume and operate from intelligence produced through this pipeline — portfolio health, workforce composition, vendor status — without chasing people for updates or assembling it manually. Every data point traces back to its source.
This is foundational work for the Gloo360 OS. The pipeline produces context, not action. Action is the domain of implementation agents, Forge workflows, or Gloo360 staff operating with better information. By getting the context layer right first, we're building the substrate that makes every downstream workflow — human or AI — more effective. The patterns validated here transfer directly to Forge when the platform is ready.
When this work is complete, the following will be true:
Resource Management Plan
This is the first and most critical deliverable. Leadership can look at a single view showing incoming work across the portfolio and see where there are gaps on teams — staffing shortfalls, skill mismatches, capacity constraints. Capability leads and leadership no longer have to chase individuals for headcount status or guess at availability. The resource management plan becomes the foundation everything else iterates from.
Financial Visibility — Micro P&Ls
Financial data is structured and pushed to capability leads so they can run their own micro P&Ls per team and per client. Leaders don't have to wait for finance to assemble numbers — the data flows to them in a form they can act on. This supports the finance work already underway with Brandon Hines and gives capability leads real ownership of their financial performance.
Communications Generated from Pipeline Events
When milestones occur — a stabilization plan is delivered and agreed upon by the client, a QAP is produced, a new client signs, a QBR is completed — the system generates draft communications to the relevant stakeholders. Today, information exists but doesn't flow down. Capability leads and delivery teams lack visibility into client status changes and pipeline movement, even when the people with the information are in the room. The pipeline solves this by detecting events and drafting comms automatically. A designated person reviews and approves before anything sends — human-in-the-loop on all outbound messaging. The format, approval chain, and specific triggers are being defined as part of this POC.
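Since the specific triggers and approval chain are still being defined, the following is only a sketch of the event-to-draft pattern with a human approval gate. The event names, recipient roles, and message format are hypothetical placeholders.

```python
# Hypothetical trigger map: pipeline event -> stakeholder roles to notify.
# These event names and roles are placeholders, not the real configuration.
TRIGGERS = {
    "stabilization_plan_agreed": ["capability_lead", "delivery_team"],
    "qbr_completed": ["leadership", "capability_lead"],
    "new_client_signed": ["leadership", "csm"],
}

def draft_comms(event, client, source_ref):
    """Return draft messages for review; nothing sends until a human approves."""
    recipients = TRIGGERS.get(event, [])
    return [
        {
            "to": role,
            "status": "pending_approval",  # human-in-the-loop gate
            "body": f"[{client}] {event.replace('_', ' ')} (source: {source_ref})",
        }
        for role in recipients
    ]

drafts = draft_comms("qbr_completed", "acme", "drive:qbr-2026-Q1-notes")
for d in drafts:
    print(d["to"], "->", d["body"])
```

The design choice worth noting is that the pipeline only ever produces drafts in a `pending_approval` state; the send action lives outside this system, behind the designated reviewer.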
Permissioning and Role-Based Access
Different stakeholders see different slices of the data. Leadership gets the full portfolio view. Capability leads see their teams and clients. CSMs see their accounts. The permissioning model (RBAC) ensures the right people get the right context without exposing sensitive data to the wrong audience. Scope and implementation to be defined.
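Since scope and implementation are still open, here is one minimal way the role-based slicing could work. The role names, field names, and client scopes below are invented for illustration and do not reflect a decided permissioning model.

```python
# Illustrative RBAC slicing of pipeline output.
# Roles, fields, and client scopes are hypothetical placeholders.
ROLE_SCOPES = {
    "leadership":      {"clients": "*",        "fields": {"health", "workforce", "vendors", "comp"}},
    "capability_lead": {"clients": {"acme"},   "fields": {"health", "workforce"}},
    "csm":             {"clients": {"acme"},   "fields": {"health"}},
}

def visible_slice(role, portfolio):
    """Return only the clients and fields a given role is permitted to see."""
    scope = ROLE_SCOPES[role]
    return {
        client: {k: v for k, v in data.items() if k in scope["fields"]}
        for client, data in portfolio.items()
        if scope["clients"] == "*" or client in scope["clients"]
    }

portfolio = {
    "acme":   {"health": "green", "workforce": 12, "vendors": 3, "comp": "sealed"},
    "globex": {"health": "amber", "workforce": 8,  "vendors": 2, "comp": "sealed"},
}
print(visible_slice("csm", portfolio))
```

The filtering happens at read time over the same underlying knowledge layer, so there is one source of truth and each audience simply sees a narrower projection of it.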
Project Auto-Population from Stabilization Plans
Once a stabilization plan is delivered and agreed upon by the client, projects auto-populate in Linear and Asana so capability leads can begin execution immediately. This is downstream of the context pipeline — an action layer built on validated context. The stabilization agreement becomes the trigger that bridges planning into delivery.
Client Hub Integration
Smay's Client Hub is evaluated as both a data source and a potential consumer of pipeline output. Integration points are mapped so the pipeline and the hub reinforce each other rather than duplicating effort. (Vince needs a walkthrough of current hub capabilities before scoping this.)
Onboarding Timelines and Delivery Milestones
Executive-level summaries of onboarding timelines and delivery dates are surfaced through the pipeline. When dates shift or milestones are hit, stakeholders are informed through the comms layer. This connects to the onboarding process Becky McKenzie is designing for the 360 people operations workflow — including offer timing windows, documented evaluation conversations, and the transition from acqui-hire to applied-position model.
QBR Data as Pipeline Input
QBR transcripts and meeting data feed into the ingest layer as a source of client health signals and operational context. What was discussed, what commitments were made, and what risks surfaced — all captured and normalized alongside other data sources.
Business Analysis Documentation
Documentation of business analysis (discovery findings, gap assessments, technical evaluations) serves as another ingest source, ensuring that the analytical work already being done feeds the context layer rather than sitting in isolated documents.
Next Step
Get to 80% of the architecture concept, then bring Jake in for product-oriented thinking on how to move from local POC to a broader MVP. The patterns validated locally transfer to Forge when the platform is ready.