How Trueground works
Trueground is a system of record for organizational decision logic, designed to be consulted by AI agents at runtime. It captures how an enterprise actually makes decisions — including the unwritten rules, exceptions, and relationship dynamics that never make it into documentation — and serves that knowledge to autonomous systems in formats they can consume.
It is not a documentation tool. It is not a vector database. It is not an AI model. It is the structured ground truth that sits between an organization's tacit knowledge and the agents acting on its behalf.
This page walks through the mechanics: what the system captures, how it stays current, and how agents consume it.
Extract, structure, encode, maintain
Trueground operates as a four-phase cycle. Knowledge enters through extraction, is organized into structured objects, is encoded into agent-consumable formats, and is maintained against drift. The cycle returns: drift triggers re-extraction, and the system is designed never to reach a steady state where curation can stop.
Each phase produces a specific artifact and feeds the next.
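The cycle can be sketched as a simple state machine. A minimal sketch, with names of our own choosing rather than the product's internals:

```python
from enum import Enum

class Phase(Enum):
    EXTRACT = "extract"
    STRUCTURE = "structure"
    ENCODE = "encode"
    MAINTAIN = "maintain"

def next_phase(current: Phase, drift_detected: bool = False) -> Phase:
    """Advance the Trueground cycle one step.

    Maintenance that surfaces drift returns to extraction; the cycle
    never terminates in a steady state.
    """
    order = [Phase.EXTRACT, Phase.STRUCTURE, Phase.ENCODE, Phase.MAINTAIN]
    if current is Phase.MAINTAIN:
        return Phase.EXTRACT if drift_detected else Phase.MAINTAIN
    return order[order.index(current) + 1]
```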
Extract
Knowledge enters through interviews, observation, and operational data. Recorded sessions with the people who actually do the work. Pattern mining across CRM notes, support tickets, and approval chains where the documented process was bypassed. Relationship mapping for the supplier dynamics, internal politics, and customer hierarchies that shape decisions but exist in no system.
The output of extraction is a set of candidate patterns — decision points, exceptions, relationships, implicit rules — each linked back to the moment in source material where it was observed.
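The shape of that extraction artifact can be sketched as a pair of records. Field names here are illustrative, not the product's schema:

```python
from dataclasses import dataclass

@dataclass
class SourceRef:
    """Pointer to the moment in source material where a pattern was observed."""
    document_id: str   # e.g. an interview recording or a CRM export
    location: str      # timestamp, line range, or record id within that source

@dataclass
class CandidatePattern:
    """Output of the extraction phase: observed, not yet structured."""
    kind: str          # "decision_point", "exception", "relationship", or "implicit_rule"
    description: str
    source: SourceRef  # every candidate links back to where it was seen
```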
Structure
Candidate patterns become structured knowledge objects. A practitioner — typically a knowledge engineer or domain consultant — organizes them into decision frameworks, entity relationship models, and rules. The system enforces structural completeness: every framework requires a confidence boundary, every relationship requires direction, every rule requires an exception handler. Knowledge that is incomplete refuses to save.
This is the central design opinion of the product, and it is covered in detail in §3.
Encode
Structured objects export into the formats AI agents can read. Three are supported today: a JSON prompt library for context injection, a DMN-convertible XML for decision-engine integration, and a Cypher dump for graph-database-backed agents. The same underlying object exports to all three; the choice is a function of how the consuming system is built.
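The one-object, three-formats idea reduces to a dispatch on the consuming system. A simplified sketch, with the rendering logic stubbed down to a few lines per format:

```python
import json

def export(obj: dict, fmt: str) -> str:
    """Render the same structured object into one of the three export formats."""
    if fmt == "prompt_library":
        # Context-injection JSON for prompt-based agents.
        return json.dumps(obj, indent=2)
    if fmt == "dmn_xml":
        # DMN-convertible XML for decision-engine integration (simplified).
        rows = "".join(f"<rule>{r}</rule>" for r in obj.get("rules", []))
        return f'<decision id="{obj["id"]}">{rows}</decision>'
    if fmt == "cypher":
        # Graph load script for graph-database-backed agents (simplified).
        return f'MERGE (d:Decision {{id: "{obj["id"]}"}})'
    raise ValueError(f"unknown format: {fmt}")
```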
Maintain
The system watches for drift. Confidence on every object decays over time without re-verification. Connected source systems emit signals when something changes — a policy threshold, an account owner, a supplier relationship — and the system flags the affected objects for review. Human review, not automatic update, closes the loop.
When review surfaces a gap that re-extraction can fill, the cycle returns to phase one.
Three object types, with structure enforced
Trueground's data model has three primary objects. Each carries the structural elements an AI agent needs to use the knowledge safely. The system refuses to persist objects that lack them. This is what distinguishes Trueground from a wiki, a vector store, or a folder of documents — the structure is mandatory, and the mandate is what makes the output trustworthy enough to put behind an autonomous system.
Decision frameworks
A decision framework represents how a real decision gets made. It is a tree of condition nodes, outcomes, overrides, and escalation paths, with each node carrying a confidence score and a link back to its source pattern.
Three structural elements are required on every framework:
- A confidence boundary that names where organizational knowledge is strong and where it is uncertain, with explicit thresholds for autonomous action and human review.
- An exception handler that defines what the agent does when the framework encounters a situation it was not designed for, including fallback action, escalation target, and a non-empty list of documented limitations.
- At least one escalation node in the tree itself. There must always be a path to a human.
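The three requirements above can be sketched as a validation step that refuses to persist an incomplete framework. All names here are hypothetical; this is the shape of the check, not the product's code:

```python
from dataclasses import dataclass, field

@dataclass
class ConfidenceBoundary:
    autonomous_threshold: float  # above this, the agent may act alone
    review_threshold: float      # below this, a human must review

@dataclass
class ExceptionHandler:
    fallback_action: str
    escalation_target: str
    documented_limitations: list = field(default_factory=list)

@dataclass
class Node:
    kind: str  # "condition", "outcome", "override", or "escalation"
    confidence: float = 1.0

def validate_framework(boundary, handler, nodes):
    """Refuse to save a framework missing any required structural element."""
    errors = []
    if boundary is None:
        errors.append("missing confidence boundary")
    if handler is None or not handler.documented_limitations:
        errors.append("exception handler must list at least one limitation")
    if not any(n.kind == "escalation" for n in nodes):
        errors.append("tree must contain at least one escalation node")
    if errors:
        # Incomplete knowledge refuses to save.
        raise ValueError("; ".join(errors))
```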
Entity relationship models
Entity relationships capture the informal hierarchies and contextual weights that decision frameworks reference but cannot themselves express. Who reports to whom in practice, not on the org chart. Which accounts get special treatment and why. Which suppliers have leverage, and under what conditions.
Two structural rules are enforced:
- Every relationship has explicit direction — directed or bidirectional, declared. The practitioner must decide.
- Every relationship has conditions — the circumstances under which it applies. A relationship that holds always is rare; most are conditional on context, deal size, account class, or counterparty.
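Both rules can be enforced at construction time. A sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class Relationship:
    source: str
    target: str
    direction: str    # "directed" or "bidirectional"; the practitioner must declare it
    conditions: list  # circumstances under which the relationship applies
    weight: float = 1.0

    def __post_init__(self):
        if self.direction not in ("directed", "bidirectional"):
            raise ValueError("direction must be declared explicitly")
        if not self.conditions:
            raise ValueError("every relationship needs the conditions under which it applies")
```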
Rules
Rules are the constraints that sit alongside decision frameworks. Hard limits the agent must not cross. Soft guidelines it should prefer. Behavioral standards that govern tone, format, and disclosure.
Every rule requires an exception handler — what happens when the rule encounters something it was not designed for — and a severity (critical, warning, info) that governs how the agent responds when the rule fires.
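How severity governs the agent's response can be sketched as a small dispatch. The severity levels come from this page; the response names are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "critical"  # hard limit: the agent must not cross it
    WARNING = "warning"    # soft guideline: proceed, but flag
    INFO = "info"          # behavioral standard: log only

@dataclass
class Rule:
    name: str
    severity: Severity
    exception_handler: str  # required: what happens outside the rule's design

def on_rule_fired(rule: Rule) -> str:
    """The agent's response when a rule fires, governed by its severity."""
    return {
        Severity.CRITICAL: "halt_and_escalate",
        Severity.WARNING: "proceed_with_flag",
        Severity.INFO: "log_only",
    }[rule.severity]
```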
Three formats, one underlying object
Trueground exports the same structured knowledge into three formats. The choice depends on how the consuming system is built, not on what the knowledge contains.
Structured prompt library
A JSON document organized by scenario. When an agent encounters a situation it recognizes — deal qualification, pricing exception, customer escalation — the relevant scenario block is injected into its context. Includes the decision framework summary, key relationships, confidence advisories ("high confidence on mid-market deals; flag multi-BU pursuits for human review"), and hard constraints.
This is the simplest integration path and the most common starting point.
{
  "scenario_id": "...",
  "trigger": "deal_qualification_required",
  "context_injection": "When qualifying enterprise deals for this organization, apply these criteria: ...",
  "decision_framework_summary": "...",
  "confidence_notes": "High confidence on mid-market deals with single buyer. Low confidence on multi-BU pursuits — flag for human review.",
  "hard_constraints": [ "..." ]
}
DMN-convertible XML
For environments with a decision engine in place, or where the AI risk function requires a formal decision model. Trueground emits XML that follows DMN — Decision Model and Notation, the OMG industry standard — under a custom namespace. It is close enough to the standard that a future compliance effort is a mapping exercise, not a rewrite. Confidence boundaries and exception handlers are first-class XML elements.
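As a rough illustration of what "first-class XML elements" means, the snippet below emits a DMN-style decision with a confidence-boundary child under a namespace. The namespace URI and element names here are invented for the sketch, not the actual export schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical extension namespace; the real export uses a custom namespace
# alongside the DMN namespace defined by the OMG standard.
NS = "https://example.com/trueground/dmn-ext"

def emit_decision(decision_id: str, review_threshold: float) -> str:
    """Emit a DMN-style decision element with a confidence boundary
    as a first-class child element rather than an annotation."""
    root = ET.Element("decision", {"id": decision_id})
    boundary = ET.SubElement(root, f"{{{NS}}}confidenceBoundary")
    boundary.set("reviewThreshold", str(review_threshold))
    return ET.tostring(root, encoding="unicode")
```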
Knowledge graph
For agents that need to traverse relationship context — who influences whom, which accounts are connected, which conditions apply where — Trueground exports a Cypher script that loads directly into Neo4j or any compatible graph database. All entity properties, relationship weights, and conditions are preserved.
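The shape of that export can be illustrated by rendering one relationship into a Cypher MERGE statement. A simplified sketch with invented labels and property names; the actual dump preserves the full property set:

```python
def to_cypher(source: str, target: str, rel_type: str,
              weight: float, conditions: list) -> str:
    """Render one entity relationship as a Cypher MERGE statement,
    preserving its weight and conditions as relationship properties."""
    conds = ", ".join(f'"{c}"' for c in conditions)
    return (
        f'MERGE (a:Entity {{name: "{source}"}}) '
        f'MERGE (b:Entity {{name: "{target}"}}) '
        f'MERGE (a)-[r:{rel_type} {{weight: {weight}, conditions: [{conds}]}}]->(b)'
    )
```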
A note on direct consultation
The current integration model is export-driven: knowledge is generated, the consuming system loads it, the agent reads it. A direct consult API — where an agent asks Trueground a structured question at runtime and receives the relevant decision context — is on the roadmap. The export model is sufficient for most current deployments and gives the consuming system full control over caching, latency, and offline operation.
Three mechanisms, one review queue
The hardest problem in this category is not capture. It is keeping captured knowledge true as the organization moves underneath it. Trueground addresses this with three mechanisms that converge on a single review queue.
Change signals
Connectors to source systems — CRM, ERP, ticketing, customer success platforms — emit signals when the underlying reality shifts. A policy threshold changes. An account owner is reassigned. A supplier is added or removed. Signals do not update the knowledge layer directly. They flag the affected objects and create review-queue items.
Confidence decay
Every object carries a confidence score and a last-reviewed timestamp. Confidence drifts down over time without re-verification, on a configurable cadence. The decay is a model of the real-world fact that knowledge goes stale: a rule that was true six months ago may not be true today, and a system that does not reflect this will quietly become a liability.
When confidence crosses below the review threshold defined in an object's confidence boundary, the system flags it for re-verification.
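The decay and the threshold check can be sketched together. Exponential decay with a configurable half-life is one reasonable model; the 180-day default below is illustrative, not the product's actual cadence:

```python
import math

def decayed_confidence(initial: float, days_since_review: float,
                       half_life_days: float = 180.0) -> float:
    """Exponential decay of confidence since last verification."""
    return initial * math.exp(-math.log(2) * days_since_review / half_life_days)

def needs_review(initial: float, days_since_review: float,
                 review_threshold: float, half_life_days: float = 180.0) -> bool:
    """Flag the object once decayed confidence crosses its review threshold."""
    return decayed_confidence(initial, days_since_review, half_life_days) < review_threshold
```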
Drift detection
The system watches for divergence between the encoded framework and observed agent or human behavior. When a decision path is being bypassed at increasing rates, that is a signal: either the rule is wrong and reality has moved on, or discipline is slipping and the rule is being eroded. The system does not adjudicate. It surfaces the pattern.
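A minimal version of that signal compares recent bypass rates against a historical baseline. The ratio threshold here is arbitrary; the point is that the function flags divergence without deciding which explanation is right:

```python
def bypass_drift(baseline_rate: float, recent_rate: float,
                 ratio_threshold: float = 1.5) -> bool:
    """Surface a drift signal when a decision path is being bypassed
    markedly more often than its historical baseline.

    The function does not adjudicate between "the rule is wrong" and
    "discipline is slipping"; it only surfaces the pattern.
    """
    if baseline_rate == 0:
        return recent_rate > 0
    return recent_rate / baseline_rate >= ratio_threshold
```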
The review queue
All three mechanisms write to a single prioritized queue. Priority is computed from object confidence, signal severity, age of the unresolved item, and the number of downstream objects affected. Practitioners work the queue. Updates flow back into the knowledge layer through the structuring interface.
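One way to combine those four inputs into a score is a weighted sum. The weights and caps below are hypothetical, chosen only to show the shape of the computation:

```python
def review_priority(confidence: float, signal_severity: float,
                    age_days: float, downstream_count: int) -> float:
    """Illustrative priority score: low confidence, severe signals, old
    unresolved items, and wide downstream impact all push an item up
    the queue. Weights are invented for the sketch."""
    return (
        (1.0 - confidence) * 4.0            # low confidence dominates
        + signal_severity * 3.0             # severity in [0, 1]
        + min(age_days / 30.0, 1.0) * 2.0   # age, capped at a month
        + min(downstream_count / 10.0, 1.0) # breadth of downstream impact
    )
```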
The system surfaces work. Humans decide.
Where Trueground runs
Trueground runs as a set of services on a private instance, either in customer-controlled cloud infrastructure or as a managed deployment.
The core services are: a structuring service that holds the decision frameworks, entities, and rules; an extraction service that processes interviews, transcripts, and operational data; a maintenance service that consumes change signals and runs the review queue; and an export engine that emits the consumable formats. Structured data is held in PostgreSQL, the entity graph in Neo4j, and inter-service events on a message bus.
Integration boundaries
Three classes of integration matter:
- Inbound: source material. Transcripts, recordings, exports from operational systems. Flows through the extraction service.
- Inbound: change signals. Webhooks or pollers from connected systems. Flows through the maintenance service.
- Outbound: agent-consumable knowledge. The three export formats. Generated on demand or scheduled.
The structured knowledge layer itself stays inside Trueground. Nothing in the data model is designed to be served by a third party.
Adjacent to governance, distinct from RAG
Trueground addresses a layer that most current AI infrastructure leaves unfilled. A short positioning, oriented around the systems it works with rather than against:
- AI governance platforms — Credo AI, Holistic AI, and similar — monitor what models do. They observe, score, and report on model behavior. Trueground sits below them: it is the organizational ground truth those models should be consulting before they act. Governance platforms watch outputs. Trueground supplies inputs.
- Observability and runtime guardrails — Arize, AgentOps, and the policy-enforcement layer — instrument agents in production. They are complementary to Trueground, not overlapping. Observability tells you what happened. Trueground defines what should happen.
- Retrieval and RAG systems — Glean, Hebbia, vector databases — retrieve and summarize existing documents. They are useful where the knowledge is already written down. Trueground captures the knowledge that is not written down — the unwritten rules, the informal hierarchies, the judgment calls — and structures it for agent use. The two coexist; an agent can consult both.
- Operational systems — CRM, ERP, ticketing, knowledge management — are sources of change signals and raw material. Trueground reads from them. It does not replace them.
The position is specific. Trueground is the structured ground truth layer. It is not the model, not the monitor, not the document store. It is the thing the agent stands on.
What's coming
A short, honest list of what is in active development:
- Direct consult API. A runtime interface for agents to query Trueground for the relevant decision context, complementing the export model.
- Deeper source-system integrations. Beyond CRM and ERP, into procurement systems, communication platforms, and approval workflows.
- Agent consultation audit trail. A structured record of what knowledge an agent consulted before each decision, available to AI governance platforms downstream.
- Multi-domain frameworks. Cross-domain decision logic — a sales decision that depends on a procurement constraint, for example — is currently modeled as multiple linked frameworks. First-class support is on the way.
Items here are committed direction, not promises with dates. The roadmap is reviewed quarterly with customers and partners.