
Why Real-Time Decisioning for AI Agents Fails Without a Customer Context Layer

The martech stack as we know it is dissolving. That is the bold claim at the centre of a new research report by Scott Brinker, published in collaboration with Databricks. In response, Alex Dean, Co-Founder and CEO of Snowplow, has shared a sharp perspective and added something the report left out.

The shift, according to Dean, is not just about new tools. It is about rethinking the foundation that real-time decisioning for AI agents runs on.

Brinker’s report argues that the familiar stack of marketing technology boxes is giving way to a composable canvas. Data platforms such as Databricks, Snowflake, and BigQuery now sit at the centre of everything. AI agents, custom software, and analytics tools no longer sit on top of the data. Instead, they operate within it.

Dean agrees strongly with this framing. However, he adds a layer the report does not fully develop: the customer context layer.

So what exactly is it? The customer context layer is real-time behavioral infrastructure. It sits between the data foundation and the customer-facing systems of a business. It is wired directly into digital experiences, so AI agents can understand what a customer is doing right now, not just what they did last week.

This is where real-time decisioning for AI agents breaks down for most organisations. There is a difference between customer records and customer context. CRMs and CDPs manage records well: who the customer is, their purchase history, and their segment. But they struggle with context: what customers are doing at this very moment, and what that signals about intent.

Behavioral event streams fill this gap. These are continuous, granular records of how customers interact with a product, website, or app. They are the richest real-time signal available to any AI agent making a decision. Yet they are notoriously hard to get right. Events must be structured at collection, validated, and enriched before they reach the data platform. If behavioral data going in is noisy or inconsistent, AI agents will compound those errors fast, and at scale.
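The structure-validate-enrich step can be sketched in a few lines. This is a minimal illustration, not Snowplow's actual event format: the schema, field names, and enrichment shown here are hypothetical, and a production pipeline would use a proper schema registry rather than an inline dict.

```python
from datetime import datetime, timezone

# Hypothetical schema for a "product_view" event; field names are illustrative.
PRODUCT_VIEW_SCHEMA = {
    "event_name": str,
    "user_id": str,
    "product_id": str,
    "timestamp": str,
}

def validate_event(event: dict, schema: dict) -> list:
    """Return a list of validation errors; an empty list means the event is well-formed."""
    errors = []
    for field, expected_type in schema.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"wrong type for {field}: {type(event[field]).__name__}")
    return errors

def enrich_event(event: dict) -> dict:
    """Attach server-side context (here, just an ingestion timestamp) before loading."""
    return {**event, "ingested_at": datetime.now(timezone.utc).isoformat()}

raw = {
    "event_name": "product_view",
    "user_id": "u42",
    "product_id": "p9",
    "timestamp": "2024-01-01T00:00:00Z",
}
problems = validate_event(raw, PRODUCT_VIEW_SCHEMA)
if not problems:
    clean = enrich_event(raw)  # only validated, enriched events reach the data platform
```

The point of the sketch is the ordering: validation and enrichment happen at collection time, so malformed events are rejected before they can poison downstream agent decisions.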

Dean also raises an important point about identity. A rich behavioral stream is only useful when it connects to a known individual across devices, touchpoints, and sessions. Without that stitching, even the best behavioral data becomes noise.
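Identity stitching is often modelled as a graph problem: every login or deterministic match links two identifiers, and all identifiers in the same connected component belong to one person. The toy union-find below illustrates the idea; the identifier naming scheme is an assumption for the example, not any vendor's API.

```python
class IdentityGraph:
    """Toy union-find over identifiers (cookie IDs, device IDs, emails)."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        # Follow parent pointers to the root, compressing the path as we go.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that two identifiers belong to the same person (e.g. after a login)."""
        self.parent[self._find(a)] = self._find(b)

    def same_person(self, a, b):
        return self._find(a) == self._find(b)

graph = IdentityGraph()
graph.link("cookie:abc", "email:x@example.com")      # web login
graph.link("device:ios-123", "email:x@example.com")  # app login
graph.same_person("cookie:abc", "device:ios-123")    # → True
```

Once the web cookie and the iOS device resolve to the same person, events from both streams can be merged into one coherent behavioral history instead of two anonymous fragments.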

Beyond the context layer, Dean pushes the report’s composability argument further. He points to open data formats like Apache Iceberg and Linux Foundation Delta Lake as essential infrastructure. The principle is clear. Standardise the foundation so that everything built on top can be swapped, extended, or improved without risk. Real-time decisioning for AI agents demands this kind of flexibility.

The semantic layer also comes up. The report describes it as the “keeper of coherence”: a shared vocabulary across agents and applications. Dean’s addition is that most of that coherence must be established before data enters the platform, not after. Behavioral data is easy to collect badly. Events arrive with inconsistent naming and undefined schemas. By the time they reach the platform, the damage is already done.

Perhaps the most compelling part of Dean’s perspective is what he calls the agentic feedback loop. It runs in four stages. First, collect: capturing behavioral events from both human and AI-driven interactions as structured, validated data. Second, resolve and enrich: stitching events to a known identity to build a coherent customer picture. Third, serve: delivering enriched context to AI agents in real time and combining it with historical records for deeper decisioning. Fourth, learn: feeding the outcome of every agent decision back into the data foundation as a new behavioral event.
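The four stages above can be sketched end to end. Everything here is an illustrative stub under stated assumptions: the function names, the identity map, and the discount rule are invented for the example, and a real agent would plug in genuine collection, resolution, and serving infrastructure at each stage.

```python
def collect(interaction: dict) -> dict:
    """Stage 1: capture the interaction as a structured, validated event."""
    return {"event": interaction["action"], "raw_id": interaction["visitor_id"]}

def resolve_and_enrich(event: dict, identities: dict) -> dict:
    """Stage 2: stitch the event to a known identity for a coherent customer picture."""
    return {**event, "user_id": identities.get(event["raw_id"], "anonymous")}

def serve(context: dict, history: dict) -> str:
    """Stage 3: combine real-time context with historical records to decide."""
    if context["event"] == "cart_abandon" and history.get(context["user_id"], 0) > 1:
        return "send_discount"
    return "do_nothing"

def learn(decision: str, event_log: list) -> None:
    """Stage 4: write the agent's decision back as a new behavioral event."""
    event_log.append({"event": "agent_decision", "decision": decision})

identities = {"v1": "u42"}          # hypothetical identity map
purchase_history = {"u42": 3}       # hypothetical historical record
event_log = []

event = collect({"action": "cart_abandon", "visitor_id": "v1"})
context = resolve_and_enrich(event, identities)
decision = serve(context, purchase_history)
learn(decision, event_log)
# decision == "send_discount"; event_log now holds the agent's own action
```

Note that stage four makes the agent's own decision an input to the next cycle, which is exactly what turns the pipeline into a loop rather than a one-way flow.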

That last stage is critical. Without it, AI agents operate on data that grows staler every day. With it, the system becomes self-improving. Real-time decisioning for AI agents only reaches its potential when that loop closes fully.

Dean also draws a distinction that matters for engineering teams: analytics versus decisioning. Analytics is backward-looking; latency of minutes or hours is acceptable. Decisioning is forward-looking and needs behavioral context within milliseconds. Those are different infrastructure problems. Most pipelines force a choice between speed and depth. The composable canvas demands both, at the same time, on the same data.

The report frames the shift to this new architecture as a three-to-five year horizon. Dean’s view is more immediate. For organisations that have already invested in data platforms as operational cores, and that are already running behavioral data through real-time agents, this is not the future. It is already how they operate today.
