Contextual CDPs: Memory, not identity, is what makes context work
Part 2 of 6: How model context protocols could replace the brittle logic of user tracking.
In Part 1, I explored how model context protocols (MCPs) could shift CDPs from passive data stores into dynamic context engines. Profiles that update themselves in the moment, based on intent, tone, and history. Not just segments, but signals. And now? Let’s see what happens when those signals actually drive decisions… in real time, across multiple touchpoints.
Because here’s the next big unlock: context-aware orchestration.
Real-time isn’t just about speed
Let’s be honest, “real-time” has been one of the buzziest buzzwords we’ve collectively abused for years. It often means “fast batch” or “eventual personalization.” What we really mean when we say real-time (or should mean) is right time.
Context-aware decisioning isn’t only about speed. It’s about understanding recent events in context, knowing what matters in the moment, and anticipating what’s likely to happen next. This is where MCPs really prove their value.
Imagine your orchestration engine has access not only to recent events and attributes, but also to a memory layer that understands the emotional temperature of a support call from earlier today. Or a RAG module that fetches the latest product interactions and social sentiment when deciding how to follow up.
This isn’t speculation. Companies like Urban Company are already using LLM-based systems to decide, mid-interaction, whether a customer issue is due to internal fault or user error, in context, and at scale.
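To make that concrete, here’s a minimal sketch of what a decision step like this might look like. Everything in it (the Context fields, the decide_followup function, the signal names) is hypothetical and not tied to any vendor’s API; the point is that recent events, a memory layer’s read of today’s support call, and retrieved context all feed a single choice.

```python
# Hypothetical context-aware decision step: recent events, a memory layer's
# emotional read, and retrieved notes all inform the next action.
from dataclasses import dataclass

@dataclass
class Context:
    recent_events: list        # e.g. ["viewed_pricing", "opened_ticket"]
    support_sentiment: str     # memory layer's read of today's support call
    retrieved_notes: list      # RAG results: product interactions, social sentiment

def decide_followup(ctx: Context) -> str:
    """Choose the 'right time' action, not just the fastest one."""
    if ctx.support_sentiment == "frustrated":
        # Don't promote while the customer is still upset about an open issue.
        return "hold_promotions_and_notify_success_team"
    if "opened_ticket" in ctx.recent_events and "issue_resolved" not in ctx.retrieved_notes:
        return "send_status_update"
    return "send_personalized_offer"

ctx = Context(
    recent_events=["viewed_pricing", "opened_ticket"],
    support_sentiment="frustrated",
    retrieved_notes=["mentioned_competitor_on_social"],
)
print(decide_followup(ctx))  # hold_promotions_and_notify_success_team
```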
Channel fluidity and the new journey logic
Let’s say a customer starts browsing on mobile, chats with support, and then disappears. Most orchestration tools would fall back on timers and triggers. But what if your orchestration logic could remember the chat’s tone, retrieve inferred intent, and decide that now isn’t the time to nudge, or that the best nudge is a proactive SMS with an apology and a solution?
With memory systems and contextual retrieval, we stop relying on generic flowcharts and start reacting to nuance. The result feels more human.
This is where the old journey orchestration model starts to crumble. Instead of “if user does X, send Y,” we start building orchestration logic that reasons: “Given everything I know right now, what would be most helpful to this customer?”
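Here’s a rough, hypothetical contrast between the two styles. The flowchart version maps a trigger to a fixed action; the contextual version reasons over tone, inferred intent, and timing, and is allowed to decide that the best action is no action. None of the names or thresholds come from a real product.

```python
# Old journey logic: static trigger, static action.
FLOWCHART = {"abandoned_chat": "send_discount_email_after_30_min"}

# Contextual journey logic: reason over tone, intent, and timing (all hypothetical signals).
def next_best_action(chat_tone: str, inferred_intent: str, minutes_since_chat: int):
    if chat_tone == "negative" and inferred_intent == "unresolved_issue":
        # A promo nudge here would read as tone-deaf; apologize and offer a fix instead.
        return "proactive_sms_with_apology_and_solution"
    if chat_tone == "neutral" and minutes_since_chat < 60:
        # Too soon to follow up; doing nothing is a valid decision.
        return None
    return "gentle_follow_up_email"

print(FLOWCHART["abandoned_chat"])                           # always the same answer
print(next_best_action("negative", "unresolved_issue", 20))  # answer depends on context
```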
Memory as an orchestration layer
Here’s a simple framing: real-time orchestration gets smarter when it remembers more of the right things. And memory systems, whether session-based, long-term, or retrieved on demand, make that possible.
A few speculative but plausible applications:
In-session memory that adjusts messaging if a customer seems confused or frustrated
Long-term memory that flags historical churn risk patterns across interactions
Retrieval systems that surface policy constraints, loyalty status, or recent objections before a new message is sent
Think of them less as full-blown AI agents and more as orchestration systems enhanced by memory scaffolding → focused, contextual, and purpose-built. Lightweight, composable pieces, each doing one job, as sketched below.
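A rough sketch of that scaffolding, assuming the three memory types above and nothing vendor-specific: each piece stays narrow, and the orchestration step simply consults them before a message goes out.

```python
# Memory scaffolding, not a full agent: three narrow, composable memory sources
# consulted before a message is sent. All class and signal names are invented.
class SessionMemory:
    """In-session signals, e.g. repeated FAQ visits or rage clicks."""
    def __init__(self, signals=None):
        self.signals = signals or []
    def seems_confused(self) -> bool:
        return "repeated_faq_visits" in self.signals or "rage_clicks" in self.signals

class LongTermMemory:
    """Patterns across interactions, e.g. prior churn-risk events."""
    def __init__(self, churn_risk_events: int = 0):
        self.churn_risk_events = churn_risk_events
    def churn_risk(self) -> bool:
        return self.churn_risk_events >= 3

class RetrievalLayer:
    """On-demand facts: policy constraints, loyalty status, recent objections."""
    def __init__(self, facts: dict):
        self.facts = facts
    def lookup(self, key: str, default=None):
        return self.facts.get(key, default)

def prepare_message(base: str, session: SessionMemory,
                    history: LongTermMemory, retrieval: RetrievalLayer) -> str:
    if session.seems_confused():
        base = "Need a hand? Here's a quick walkthrough."   # swap confusing copy for help
    if history.churn_risk():
        base += " [flag: route a copy to the retention team]"
    if retrieval.lookup("recent_objection") == "price":
        base += " [note: lead with value, not discounts]"
    return base

msg = prepare_message(
    "Check out our new premium tier!",
    SessionMemory(["rage_clicks"]),
    LongTermMemory(churn_risk_events=4),
    RetrievalLayer({"recent_objection": "price"}),
)
print(msg)
```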
Governance in the moment
The real magic (and real risk!) is that context can change fast. One minute a user is a happy advocate, the next they’re angry on Twitter… or X, or Xitter, or whatever it’s called this week. Your orchestration engine needs to know that. Or at least, know enough to tread carefully.
MCPs can play a key role in governance by surfacing dynamic context. Think of it as real-time compliance meets empathy: “This customer has opted out of SMS,” or “This customer’s tone just shifted to negative → delay the promo.”
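A minimal sketch of that kind of pre-send guardrail, with placeholder fields standing in for whatever consent and sentiment data your stack actually exposes:

```python
# Governance in the moment: a pre-send check over dynamic context.
# The profile fields (sms_opt_in, current_sentiment) are placeholders.
def can_send(channel: str, campaign_type: str, profile: dict):
    if channel == "sms" and not profile.get("sms_opt_in", False):
        return False, "customer has opted out of SMS"
    if campaign_type == "promo" and profile.get("current_sentiment") == "negative":
        return False, "tone just shifted negative: delay the promo"
    return True, "ok"

allowed, reason = can_send(
    "sms", "promo", {"sms_opt_in": True, "current_sentiment": "negative"}
)
print(allowed, reason)  # False tone just shifted negative: delay the promo
```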
You don’t need a crystal ball. You need just enough context to make slightly better decisions, moment to moment.
Where this fits in the CDP Reboot
If the first phase of the CDP Reboot was about reconnecting data with purpose, this second phase is about making that purpose responsive. Orchestration powered by context isn’t just faster; it’s aware of who it’s talking to, how they feel, and whether this is the right moment to act.
It’s the difference between shouting into a void and actually listening before you speak.
In Part 3, I’ll explore how all of this changes our responsibility as data stewards. Because when systems start to infer, remember, and act, we’d better know how they’re doing it.
And we’d better build in a way that earns trust, not just attention.
Part 3: Trust as a feature
Part 4: Building the bridge
Part 5: Composable, not chaotic
Part 6: Contextual fluency