AI can only move as fast as your data, and it will amplify whatever data you have - for better or worse. And right now, business signals, from leadership changes to buyer intent, are shifting faster than most GTM systems can keep up with.
Leadership churn is accelerating across markets, making data go stale much faster than before. And nearly 70% of buyer intent categories in Cognism’s dataset are now tied to AI and automation, suggesting teams may be relying on static or outdated data when setting up their AI strategies - which could mean those strategies fail before they ever get off the ground.
Everyone wants AI to transform their go-to-market motion. Fewer teams stop to ask whether their data is actually ready for it.
At its simplest, AI-ready data is data that AI systems can trust, understand, and act on.
In a go-to-market context, that means data that is accurate, up to date, clearly structured, and usable by machines as well as humans.
Sandy Tsang, VP of RevOps at Cognism, defines ‘AI-ready data’ as:
“AI-ready data would be data that you can take and then leverage the power of LLMs to enhance it. To leverage it for your marketing purposes, for your sales purposes, for whatever customer analysis purposes.
To me, it’s a lot about how clean and usable it is, not just for a human, but now that we’re feeding it to a machine to be able to find the trends or insights we want.
So for me, AI-ready data is something that can actually be ingested into an AI tool and then give us an output of: here’s a summary of what this data set really means.”
This is where many teams go wrong. They assume AI-readiness is about having more data, or about bolting AI features onto existing tools.
In reality, it’s about whether your underlying data reflects what’s actually happening in the market right now, because that is what AI will amplify. AI doesn’t know what is ‘right’ or ‘wrong’, so you need to make sure your sources are correct and clearly structured.
Historically, humans compensated for messy data. As Sandy points out, teams used to manually tidy data as it came in:
“If we purchased different data sets or lists, you had humans looking through it and spotting where things needed to be tidied up, number formatting, inconsistencies, things like that.”
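As a rough sketch of what that tidy-up looks like in practice - using invented example records and the pandas library - the work is mostly about normalising small inconsistencies that a reviewer would once have caught by eye:

```python
import pandas as pd

# Invented example of a purchased contact list, with the kinds of
# inconsistencies a human reviewer used to catch by eye.
contacts = pd.DataFrame({
    "company": ["Acme Ltd ", "acme ltd", "Globex"],
    "employees": ["1,200", "1200", "250"],
})

# Normalise whitespace and casing so the same company isn't counted twice.
contacts["company"] = contacts["company"].str.strip().str.title()

# Strip thousands separators so employee counts become comparable numbers.
contacts["employees"] = contacts["employees"].str.replace(",", "", regex=False).astype(int)

print(contacts)
```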
AI, unfortunately, doesn’t work that way. When data is fed into AI systems, every assumption, inconsistency, and contradiction is exposed at scale. It reveals structural weaknesses that were previously hidden by human judgment.
That’s why many GTM teams feel disappointed by AI outputs. Not because the AI is weak, but because the data wasn’t designed for machines to interpret in the first place. And when those outputs don’t quite make sense, trust erodes quickly.
Even when AI produces results, Sandy believes the issue isn’t usually the data alone - it’s communication.
“If something looks fishy, it’s about visibility. Where did this come from? What was its intended purpose? How do I know I can trust it?”
Without context, AI outputs become just numbers on a screen. Data only becomes useful when it’s paired with interpretation and action.
“It’s not just ‘here’s the number.’ It’s the insight and the action you take from it.”
AI outputs should spark conversation rather than replace it. Teams need space to question results, challenge assumptions, and reframe insights in context.
Without that dialogue, AI becomes something people quietly ignore - another system producing answers no one fully trusts.
Clean data is often reduced to formatting. Sandy, however, argues that this definition no longer goes far enough.
“It goes a step beyond formatting. It’s about how data is structured, how it makes sense logically, and how different sources interact when you join them together.”
She describes data as building blocks. If those blocks don’t fit - or actively contradict one another - AI systems struggle to produce meaningful insight.
She adds:
“When you’re looking at your building blocks, do they actually fit together? Or do they contradict each other or not really make sense?”
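To make the building-blocks idea concrete, here’s a minimal sketch - with invented field names, assuming pandas - of what happens when two sources describing the same accounts are joined and don’t agree:

```python
import pandas as pd

# Two hypothetical sources describing the same accounts.
crm = pd.DataFrame({
    "domain": ["acme.com", "globex.com"],
    "employee_count": [1200, 250],
})
enrichment = pd.DataFrame({
    "domain": ["acme.com", "globex.com"],
    "employee_count": [430, 250],
})

# Join the blocks on a shared key...
merged = crm.merge(enrichment, on="domain", suffixes=("_crm", "_enriched"))

# ...and surface the rows where they contradict each other.
conflicts = merged[merged["employee_count_crm"] != merged["employee_count_enriched"]]
print(conflicts)  # acme.com disagrees: 1200 vs 430 - someone has to decide which wins
```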
This is also where humans still play a critical role. Someone has to decide which blocks belong together, which source wins when they contradict, and what the combined picture is supposed to mean.
AI doesn’t replace judgment; it exposes where judgment was never clearly applied. And this is exactly why AI-readiness so often breaks down in practice.
When asked about the biggest barrier to AI-ready data, Sandy didn’t point to AI tooling at all. She pointed to reconciliation.
“There are so many sources, and trying to reconcile where it all goes, being able to definitively say this is the source, this is why, and this is how it joins up, that’s the hardest part.”
That reconciliation work is complex, ongoing, and often invisible. It’s why data architects and data engineers exist, and why AI doesn’t remove the need for them. If anything, AI raises the cost of skipping this step.
One of the most common responses to messy, hard-to-reconcile data is to look for a single source of truth.
On the surface, this feels like the right solution. If everyone used the same dataset, surely AI outputs would be cleaner, more consistent, and easier to trust.
In practice, Sandy believes this approach often creates more problems than it solves.
Different go-to-market teams are trying to answer different questions, each of which requires different data.
Finance, for example, needs data that reflects invoicing, revenue recognition, and financial reality. RevOps often needs a more real-time view of pipeline movement, bookings, and sales activity. Marketing may need another lens, this time focused on engagement, intent, and account behaviour.
Forcing all of that into one “truth” usually means compromising the data for everyone.
In an AI context, this is especially risky. When AI systems are fed data that wasn’t designed for the question being asked, outputs become inconsistent or misleading, even if the data itself is technically accurate.
AI-ready teams don’t rely on a single source of truth. Instead, they are explicit about purpose.
They document which dataset answers which question, where it comes from, and how it’s meant to be used.
This clarity matters more than consolidation. AI doesn’t need one dataset to answer every question - it needs to know which data to trust in which context.
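One lightweight way to make that explicit - sketched here with invented dataset and team names - is a simple registry that records which dataset answers which question, and for whom:

```python
# Hypothetical "purpose registry": no single source of truth, but an explicit
# record of which dataset is designed to answer which question.
DATA_PURPOSES = {
    "recognised_revenue": {
        "dataset": "finance_ledger",
        "owner": "Finance",
        "use_for": "invoicing and revenue recognition",
    },
    "pipeline_movement": {
        "dataset": "crm_opportunities",
        "owner": "RevOps",
        "use_for": "real-time pipeline and bookings reporting",
    },
    "account_intent": {
        "dataset": "intent_feed",
        "owner": "Marketing",
        "use_for": "engagement and buying-signal analysis",
    },
}
```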
Data providers play an important role in whether go-to-market data is truly AI-ready, but only when the relationship is built on transparency and collaboration.
As GTM teams grapple with reconciliation, ownership, and trust, it becomes clear that AI-readiness doesn’t end at the organisation’s boundaries. Most go-to-market data isn’t created internally. It’s enriched, validated, and supplemented by external partners.
Sandy is clear that the real value of those providers isn’t just in the data itself, but in how closely teams work with them.
“The more you involve your data provider, the more value you’re going to get out of that data.”
In an AI context, this involvement becomes critical. When teams don’t understand where external data comes from, how it’s sourced, or why discrepancies exist, trust in the process and output can disappear.
Providers that genuinely support AI-ready GTM teams are those that are transparent about where their data comes from and how it’s sourced, can explain why discrepancies exist, and work collaboratively with the teams using the data.
This level of transparency gives teams something essential: context. And context is what allows people to trust, challenge, and act on AI-powered insights rather than quietly ignoring them.
Whether your data is truly AI-ready also depends on the wider data ecosystem, and on whether your partners are willing to stand behind the data they provide.
Unfortunately, there’s no universal pre-flight check for AI-readiness.
Sandy said:
“I wish there were a pre-qualification checker, but I’m not aware of one, yet.”
In practice, AI-readiness is often revealed only after data has been ingested. The most telling signal isn’t a score or benchmark. It’s whether the AI can meaningfully work with what you’ve given it.
“The output you want from AI is for it to tell you what it was able to use and what it wasn’t.”
If AI can’t generate the insights you expect, or large parts of a dataset are ignored, misinterpreted, or unusable, that’s often a signal that something in the data-to-AI chain isn’t ready - whether that’s the structure and clarity of the data itself, how it’s being served and contextualised for the model, or the model’s ability to interpret the relationships within it.
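There’s no standard tooling for this, but a rough pre-ingestion check - sketched below with invented columns, assuming pandas - can at least show how much of a dataset a model has any chance of using:

```python
import pandas as pd

def usability_report(df: pd.DataFrame) -> pd.DataFrame:
    """Report, per column, how much of a dataset an AI pipeline can actually use."""
    return pd.DataFrame({
        "missing_pct": df.isna().mean().round(2),   # share of empty values
        "distinct_values": df.nunique(),            # columns with one value add little signal
    })

accounts = pd.DataFrame({
    "domain": ["acme.com", "globex.com", None],
    "industry": ["Software", None, None],
    "region": ["EMEA", "EMEA", "EMEA"],
})
print(usability_report(accounts))
# High missing_pct or a single distinct value are early signals that part of
# the dataset will be ignored or misinterpreted once it reaches the model.
```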
The challenge, of course, is that most GTM teams can’t pause everything and rebuild their data foundations from scratch. Systems are already live, stakeholders are using the numbers, and AI experimentation is happening in parallel.
That’s why Sandy’s advice is deliberately pragmatic: start with definitions.
“The priority is defining your metrics, how they’re calculated, and what the sources are.”
Before investing further in AI tooling or automation, teams need shared clarity on some fundamentals - not in theory, but in writing, and with agreement across teams.
At a minimum, that means aligning on which metrics matter, how each one is calculated, and which sources feed them.
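In practice, that can be as simple as writing each definition down in a shared, machine-readable form. A hypothetical example, with invented field and stage names:

```python
# The format matters less than the fact that calculation and source are
# explicit and agreed before anything is fed to an AI tool.
METRICS = {
    "win_rate": {
        "definition": "closed-won opportunities as a share of all closed opportunities",
        "calculated_as": "count(stage == 'Closed Won') / count(stage in closed stages)",
        "source": "crm_opportunities",
        "owner": "RevOps",
    },
}
```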
This work often feels basic, but it’s where most AI initiatives either succeed or quietly stall.
Without clear definitions, AI systems are forced to operate across inconsistent logic and competing interpretations. The result is outputs that feel plausible, but aren’t trusted, and insights that don’t lead to action.
With definitions in place, something important changes. Data becomes legible not just to humans, but to machines, and AI has a stable foundation to build on rather than ambiguity to amplify.
Once those foundations exist, teams can iterate - improving data quality, strengthening integrations, and expanding AI use cases with confidence, rather than trying to solve everything at once.
As AI becomes more embedded across go-to-market systems, Sandy sees the role of RevOps evolving rather than fragmenting.
Despite growing interest in titles like “Revenue Data” or “VP of Revenue Data,” she doesn’t believe data ownership should be split away from RevOps. In her view, the two responsibilities are fundamentally interdependent.
RevOps designs how data moves through the organisation - the systems, integrations, and governance that determine how data is generated and flows between tools.
Revenue data work, meanwhile, focuses on analysing that data and turning it into the insights the business acts on.
Separating these functions risks breaking the feedback loop between structure and interpretation. When the people producing insights are disconnected from how the data is generated, governed, and moved through systems, trust and accuracy suffer, especially once AI is introduced.
AI raises the stakes on this relationship.
As Sandy notes, AI already allows RevOps teams to process far larger volumes of data than before, not just numerical data, but qualitative inputs like call transcripts, notes, and feedback at scale. AI can surface themes, patterns, and trends in minutes that would have taken weeks before.
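As a hedged illustration of what that looks like, here’s a minimal sketch using the OpenAI Python SDK - the model name is a placeholder, and any LLM API would work similarly:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_themes(transcripts: list[str]) -> str:
    """Ask an LLM to surface recurring themes across call transcripts."""
    joined = "\n\n---\n\n".join(transcripts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder - swap in whichever model you use
        messages=[
            {"role": "system", "content": "You summarise sales call transcripts for a RevOps team."},
            {"role": "user", "content": f"List the recurring themes and objections in these calls:\n\n{joined}"},
        ],
    )
    return response.choices[0].message.content
```

Minutes of work for the model - but deciding which of those themes actually matter is still a human call.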
But faster processing doesn’t remove the need for human ownership.
Context still matters. Judgement still matters. Someone still needs to decide which insights matter, how they should be framed, and what action they should lead to. Without that stewardship, AI outputs risk becoming technically impressive but operationally irrelevant.
In an AI-ready future, RevOps becomes the connective tissue between systems, data, and decision-making. Not just maintaining infrastructure, but actively shaping how insights are generated, trusted, and used across the business.
AI-ready data isn’t about perfection. Instead, it’s about whether your data is trusted, understood, and usable enough to drive action.
As Sandy makes clear, AI doesn’t magically fix broken go-to-market foundations. It reflects them back - faster, louder, and without the human smoothing that used to hide inconsistencies.
Teams that struggle with AI aren’t usually failing because the technology is immature. They’re failing because their data was never structured for machines to interpret, definitions were never agreed across teams, and the context behind the numbers was missing.
The teams that succeed take a different approach. They focus less on adding AI everywhere, and more on making their data legible, to machines and to people. They define what matters, document why it matters, and create the conditions for AI to amplify insight rather than ambiguity.
In that world, RevOps isn’t just about running systems. It’s about ensuring the data flowing through them can be trusted, interpreted, and used to drive action.