AI News Bureau
Written by: CDO Magazine
Updated 1:38 PM UTC, February 26, 2026
The rapid release of powerful foundation and multimodal models has pushed AI to the top of the executive agenda. From GPT-class systems to advanced multimodal platforms, the narrative often centers on capability, speed, and competitive advantage. But beneath the excitement lies a harder question for enterprise leaders: are organizations building on a strong enough data foundation to realize that value?
In Part 2 of this three-part video interview series, Patrick McQuillan, a Fortune 500 data governance and responsible AI leader, joins Peter Geovanes, Founder and CEO of Juris Tech Advisors, to challenge the idea that AI success begins with models. He argues that most organizations are overlooking the foundational work required to make AI scalable, governable, and economically viable.
During the conversation, McQuillan reframes AI governance as more than compliance or risk mitigation. He positions it as a driver of operational efficiency, faster time-to-market, and sustainable ROI. From reducing the hidden costs of data discovery to lowering AI failure rates, he explains why organizations must rethink how governance, data lifecycle management, and investment discipline fit into the enterprise AI journey.
Part 1 of the conversation explored what Fortune 500 companies are getting right and wrong when it comes to data governance and responsible AI.
McQuillan argues that most organizations approach AI as a standalone initiative when it is actually a downstream component of a broader data system. “You need to know the available data,” he says, emphasizing that the prerequisites apply even outside the most heavily regulated sectors.
From his perspective, governance begins with practical inventory and traceability, and with understanding how data feeds the business. Data quality, lineage, and cataloging, he says, are the baseline. That means identifying “prioritized data, primary data sets, or core source data” and ensuring that downstream data pipelines are visible and accountable.
“It needs to be identified and traced,” he says, and it needs to be understood that “this isn’t an AI life cycle. This is a data life cycle. AI is a component of it.”
Even when companies believe they operate outside strict oversight, McQuillan warns that regulation follows geography, not corporate comfort. “The EU AI Act, GDPR, those apply if you’re doing business in Europe,” he says. “It does affect you, and they do make examples of individuals at companies.”
The outcome of skipping this work is not only regulatory exposure but also business waste. “Even beyond the responsible side of things, just using AI without a long plan creates low ROI and spending,” he says.
Further, McQuillan describes governance as a system spanning technology, people, and process, especially when moving from data provisioning to model output and continuous monitoring. When organizations align systems, people, and processes and keep humans in the loop, they begin removing the hidden drag that kills momentum.
That drag shows up as teams “spinning out, troubleshooting things, tracking down information, working with product, with legal, with engineering.” With lineage, quality assessment, and lifecycle mapping in place, “from data provisioning all the way to AI, AI output, assessing that output, and then rinse and repeat,” McQuillan argues companies uncover an immediate operational payoff.
The payoff, he notes, is measurable. “People would be shocked by how much AI failure rates could fall and how much ROI could be improved, like fixed as a sustainable return over time,” he adds.
McQuillan also acknowledges the cultural resistance to responsible AI. “We all come from a space where the word responsible sometimes gives a bit of an ick,” he says.
In his experience, many leaders interpret “responsible” as an abstract trust gesture, nice to say but hard to justify. “A lot of companies see the word ‘responsible’ like this intangible way of adding to the trust value in their company,” he says. His approach is to change the framing by placing “responsible” within a broader governance umbrella that the business can recognize as value-generating.
Within that umbrella, responsibility is concrete: “Responsible is making sure your GenAI isn’t hallucinating” and “making sure that it’s not biased, that there’s no toxicity, and that there’s consistency.” McQuillan notes that it is essential for certain use cases, companies, and laws and regulations.
But when trying to persuade the C-suite, he shifts the conversation from ethics as a constraint to governance as throughput and investment discipline. McQuillan says the most effective leadership conversations start with operational reality, not ideology. When leaders walk through the timeline honestly, he says patterns emerge across divisions and teams.
He says he has never seen a company that does not waste significant time trying to discover the data they need to work with. He adds that organizations also struggle with steep learning curves, overinvesting in some areas, underinvesting in others, and frequent churn across platforms, vendors, and short-term teams or contractors.
McQuillan also calls out the pre-development spending that quietly balloons budgets. “There’s so much consulting happening around AI, there’s so much money spent before the actual AI is being developed,” he says, “and it doesn’t have to be that way.”
For him, well-governed pipelines create value chiefly through operational improvement. He says that well-governed AI and data flowing through the pipeline generate significant value for the company, primarily through EBITDA improvement rather than direct revenue.
McQuillan’s definition of governance expands beyond risk avoidance into a mechanism for better investment decisions. With the right visibility, he says, companies can evaluate whether an initiative merits continued spending — or whether it exists primarily to satisfy a narrow internal agenda.
Good governance, he argues, includes understanding “risk rating, the size of the data we’re working with, where the AI’s being deployed, and the amount of revenue it’s supposed to generate.” McQuillan stresses that leadership should be able to say, “If we’re noticing this isn’t worth the investment, why are we doing this?”
Governance done correctly, with a broader view, he adds, makes it possible to “take the bigger picture, shift funding, and move time to better projects.”
McQuillan’s governance checklist spans both data foundations and model behavior. At the foundation level, he points to well-labeled data, clear data lineage, and strong pipelines. At the AI layer, the first step is confirming that AI is actually the right solution before defaulting to generative AI. As he puts it, “You want to make sure the AI is conceptually sound; that is the appropriate solution for what you’re trying to do.” He adds that many use cases simply do not require GenAI.
From there, the focus shifts to the guardrails that separate a prototype from a dependable system. This includes monitoring hallucinations and toxicity, identifying bias, and checking whether the data favors certain protected groups. For traditional machine learning, he highlights “precision, accuracy, and recall.” For generative systems, the priority becomes “consistency of the prompts and the assumptions being made so that output is consistent.”
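For readers less familiar with the traditional machine learning metrics McQuillan names, here is a minimal illustrative sketch of how precision, recall, and accuracy are computed from a model's predictions. The function name and the toy labels are our own for illustration and do not come from the interview.

```python
# Illustrative sketch: precision, accuracy, and recall for a binary
# classifier, computed from true labels and predicted labels (0/1).

def classification_metrics(y_true, y_pred):
    """Return (accuracy, precision, recall) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
    accuracy = (tp + tn) / len(y_true)                 # share of all correct calls
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # how often a "yes" was right
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # how many real "yes" cases were caught
    return accuracy, precision, recall

# Toy example: 8 labeled cases and one model's predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
acc, prec, rec = classification_metrics(y_true, y_pred)
```

In a governance program, thresholds for these metrics would typically be set per use case and monitored over time, which is the "robustness" concern McQuillan raises next.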
He also emphasizes robustness over time. As input data changes, systems must continue to produce comparable results and maintain value as they scale across new cultures and environments. Governance becomes inseparable from global deployment, where performance must remain consistent across regions with different expectations and contexts.
For McQuillan, this is the real meaning of governance. It is not a box-checking exercise or simply a way to avoid risk. “That is well-governed AI,” he says. “It’s not just keeping folks out of trouble. It’s making sure that the end user is actually getting what they need from it to the degree we need at a measurable level.”
CDO Magazine thanks Patrick McQuillan for sharing his insights.