Written by: CDO Magazine Bureau
Updated 2:21 PM UTC, February 20, 2026
In boardrooms across regulated industries, AI has become both an urgent mandate and a mirror, reflecting what organizations have invested in for years and what they have postponed. For Patrick McQuillan, a global data governance and responsible AI leader at a Fortune 500 enterprise, the current moment is less about the novelty of generative models and more about whether enterprises have the discipline to treat data as the strategic asset it has always been.
In this first part of a three-part interview series, McQuillan speaks with Peter Geovanes, Founder and CEO of Juris Tech Advisors, about what Fortune 500 companies are getting right and wrong when it comes to data governance, responsible AI, and the organizational choices that determine whether AI becomes a durable advantage or a cycle of short-lived experiments.
McQuillan’s path into data and AI began far from corporate governance frameworks. He started in diplomacy and international security, where he encountered a persistent problem: decisions that lacked measurable grounding.
“I began my career as an international security consultant with the United Nations and was an economic officer with them for a number of years,” he recalls.
That early work pushed him toward quantification and evidence-based decision-making. “I noticed there was a need for empiricism in the space,” he explains, adding that “evidence-based conversation carried a lot more weight,” which led him to “get heavier into the quantitative side of things.”
From there, he moved into private-sector economics, work that deepened his exposure to legal and regulatory realities and eventually to the technical side of data and AI.
“I worked as an economist for several years, mainly in antitrust, securities fraud, internet, intellectual property, and quite a bit of work in international arbitration,” he says. That progression, he notes, gave him “a lot of quantitative background” and carried him into data engineering and, ultimately, AI development.
Over time, he led data science divisions across multiple companies and geographies and increasingly focused on governance and responsible AI. “More recently, I’ve been heading data governance and responsible AI for a few different companies, and had my consulting firm for a little while in that space,” he says.
Alongside industry work, he remained anchored in academia. For McQuillan, the value of this range is perspective: seeing the full pipeline from data creation and management to model development, deployment, and oversight.
McQuillan draws a sharp distinction between how governance is treated in lightly regulated sectors versus heavily regulated ones. In many Fortune 500 environments outside healthcare, financial services, energy, or defense, he sees governance implemented with flexibility, often framed as a lever for operational performance rather than a top-tier business priority.
In those less regulated contexts, governance work is frequently tied to optimization and compliance basics, alongside “meeting requirements of certain European laws, PII protections, and things like that.”
The tradeoff, as McQuillan describes it, is pragmatic: some organizations normalize occasional risk events as part of business operations. “They’re aware of the probability that there might be some occasional breaches and it might just be a small cost of doing business, and they can move forward,” he says.
But in heavily regulated companies, governance stops being a “nice-to-have” optimization effort and becomes inseparable from business execution. “It becomes a broader business question with the AI space,” he explains.
Even as governance maturity varies, McQuillan sees a broader uplift happening across the market, especially in how organizations treat metadata and cataloging as operational infrastructure, not compliance paperwork.
He emphasizes that the scope is expanding beyond privacy and personally identifiable information (PII). “It goes well beyond PII now,” he says, pointing to catalogs that create “automated triggers” and to “connecting the dots between how data is actually fueling the development of new products, capabilities, and analytics.”
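The interview doesn’t detail what those automated triggers look like in practice, but the underlying idea is easy to sketch. Below is a minimal, hypothetical illustration in Python: each catalog entry records an owner and a freshness SLA, and a scan over the catalog flags datasets that have gone stale. The classes and field names (CatalogEntry, freshness_sla_days, and so on) are assumptions made for this example, not any specific catalog product’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical catalog entry: fields are illustrative, not a real product's schema.
@dataclass
class CatalogEntry:
    name: str
    owner: str
    last_refreshed: datetime
    freshness_sla_days: int  # how stale the data may get before action is needed

def stale_datasets(catalog: list[CatalogEntry], now: datetime) -> list[CatalogEntry]:
    """Return entries whose data has outlived its freshness SLA."""
    return [
        e for e in catalog
        if now - e.last_refreshed > timedelta(days=e.freshness_sla_days)
    ]

if __name__ == "__main__":
    now = datetime(2026, 2, 20)
    catalog = [
        CatalogEntry("claims_history", "data-eng", datetime(2026, 2, 18), 7),
        CatalogEntry("pricing_model_inputs", "analytics", datetime(2025, 12, 1), 30),
    ]
    for entry in stale_datasets(catalog, now):
        # In a real deployment this might open a ticket or notify the owner;
        # here the trigger is just printed.
        print(f"TRIGGER: {entry.name} is past its SLA; notify {entry.owner}")
```

In practice, a trigger like this is what connects catalog metadata to the downstream products McQuillan describes: the catalog stops being passive documentation and starts driving operational action.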
However, McQuillan is careful to frame governance as enabling multiple outcomes, not just AI. “It doesn’t all just pipe into AI,” he says. “It can go into dashboarding and performance optimization. It can go into a lot of work streams.”
The cost of weak governance shows up in familiar ways: teams can’t find data, requirements arrive late in the process, and launches stall when compliance realities collide with product timelines. Without governance, McQuillan argues, organizations “ultimately suffer from higher cost basis,” with downstream consequences that “impact the bottom line.”
McQuillan sees a clear step-change in executive urgency since generative AI (GenAI) became mainstream. “There’s been a rapid adoption, particularly since the advent of GenAI and the type of generative and agentic technologies that a lot of C-suites are taking on,” he says.
But he also describes a common leadership gap: many executives feel pressure to become “AI-enabled” without a clear definition of what that means or how to build it sustainably. “There’s very much a well-understood need across all companies to become AI-enabled in some way,” he says. “But the problem is a lot of folks don’t necessarily know how to define that.”
In the absence of clarity, organizations often fall into scattershot experimentation. What concerns McQuillan the most is how the pace of the “race” shapes priorities. He doesn’t dismiss near-term wins and acknowledges their weight, but argues that the missing piece is organized strategic thinking — especially the architectural and infrastructural work needed to support durable outcomes.
In his view, inconsistent prioritization can actively erase progress. He describes a churn cycle where new initiatives replace recent work before foundations are set. The prescription is not to stop innovating, but to make foundational consistency non-negotiable.
When asked whether the long-running mantra “data is the new oil” still holds in the era of large language models and agentic workflows, McQuillan is direct. “It holds true now more than ever,” he says.
He acknowledges why attention drifts: “It’s natural for people to gravitate toward things that are shiny,” and “AI in and of itself is an absolutely magnificent space.”
But in his framing, the allure of AI can distort what organizations emphasize. He calls out a tendency to over-focus on the logic and the “AI layer” while under-investing in the integrity and freshness of the data feeding it. “AI is just data filtered through logic,” he says.
And the risk does not stop at launch. Once a system is live, performance can degrade quietly, especially if monitoring is weak and data inputs become stale. “If something goes live and is commercialized six months later, it could drift,” he says. “It could be doing something totally different than what it was intended to do because there’s no monitoring.”
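McQuillan doesn’t prescribe a specific monitoring technique, but one common way to detect the silent degradation he describes is the population stability index (PSI), which compares the distribution a model saw at launch with what it sees in production. The sketch below is a minimal illustration of that idea under synthetic data, not a description of any particular company’s tooling; the thresholds noted in the docstring are an industry rule of thumb, not something from the interview.

```python
import numpy as np

def population_stability_index(baseline, live, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and live data.

    Rule of thumb (industry convention): PSI < 0.1 is stable,
    0.1-0.25 is a moderate shift, and > 0.25 suggests significant drift.
    """
    # Bin edges come from the baseline so both samples use the same grid.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    live_frac = np.histogram(live, edges)[0] / len(live)
    # A small floor avoids division by zero in empty bins.
    eps = 1e-6
    base_frac = np.clip(base_frac, eps, None)
    live_frac = np.clip(live_frac, eps, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at launch
live_scores = rng.normal(0.4, 1.2, 10_000)   # six months later, inputs have shifted
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")
```

Run on a schedule against production inputs, a check like this is the kind of monitoring whose absence, in McQuillan’s telling, lets a live system quietly become “something totally different than what it was intended to do.”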
McQuillan brings it back to a broader business truth: data underpins everything, not only AI products. “It’s not just being filtered through AI,” he says. “An income sheet is data, and there’s a lot of data that sits underneath those numbers that the business needs to use to optimize itself.”
In that context, he argues that organizations often misplace attention, treating the model mechanics as the prized asset rather than the underlying resource. “People over-index on the logic that AI is filtering that data through to produce an outcome and not necessarily caring so much about the data itself,” he says.
In closing, McQuillan offers a metaphor that reframes responsible AI as a quality and stewardship problem at the source. “It’s almost like saying you care about the goblet you drink from, rather than worrying if the water’s been spoiled,” he says.
CDO Magazine thanks Patrick McQuillan for sharing his insights.