Opinion & Analysis
Written by: Ben Ganzfried | Senior Director, Data Platform & Governance, Hungryroot
Updated 3:00 PM UTC, February 26, 2026

Most people do not begin their careers in data by watching probabilities determine whether someone they love will live. When my older brother was diagnosed with leukemia at twenty-six, our family entered a world governed by lab results and clinical data that we had no choice but to trust. During that time, I learned a lesson that would follow me through every role I’ve held since: sound decisions depend on solid information, and solid information depends on reliable foundations.
I encountered this lesson again as a cancer researcher at the Harvard School of Public Health. The challenge was not algorithms, but fragmented and inconsistent data. Hundreds of labs had generated ovarian cancer gene expression datasets, yet the lack of standardization made it nearly impossible to reliably identify biomarkers tied to survival. Without shared definitions and reproducible foundations, the promise of personalized treatment remained largely out of reach.
Years later, I had similar experiences in the business world. At Wayfair, rapid growth laid bare how the absence of shared definitions and a single source of truth eroded trust across teams, and also taught me how to fix it at scale. Even earlier, working in analytics with the New England Patriots, I learned that the true lesson of the Moneyball era was not about sophisticated models, but about the integrity of the data, consistent definitions, and shared understanding of success.
Over time, it became clear that these were not isolated lessons from different industries. They were manifestations of the same underlying dynamic: as complexity grows, informal coordination breaks down.
What once relied on shared context and human judgment must eventually be made explicit, owned, and governed. When it is not, organizations often adapt predictably by creating new roles.
If you are not immersed in data, analytics, or AI, today’s organizational landscape can feel chaotic. Over the past decade, organizations have introduced Chief Data Officers, Chief Analytics Officers, Chief AI Officers, and various amalgams of all three, alongside the evolution of roles such as BI engineers, analytics engineers, data engineers, and machine learning engineers.
What is unfolding is not random. It reflects a simple organizational reality: intelligence scales faster than accountability.
This pattern long predates today’s AI. In the 1970s, database pioneer Charles Bachman described databases as systems of inquiry: repositories of organizational memory designed to support decision-making. When organizations were small, informal coordination and human judgment compensated for ambiguity. Intelligence existed, and accountability was good enough.
As organizations grew, scale fractured that informal trust. Bill Inmon’s work on data warehousing improved integration but introduced new challenges, ones I saw firsthand at Wayfair and that remain commonplace at large enterprises. Definitions drifted, copies proliferated, and confidence in shared meaning eroded. Data was managed, but rarely owned end-to-end as a strategic asset.
Over time, organizations layered on increasingly sophisticated analytical techniques. Machine learning, and later deep learning, expanded what systems could predict and automate, but they did not change the underlying dependency: models are only as reliable as the data, definitions, and assumptions beneath them.
The financial crisis of 2008 made the consequences unavoidable. Regulators demanded transparency and lineage, forcing organizations to confront how little understanding and accountability they had over their data.
Andrew Ross Sorkin’s Too Big to Fail captured this vividly: critical decisions were made without a coherent view of the underlying data. Exposure, ownership, and meaning surfaced only after systems failed. The Chief Data Officer role emerged in response, focused on compliance and risk mitigation.
That intervention solved an immediate problem, but it left the deeper one intact. Regulatory accountability was necessarily backward-looking and static. It ensured reporting accuracy, not operational coherence. Organizations could satisfy regulators while still relying on people to manually reconcile ambiguity inside everyday workflows.
For a time, that compromise worked. Intelligence informed decisions but humans remained the final gatekeepers.
That arrangement persisted not because organizations misunderstood the problem, but because fully resolving it was uncomfortable. Making foundations explicit forces decisions about ownership, authority, and tradeoffs that cut across silos. It requires naming who is accountable when definitions conflict, whose metrics prevail, and who bears responsibility when decisions go wrong. In many cases, it proves easier to tolerate ambiguity and rely on human judgment than to confront the organizational consequences of clarity. New roles, committees, and tools often served as substitutes for those harder conversations, allowing accountability to be deferred rather than resolved.
Traditional analytics and earlier machine learning systems operated largely around workflows. Humans interpreted results and compensated for inconsistency through judgment and institutional memory. Modern AI systems, particularly generative models, operate inside workflows. They act on definitions and trigger decisions automatically. Ambiguity that people once absorbed became operational risk overnight.
By 2025, foundations that had been good enough for years were exposed as liabilities. Weak definitions, brittle pipelines, unclear ownership, and fragmented governance became immediate constraints the moment AI touched real-world decisions.
The proliferation of data and AI roles is not confusion about technology. It is evidence of organizations postponing decisions about ownership, meaning, and accountability.
As AI moves deeper into products, operations, and decisions, that postponement will become increasingly costly. The organizations that fall behind will not do so because they lack AI investments, but because they waited too long to decide who is truly accountable for the intelligence they had already put into motion.
About the Author:
Ben Ganzfried is a data leader who helps organizations modernize their analytics infrastructure and unlock business value through scalable data platforms. He has served as a board advisor to multiple companies on data strategy, and has built data infrastructure and models and led data teams at Hungryroot (current role), Anaconda, Wayfair, PwC, and the New England Patriots.
Ganzfried holds an MBA from Carnegie Mellon and an AB from Harvard. His work has been featured by Google Cloud, ESPN, and Oxford University Press, and he speaks and writes regularly on data strategy, enablement, and AI adoption.