AI News Bureau

GE HealthCare AI Chief on Why AI Must Fade Into the Background — and What It Takes to Get There

Written by: CDO Magazine Bureau

Updated 2:00 PM UTC, February 18, 2026

GE HealthCare operates at the center of modern medicine, supporting more than one billion patient interactions annually through its imaging systems, ultrasound, patient monitoring, and digital solutions portfolio. Since its spin-off as a standalone public company in 2023, GE HealthCare has sharpened its focus on precision care, investing heavily in AI-enabled medical devices and clinical workflows. The company has amassed more than 115 FDA-authorized AI-enabled devices, the highest in the industry, positioning it as a leader in applied, regulated AI at scale.

That scale is not theoretical. From MRI acceleration technologies such as AIR Recon DL, which significantly reduce scan times while improving image quality, to AI-powered workflow orchestration tools in radiology and maternal care, GE HealthCare is embedding intelligence directly into the operational backbone of hospitals. As healthcare systems grapple with workforce shortages, aging populations, and mounting cost pressures, AI is shifting from pilot projects to essential infrastructure.

In this first part of a two-part conversation, Parminder Bhatia, Chief AI Officer at GE HealthCare, speaks with Cindi Howson, Chief Data Strategy Officer at ThoughtSpot, about what this inflection point truly means. Moving beyond assistive AI and isolated generative tools, Bhatia outlines the transition toward agentic AI systems that coordinate complex clinical workflows, operate within bounded autonomy, and scale safely through layered guardrails. At the heart of the discussion is a defining question for healthcare’s next phase: how do we move from AI that assists to AI that orchestrates, without compromising trust, safety, or accountability?

Edited Excerpts

Q: You’re working at the forefront of AI in one of the most critical industries, healthcare. We’ve seen a clear shift from traditional, assistive machine learning models to more agentic systems. What changes are you seeing on the ground?

We are at an inflection point for AI in healthcare. Healthcare systems are moving beyond experiments and beginning to treat AI as essential infrastructure. It’s becoming embedded into the systems that keep hospitals running every day, and we are already seeing that shift in real deployments across multiple areas.

Agentic AI, in particular, has come to the forefront over the last year and a half. What’s changing now is how it’s being applied. Instead of jumping straight into full automation, agentic AI systems are starting to make an impact by coordinating both routine and complex tasks in healthcare. From there, they move toward structured decision-making and orchestration.

That orchestration is the real differentiation. Over the last two years, generative AI has largely been treated as a single assistant responding to a prompt. Healthcare doesn’t work that way. It is delivered by teams and complex systems that must coordinate across people, workflows, and technologies. That’s the model we are building toward. Think of agents as coordinators rather than replacements.

A good example is imaging backlog management. This occurs when scan volumes and exam complexities outpace radiology capacity, leaving a significant number of studies unread and delaying downstream care.

Now imagine an orchestration where an agent looks into the worklist, applies clinical prioritization rules, routes cases to available radiologists based on expertise, monitors aging exams, notifies when thresholds are exceeded, and escalates issues as the backlog grows.

Each of these steps is simple on its own, but when orchestrated together, they remove significant operational friction. That’s the shift underway, from AI that summarizes or assists to AI that coordinates tasks across systems and teams. That transition from assistance to orchestration will define the next phase of agentic AI in healthcare.
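The orchestration loop described above can be sketched in code. This is a hypothetical illustration of the pattern, not a GE HealthCare system: the `Study` fields, the aging threshold, and the routing logic are all assumptions introduced for the example.

```python
# Illustrative sketch of worklist orchestration: prioritize, route, monitor, escalate.
# All names and thresholds are hypothetical, not an actual product API.
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    priority: int        # lower = more urgent, per clinical prioritization rules
    age_hours: float     # time since the exam was acquired
    specialty: str       # expertise required to read the study

AGING_THRESHOLD_HOURS = 24  # assumed escalation threshold

def orchestrate(worklist, radiologists, notify, escalate):
    """Coordinate the routine backlog steps the text describes."""
    # 1. Apply clinical prioritization rules (urgency first, then exam age).
    queue = sorted(worklist, key=lambda s: (s.priority, -s.age_hours))
    for study in queue:
        # 2. Route to an available radiologist with matching expertise.
        match = next((r for r in radiologists
                      if r["specialty"] == study.specialty and r["available"]), None)
        if match:
            match["assigned"].append(study.study_id)
        # 3. Monitor aging exams; escalate once the threshold is exceeded.
        elif study.age_hours > AGING_THRESHOLD_HOURS:
            escalate(study)      # backlog growing: hand off to a human coordinator
        else:
            notify(study)        # flag for the next routing pass
```

Each branch is trivial in isolation; the value, as the interview notes, comes from chaining them so that no study silently ages out of the queue.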

Q: If we look at the broader patient journey, insurers often have richer data, while providers have historically struggled due to legacy systems and underinvestment. Do you see that changing now?

It is going to change. When you look across these pillars, you start to see opportunities emerging in multiple areas.

A lot of the AI we’ve been building at GE HealthCare is focused not just on improving patient care, but also on streamlining hospital operations. For the third year running, we have the highest number of FDA-authorized AI-enabled devices. The question we constantly ask is: how do we use AI to improve care while also improving operational efficiency?

A good example is AIR Recon DL, which helps create higher-quality MRI scans while reducing scan time by up to 50%. The implications are significant.

Imagine a hospital that could previously perform three scans an hour. Now it can perform six scans in the same duration, with the same workforce. That improves throughput, reduces backlogs, and improves access to care.

It also changes access for patients who previously couldn’t tolerate long scan times. Patients who struggled to remain in an MRI for 40 or 50 minutes can now complete scans in a much shorter time. That fundamentally changes who can receive care.

This is where AI starts to impact the bottom line for hospitals, improve operations, and deliver better care and access for patients at the same time.

Q: GE HealthCare now has more than 115 FDA-authorized AI-enabled devices, which is remarkable. A few years ago, regulatory approvals moved much more slowly. Are you seeing a convergence of faster innovation and a recognition that AI can enable better preventive care?

The core question is how we scale responsibly. Healthcare innovation requires guardrails. When we talk about agentic AI and autonomy, autonomy never means the absence of oversight. The way we think about it is bounded, accountable autonomy, with humans firmly in the loop. Autonomy is deliberately scoped, and accountability always remains with clinicians.

One way to think about this is through a layered guardrail approach across four areas.

First, we define a bounded action space with explicit human-in-the-loop checkpoints. Every agent is clearly scoped for what it is allowed to do and where it must stop. An agent may flag early signs of deterioration or route a case for urgent review, but confirmation and clinical action always require human approval.

Second, we implement technical guardrails. With the rapid evolution of large language models, it’s critical to filter inputs for relevance and safety, and to validate outputs through hallucination checks, safety checks, and consistency checks before anything reaches a clinician.

Third, before anything goes live, we rely heavily on red teaming in sandbox environments. Building the technology may take 10-15% of the effort; as much as 70% goes into validation. That includes multi-site validation, depending on the use case, to ensure systems are ready for real-world care. We actively try to break our own systems long before a patient is involved.

Finally, guardrails don’t end at deployment. These systems must be continuously monitored in production for performance, drift, and unintended behavior, with adjustments made as needed. Responsibility is ongoing.

The path forward is scale, but not unchecked autonomy. It’s progressive autonomy, where systems earn trust through evidence, constrained scope, human oversight, and measurable outcomes. That’s how we innovate safely and at scale in healthcare.
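The layered-guardrail idea above could be sketched as a simple pipeline: a bounded action space, input filtering, output validation, and a human-approval gate. This is a minimal illustration under assumed names and checks, not the actual implementation.

```python
# Hedged sketch of layered guardrails around an agent call.
# ALLOWED_ACTIONS, the keyword filter, and the check logic are all illustrative.

ALLOWED_ACTIONS = {"flag_deterioration", "route_for_review"}  # bounded action space

def input_guardrail(prompt: str) -> bool:
    """Filter inputs for relevance and safety (placeholder keyword check)."""
    out_of_scope = {"prescribe", "diagnose"}
    return not any(word in prompt.lower() for word in out_of_scope)

def output_guardrails(output: dict) -> bool:
    """Validate outputs (consistency and safety checks) before a clinician sees them."""
    return (output.get("action") in ALLOWED_ACTIONS
            and 0.0 <= output.get("confidence", -1.0) <= 1.0)

def run_agent(prompt: str, agent, clinician_approves) -> str:
    if not input_guardrail(prompt):
        return "rejected: input out of scope"
    output = agent(prompt)
    if not output_guardrails(output):
        return "rejected: failed safety checks"
    # Human-in-the-loop checkpoint: clinical action requires explicit approval.
    return "executed" if clinician_approves(output) else "held for review"
```

The key design point mirrors the interview: the agent can propose only actions inside its scoped set, and nothing executes without the human checkpoint at the end.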

Q: Healthcare systems are under pressure from aging populations, workforce shortages, and the need for preventive care. How are patients responding to this increased use of AI?

There are two perspectives to consider. On the patient side, people are ready for the outcomes of AI, even if they’re cautious about the mechanism. A useful analogy is navigation. Most people don’t need to understand how GPS works. They care about clarity, confidence, and fewer surprises.

Patient agency works the same way. True patient agency isn’t about patients managing AI tools. It’s about AI removing friction so patients feel more informed, supported, and in control of their care journey.

That means clearer explanations, fewer delays, and better coordination across appointments. Ultimately, it comes down to trust. Patients need to know their data is protected, AI is used responsibly, and clinicians remain accountable. When AI is done right, it fades into the background.

AIR Recon DL, for example, has already been used in more than 50 million scans globally. We don’t even talk about it as AI anymore. It’s simply an enabler that reduces scan times from 30 minutes to 10 or 15 minutes and helps reduce backlogs.

We’re seeing a similar impact in maternal care. One of our newer offerings, CareIntellect for Perinatal, helps streamline labor and delivery operations by providing a longitudinal view of data from multiple devices at the point of care. Clinicians feel more informed, and mothers and families feel more connected throughout the process.

That’s what patient agency looks like. Patients don’t feel like they’re interacting with technology. They feel like the system is working around them.

CDO Magazine appreciates Parminder Bhatia for sharing his insights with our global community.
