AI News Bureau

How AI Is Enhancing Decisions Across Financial Services, Truist AI and Data Architect Explains

Written by: CDO Magazine

Updated 11:27 AM UTC, April 3, 2026

Truist Financial Corporation, one of the largest financial institutions in the United States, was formed through the 2019 merger of BB&T and SunTrust. Today, it serves millions of customers across retail, commercial, and wealth segments, operating in one of the most tightly regulated industries. With a strong regional footprint and growing digital ambitions, Truist represents the kind of complex, legacy-heavy enterprise where AI must prove its value under real operational constraints, not just controlled pilots.

In this second part of the conversation, Sanjay Sankolli, an architect in AI and data at Truist, speaks with Karan Jain, Founder and CEO of NayaOne, about where AI is actually delivering measurable impact today. While Part 1 explored why AI initiatives often stall, this discussion focuses on where momentum is building and what’s enabling early success.

AI is driving augmentation

Despite the hype around autonomous AI, Sankolli is clear about where real progress is happening today: “The wins are mostly in augmentation and workflow acceleration, not autonomous decision-making.”

Across the enterprise, AI is not replacing decision-makers. Instead, it is enhancing human decisions and speeding up workflows across key functions.

Breaking it down across the banking value chain:

  1. Front Office
  • Customer service deflection and augmentation
  • Predictive servicing and retention
  • Underwriting support
  2. Middle Office
  • Fraud detection refinements
  • KYC and AML case handling
  • Claims processing and triage
  3. Back Office
  • Document intelligence and unstructured data extraction
  • Structuring previously inaccessible data for downstream use

Sankolli highlights a consistent pattern across all of these: “You’re looking at decision augmentation and workflow acceleration, and there are significant gains here.”
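
To make that pattern concrete, here is a minimal sketch of decision augmentation in a middle-office fraud queue. The thresholds, field names, and routing labels are illustrative assumptions, not a description of Truist’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class FraudAlert:
    alert_id: str
    model_score: float  # 0.0 (benign) to 1.0 (fraudulent), from an upstream model
    amount: float       # transaction amount in USD

def triage(alert: FraudAlert) -> str:
    """Route an alert; the model informs the analyst, it does not replace them."""
    if alert.model_score < 0.05 and alert.amount < 100:
        return "auto_close"              # workflow acceleration: clear the noise
    if alert.model_score > 0.95:
        return "priority_human_review"   # augmentation: a person still decides
    return "standard_human_review"

# An ambiguous alert lands in a human queue with the score attached.
print(triage(FraudAlert("A-1029", model_score=0.62, amount=4250.00)))
```

The gains come from both ends: obvious cases stop consuming analyst time, while everything ambiguous or high-stakes reaches a human faster and with more context.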

From RPA to agentic workflows

Many of these processes were previously automated using RPA and rule-based systems. What is changing now is the intelligence layer on top of automation.

“Driving AI technologies in there and creating agentic workflows, where now you can understand the organizational context, augment the decisions, and, based on that, plan and act on it, to an extent, with human-in-the-loop validation,” explains Sankolli.

This shift introduces a few important capabilities:

  • Context awareness: AI systems can interpret organizational data more holistically
  • Decision support: Not just executing tasks, but guiding outcomes
  • Adaptive workflows: Systems that can plan and act, not just follow rules

The result is not full automation, but more effective human-machine collaboration.
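
As one illustration of that collaboration, the sketch below mocks up the context/plan/validate/act loop Sankolli describes. All of the functions (`retrieve_context`, `plan_actions`, the `approve` callback) are hypothetical stand-ins, not real Truist or vendor APIs.

```python
from typing import Callable

# Hypothetical stand-ins for real services; none of these are Truist systems.
def retrieve_context(case_id: str) -> dict:
    """Context awareness: pull what the organization knows about the case."""
    return {"case_id": case_id, "segment": "retail", "open_disputes": 1}

def plan_actions(context: dict) -> list[str]:
    """Decision support: an LLM would propose steps here; this stub is fixed."""
    return ["verify_identity", "waive_late_fee", "send_confirmation"]

def run_workflow(case_id: str, approve: Callable[[list[str]], bool]) -> None:
    context = retrieve_context(case_id)
    plan = plan_actions(context)
    if approve(plan):                     # human-in-the-loop validation
        for step in plan:                 # act only after a person signs off
            print(f"executing: {step}")
    else:
        print("plan rejected; case stays with its human owner")

# A reviewer (simulated here) approves the proposed plan before anything runs.
run_workflow("CASE-7741", approve=lambda plan: True)
```

The important property is that nothing executes until the approval gate passes: autonomy is scoped, not assumed.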

Unlocking value from unstructured data

One of the biggest opportunities lies in tackling unstructured data, which dominates enterprise environments: “Most of your information is buried in these documents, either as images or unstructured data.”

As Sankolli elaborates, AI is enabling:

  • Extraction of data from documents and images
  • Conversion into structured, usable formats
  • Improved accessibility for downstream analytics and decision-making

This is particularly impactful in banking, where documentation underpins everything from compliance to customer onboarding.
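
A minimal sketch of that extraction step is shown below, using regular expressions over an invented dispute letter. In a production pipeline, OCR and an LLM or document-intelligence model would sit behind the same interface; the structured output shape is the point.

```python
import json
import re

# An invented, unstructured dispute letter; field names below are illustrative.
LETTER = """
Re: Account 4471-2209
Customer: Jane Doe
I am writing to dispute a charge of $1,250.00 dated 2026-01-14.
"""

def extract_fields(doc: str) -> dict:
    """Convert unstructured text into a structured, machine-usable record."""
    return {
        "account":  re.search(r"Account\s+([\d-]+)", doc).group(1),
        "customer": re.search(r"Customer:\s+(.+)", doc).group(1).strip(),
        "amount":   float(re.search(r"\$([\d,]+\.\d{2})", doc)
                          .group(1).replace(",", "")),
        "date":     re.search(r"dated\s+(\d{4}-\d{2}-\d{2})", doc).group(1),
    }

# Once structured, the record is usable by downstream analytics and case systems.
print(json.dumps(extract_fields(LETTER), indent=2))
```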

Developer productivity as a force multiplier

While many AI discussions focus on business use cases, Sankolli points to a horizontal enabler that often gets overlooked — improved developer experience (DevX) and developer productivity: “It’s a horizontal enabler across an enterprise, and I can guarantee you the wins there are pretty significant.”

Improving DevX is accelerating:

  • Time to build and deploy AI-enabled solutions
  • Experimentation across teams
  • Reusability of components and workflows

This creates a compounding effect across the organization, enabling faster scaling of AI initiatives.

Bottom line first

When it comes to measurable impact, the current wave of AI investment is still largely focused on efficiency: “Most of the AI initiatives right now are driven by bottom-line efficiency gains.”

This is shaped by:

  • Regulatory pressure
  • Cost optimization mandates
  • The need to build internal trust in AI systems

Sankolli emphasizes that trust is the gating factor: “We build organizational muscle that’s built to trust the outputs, but there’s a lot that goes into creating the trusted output.”

Only once that trust is established does AI begin to influence revenue growth: “Once you get that right, you will see it being leveraged for the top line growth as well.”

The real constraint: Fragmented data architectures

When asked about what limits AI at scale, Sankolli doesn’t point to algorithms or tools. He points to infrastructure shaped by history: “These data and platform architectures are heavily shaped by decades of M&As, regulatory patchwork, project-by-project decisions.”

The result is a deeply fragmented landscape:

  • Siloed systems tied to past acquisitions
  • Isolated “automation islands”
  • Data locked within legacy core systems

“All of your organizational context and intelligence is buried in these islands of automation,” says Sankolli.

Data fragmentation is the core bottleneck

At the heart of the problem is how data is treated: “The data is actually treated there as a project asset rather than an enterprise asset.”

This leads to:

  • Limited data accessibility
  • Inconsistent data quality
  • Inability to generate high-fidelity insights

“The high fidelity that’s needed to actually drive a significant amount of high-quality decision augmentation is just a dream if you’re dealing with this infrastructure,” Sankolli notes.

Batch systems, real-time expectations, and compute gaps

Modern AI workloads demand capabilities that many legacy systems weren’t designed for:

  • Batch-optimized systems struggling with real-time needs
  • Immature cloud strategies
  • Compute constraints for AI workloads
  • Vendor lock-in limiting flexibility

“Your existing data center infrastructure doesn’t actually put you in a very competitive position to leverage AI,” he says.

The path forward: Simplify, standardize, rationalize

Sankolli outlines a clear path forward: “It’s key to simplify, standardize, and rationalize, and then get your data to be AI-ready.”

This means:

  • Breaking down silos
  • Treating data as an enterprise asset
  • Rationalizing platforms and tooling
  • Building architectures aligned to AI workloads

“That’s when you can harness significant value out of AI,” Sankolli concludes.
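
One way to read “treating data as an enterprise asset” in code is a data contract enforced at the system boundary rather than per-project conventions. The sketch below is hypothetical: the fields, event vocabulary, and validation rules are assumptions for illustration, not a Truist schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class CustomerEvent:
    event_id: str
    customer_id: str
    event_type: str        # standardized vocabulary, not per-project codes
    occurred_at: datetime  # canonical, timezone-aware timestamps
    source_system: str     # lineage: which platform produced the record

    def __post_init__(self):
        # Quality gates at the boundary, so every downstream consumer,
        # including AI workloads, inherits the same guarantees.
        allowed = {"login", "payment", "dispute_opened", "dispute_closed"}
        if self.event_type not in allowed:
            raise ValueError(f"unknown event_type: {self.event_type!r}")
        if self.occurred_at.tzinfo is None:
            raise ValueError("occurred_at must be timezone-aware")

# Valid records pass; a project-specific code like "PMT_OK" is rejected early.
evt = CustomerEvent("E-1", "C-9", "payment",
                    datetime.now(timezone.utc), "core-ledger")
print(evt.event_type, evt.occurred_at.tzinfo)
```

A contract like this is what makes data “AI-ready” in practice: consumers stop re-deriving meaning project by project, and the same guarantees hold everywhere.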

Reference: Why Enterprise AI Adoption Is Slower Than the Technology

CDO Magazine appreciates Sanjay Sankolli for sharing his insights with our global community.
