Data Privacy & Ethics
Written by: CDO Magazine Bureau
Updated 4:16 PM UTC, Wed February 26, 2025
The United Nations International Computing Centre (UNICC) is at the forefront of digital transformation, helping UN agencies and global humanitarian efforts harness data and AI and strengthen cybersecurity. As a trusted technology partner, UNICC keeps operations running efficiently and securely, directly contributing to the UN’s mission of sustainability and global collaboration.
In a discussion with Informatica’s Amy McNee, UNICC’s Chief Data & AI Services Officer, Anusha Dandapani, breaks down what responsible AI truly means. She explains how UNICC crafts purpose-driven AI strategies, builds strong governance frameworks, and aligns AI initiatives with real-world needs.
From a humanitarian perspective, Dandapani shares how UNICC develops ethical AI systems that don’t just meet regulations but also create real, meaningful impact.
Edited Excerpts:
AI is everywhere these days—you can’t go anywhere without hearing about it. From your perspective, how crucial is it to focus on purpose-driven use cases in an AI strategy? What key factors do you consider when deciding how to apply AI effectively?
I would first focus on identifying a purpose-driven use case because that is absolutely critical to the success of an AI strategy. It ensures that AI initiatives are aligned with the organization and its overall goals. A purpose-driven use case should have clear business objectives and well-defined outcomes that we hope to achieve with the AI solution or the broader strategy.
At the end of the day, the biggest challenge most organizations face in driving their AI strategy is data-related. Regardless of the use case, the real pain point—or opportunity—often stems from a lack of high-quality data. Having the right taxonomy within datasets and ensuring strong data quality are essential for achieving meaningful outcomes.
We believe that starting with purpose-driven use cases is the most effective approach, making it a critical component of our AI strategy.
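To make the data quality and taxonomy point above concrete, the sketch below shows one way such checks might look in practice. It is illustrative only: the field names, the sector taxonomy, and the checks themselves are assumptions made for this example, not UNICC’s actual tooling or schema.

```python
# Minimal sketch of basic data quality checks: completeness of required
# fields and conformance to an agreed taxonomy. All names here are
# hypothetical, chosen only to illustrate the idea.

from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical controlled vocabulary (taxonomy) for a humanitarian dataset.
SECTOR_TAXONOMY = {"health", "education", "food_security", "shelter", "protection"}

# Assumed required fields for each record.
REQUIRED_FIELDS = ("record_id", "sector", "country", "value")


@dataclass
class QualityReport:
    total_records: int = 0
    incomplete_records: int = 0
    taxonomy_violations: list = field(default_factory=list)


def check_records(records: list[dict]) -> QualityReport:
    """Run completeness and taxonomy-conformance checks over a batch of records."""
    report = QualityReport(total_records=len(records))
    for rec in records:
        # Completeness: every required field must be present and non-empty.
        if any(rec.get(f) in (None, "") for f in REQUIRED_FIELDS):
            report.incomplete_records += 1
        # Taxonomy: the 'sector' value must come from the controlled vocabulary.
        sector = rec.get("sector")
        if sector is not None and sector not in SECTOR_TAXONOMY:
            report.taxonomy_violations.append((rec.get("record_id"), sector))
    return report


if __name__ == "__main__":
    sample = [
        {"record_id": "r1", "sector": "health", "country": "KE", "value": 120},
        {"record_id": "r2", "sector": "nutrition", "country": "SS", "value": 45},  # not in taxonomy
        {"record_id": "r3", "sector": "education", "country": "", "value": 30},    # missing country
    ]
    print(check_records(sample))
```

In this sketch, records that fail either check would be flagged for review before feeding an AI use case, which is the kind of quality gate the interview describes as a prerequisite for meaningful outcomes.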
Governance is crucial for scaling AI initiatives. What advice would you give someone building a sustainable and scalable governance program? What are the key factors they should focus on?
AI governance is crucial for organizations that are further along in their data maturity journey. From our humanitarian perspective, we see it as an essential element for ensuring that we start with responsible and ethical AI practices.
We begin with a minimum-risk mindset, ensuring that the AI systems we build are not only effective but also grounded in a strong foundation of data governance. This foundation is key to the success of our AI solutions. Establishing clear policies around data quality, data privacy, and security is at the forefront of everything we do.
Our governance approach is always based on established frameworks, particularly those guiding international AI governance. As AI technologies evolve, so must our governance frameworks. The effectiveness of our approaches depends on our ability to adapt alongside technological advancements—without this, we wouldn’t be able to keep pace with the rapid changes happening today.
We follow five overarching guiding principles, and our governance framework consists of roughly seven functions. The five principles are:
Governing AI inclusively, ensuring it benefits all.
Prioritizing data governance in everything we develop for the public interest.
Approaching data with a “data commons” mindset to maximize its usefulness.
Emphasizing multi-stakeholder collaboration, recognizing that an organization like ours must adhere to numerous human rights laws, policies, and international community guidelines.
Aligning AI governance with the Sustainable Development Goals (SDGs).
Ensuring these principles are embedded within our governance framework is our starting point. Ultimately, having a strong foundation in data governance is a prerequisite for implementing effective AI governance.
CDO Magazine thanks Anusha Dandapani for sharing her insights with our global community.