
Microsoft-YourStory Webinar: Building trust in AI governance for India's digital future
Industry leaders from Microsoft and SAP chart pathways for responsible AI adoption while balancing innovation with accountability in government services.
As artificial intelligence transforms public service delivery across the globe, one critical question emerges: How do governments build citizen trust while harnessing AI's transformative potential? A recent YourStory webinar, in partnership with Microsoft and SAP, brought together leading voices to tackle this challenge, revealing how different nations are approaching AI governance with vastly different strategies.
The session, titled "AI in Governance: Building Trust, Ensuring Accountability, and Shaping Policy in the Digital Age", featured Sandeep Aurora, Group Director and Head of Public Policy at Microsoft India & South Asia, and Dr Lovneesh Chanana, Asia Pacific Head of Government Affairs at SAP. Moderated by Gunjan Patel, Director of AI Skills for Social Impact at Microsoft India, the discussion unpacked regulatory trends, policy design best practices, and collaborative strategies for government stakeholders.
India's pragmatic path forward
India's approach to AI governance stands out for pursuing innovation and risk mitigation simultaneously. Unlike regions that implemented stringent regulations early on, India is focusing on what Aurora calls "AI diffusion": ensuring widespread understanding and adoption alongside safety measures.
"India has taken a very enabling and pragmatic approach towards AI development, as opposed to very stringent regulations some regions might have taken that might be very premature," Aurora explained. He highlighted India's AI mission, which balances the "safe and trusted" pillar with equal emphasis on how to make AI work for people through skilling, awareness, and adoption incentives.
The government's establishment of an AI Safety Institute demonstrates this balanced approach, addressing technical safety concerns while avoiding heavy-handed regulation that could stifle innovation. This strategy has already shown practical results, such as the Reserve Bank of India's AI committee for the banking sector, which encourages AI adoption while establishing specific safeguards for high-impact applications like credit scoring.
Global governance: A tale of different strategies
Chanana used a compelling cricket analogy to illustrate how different regions approach AI governance. "India's AI governance is agile, inclusive, and developmental; it's more like T20 cricket than a test match," he observed, contrasting this with the European Union's more disciplined, structured approach embodied in the EU AI Act.
Each region has developed distinct strategies: Japan focuses on precision and voluntary compliance while maintaining international alignment; Korea has introduced a bold, risk-based AI act that applies even to foreign players; Singapore has positioned AI as a necessity rather than a luxury, becoming the first country to develop a model governance framework for generative AI.
"While Europe has taught us the value of structure and consistency, Asia Pacific is teaching us the power of adaptability and local relevance," Chanana noted. "India reminds us that governance has to be scalable, inclusive, and responsive to rapid growth."
Transparency vs explainability: The critical distinction
One of the session's key insights centered on the difference between transparency and explainability in AI systems. Chanana used a GPS analogy: "Transparency is knowing what map the GPS is using. Explainability is understanding why it chose a particular route."
This distinction becomes crucial for government AI systems serving citizens. When systems make decisions without clear reasoning, public trust erodes. However, when citizens understand both the data being used and the logic behind decisions, confidence increases.
The experts emphasized that achieving true transparency requires embedding these principles into design and development processes, not just policy documents. This includes establishing multi-layered governance structures with ethical AI advisory committees, implementing human oversight panels for critical decisions, and maintaining audit trails for accountability.
The unlearning challenge in capacity building
Perhaps the most striking insight involved rethinking capacity building for the AI era. While traditional approaches focus on learning new skills, Chanana argued that government officials need to embrace "unlearning and relearning".
"When it comes to AI, we need to talk about unlearning and relearning, rather than just learning and relearning," he explained. "I need to learn that many tasks will be done by a machine. It will give me a draft, and I will check for accuracy, bias, domain experience, and safety."
Microsoft's partnership with Mission Karmayogi, which has trained 1.5 million civil servants, demonstrates the scale of this challenge. Aurora emphasized focusing on outcomes rather than credentials: "The skilling piece - is it being used? Are people feeling encouraged to actually use it, versus it just being a tick mark and a certificate?"
Inclusivity as a strategic imperative
Both speakers identified inclusivity as fundamental to India's AI success. Chanana positioned fairness and non-discrimination as "not just a moral imperative but a strategic necessity for India's inclusive growth," highlighting that 65% of India's population is rural and faces persistent digital access challenges.
Aurora emphasized accessibility principles, noting the need for AI systems supporting voice-based communication, multiple languages, and representative datasets. "How do we make sure that the benefits of AI are distributed equally and equitably?" he asked, pointing to the importance of ensuring government schemes reach marginalized communities through technology.
Continuous monitoring and adaptive governance
Looking toward implementation, the experts outlined sophisticated monitoring requirements for government AI systems. Chanana compared AI governance to monitoring a self-driving car: "You don't just check the engine. You check speed, fuel, alerts, everything."
Essential monitoring components include real-time performance dashboards, anomaly detection engines, comprehensive audit trails, human oversight panels, and crucially, feedback loops from citizens themselves. "This will democratize the oversight part," said Chanana, emphasizing how public input can drive continuous system improvements.
Trust through iterative improvement
The discussion concluded with both experts emphasizing that building trust requires treating innovation and compliance as complementary rather than competing priorities. As Aurora said, "Innovation and compliance with law go hand in hand. They can't be treated as two contrasting things."
The key to maintaining public trust when issues arise lies in robust feedback channels and grievance mechanisms. "We have to make sure that we keep that process flowing, that no feedback or grievance is left unaddressed," Aurora stressed.
The path forward
The webinar revealed that successful AI governance isn't about choosing between innovation and safety, but about building adaptive systems that can evolve with the technology. India's approach to balancing enabling policies with appropriate safeguards while prioritizing inclusivity and transparency offers a potential model for other nations navigating similar challenges.
As governments worldwide grapple with AI integration, the session's insights suggest that the most effective governance frameworks will be those that remain agile, inclusive, and responsive to both technological capabilities and citizen needs.

