Engineering GenAI for impact: Industry leaders on moving from POCs to production
At TechSparks 2025, industry leaders revealed how enterprises are building scalable, secure GenAI systems that deliver real business value.
Here's a stat that might surprise you: EY processes 1.8 billion AI prompts daily from 86% of its 300,000 employees. But scale isn't the full story.
At TechSparks 2025, YourStory's flagship tech event, three industry leaders from EY GDS, Publicis Sapient, and Catalyst Brands revealed what's actually holding enterprises back from GenAI success.
Spoiler: it's not the models.
The panel, titled ‘GenAI engineering for the enterprise: From hype to high-impact systems’, featured Kaushik Das, Managing Director of Catalyst Brands India; Raghuveer Subodha from EY GDS; and Rakesh Ravuri, CTO of Publicis Sapient. Moderated by Madanmohan Rao of YourStory, it unpacked the real challenges of moving from proof-of-concepts to production-grade AI systems.
The reality of AI at scale
The impressive numbers from EY tell only part of the story. With 300,000 employees using large language models daily and 1.2 million lines of code written by Copilot (documented as a live case study on GitHub), the organisation has clearly achieved scale. But as Raghuveer Subodha emphasised, "A good POC, a good demo, shows capability. What is there in production truly represents engineering at scale and governance at scale."
This distinction between demonstration and deployment is critical. Subodha outlined three pillars for success: engineering at scale, governance at scale, and selecting high-value use cases. "Most of the time, the use cases seem like AI, but it is, end of the day, it is like an ERP on a booster dose," he cautioned. The real challenge lies in defining metrics and KPIs that drive measurable value rather than letting countless experiments dilute program focus.
The infrastructure gap nobody talks about
Ravuri brought a sobering perspective to the conversation: Enterprises can't leverage AI's full potential without first modernising their systems. A recent Publicis Sapient survey revealed 1.3 trillion lines of legacy code, some dating back 40 to 50 years, running critical systems from insurance claims to digital banking cores.
"You can build great cars, but if you don't have roads, how will you drive?" Ravuri asked. This infrastructure challenge has become so pressing that legacy modernization is now Publicis Sapient's fastest-growing business. The company is using AI itself to understand decades-old COBOL and CORBA code, identify security vulnerabilities, and enable modern feature development.
The pace of technological change compounds the problem. "What you developed last year is legacy already," Ravuri noted. This reality demands what he called an "industrial scale modernisation pipeline" to constantly upgrade applications.
Building for six brands under one roof
Das offered insights on managing AI infrastructure across six international brands under Catalyst Brands. His approach centres on three distinct layers: a foundation layer for shared, high-cost components; an intermediate adoption layer for customisation; and an application layer where brand distinctions come alive.
"Before we go and use AI, getting ready for AI becomes very important," Das explained. This means distributing skills and bandwidth strategically across all three layers to avoid over-indexing on one at the expense of another.
Security, governance, and cost as equals
When it comes to managing scale while maintaining security and compliance, Subodha suggested thinking of these elements as a triangle with equal sides. "You can't have any one side which is not equal," he said. This means architecting for compliance from the beginning, choosing the right models for specific tasks to control costs, and embedding red teaming as part of the CI/CD process.
The evolving leadership question
The panel tackled the thorny question of who owns AI at the leadership level. Rather than creating endless new titles, Ravuri argued that existing leaders must evolve their focus. "If you are a Chief Technology Officer, now AI is the technology. So you need to play that role," he stated.
What matters more than titles is responsibility. Das emphasised the importance of having someone, whether through a formal role or a cross-functional core group, to anchor the multiple tracks involved in becoming a GenAI-enabled organisation.
For talent development, the consensus pointed toward building a "full-stack AI" learning approach, training existing employees across the entire spectrum from infrastructure to user interface rather than seeking narrow specialists.
The path forward
As enterprises move from experimentation to execution, the panel's message was clear: Success requires strengthening foundations, implementing robust governance, and maintaining focus on high-value outcomes. The challenge isn't just about adopting AI; it's about building the infrastructure, processes, and capabilities to make AI work at scale.
In Kaushik Das's words, organisations must centralise resources while decentralising knowledge and learning across the entire spectrum. Only then can the promise of GenAI translate into production-grade impact.