
TiDB, powered by PingCAP
Why real time, resilience, and readiness are the new baselines for scalable data systems
In an AI-driven world, real-time responsiveness, resilience, and readiness define modern data infrastructure. In this TiDB-YourStory roundtable, supported by AWS, top engineering leaders share how they’re reimagining data systems to enable intelligent, secure, and scalable business growth.
Global generative AI spending is projected to hit $644 billion in 2025, marking a 76% increase from 2024, according to Gartner. This explosive growth is fueling an urgent need for robust, scalable data infrastructure, as digital businesses across sectors race to meet the demands of real-time operations and intelligent systems. At a recent roundtable hosted by TiDB in collaboration with YourStory and supported by AWS, engineering leaders from leading companies came together to unpack the challenges and best practices of building resilient, responsive, and intelligent data systems at scale.
Supreet Singh, Senior Director - Engineering, PhysicsWallah; Mukesh Solanki, Head Infra & DevOps, KreditBee; Ramesh Kumar Saxena, Head Data Platforms and Engineering, Pharmeasy; Ganesh Rajagopalan, Associate Director of Engineering, Zeta; Vaibhav Magon, VP Engineering, Acko Insurance; Bansilal Haudakari, Director of Engineering, Paytm; Satyavathi Divadri, Deputy CISO & AI, Freshworks; Vivek Srikantan, CTO, Spocto X (A Yubi Company); Indrani Goswami, Director of Analytics, Nykaa; Vikrant Saini, Head and VP of Engineering, Digit Insurance; Shivendra Singh, Senior Director - Engineering, JioHotStar; and Satyendra Yadav, Associate Director of Engineering, MakeMyTrip participated in the discussion.
The data dilemma: real-time vs reliable
One of the most common trade-offs surfaced during the discussion was the tension between real-time performance and data consistency. For businesses operating with high volumes of structured data, relying on a single pipeline can be risky. The solution? Embracing redundant pipelines for resilience. However, this introduces operational complexity, especially when managing replication lag in systems like MySQL, which can have a ripple effect on application performance.
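One way teams keep replication lag from rippling into application performance is to gate read traffic on replica health. The sketch below illustrates the idea using the field names from MySQL's `SHOW REPLICA STATUS` output; the helper name and the lag threshold are illustrative assumptions, not any panelist's implementation.

```python
# Minimal sketch (not production code) of lag-aware read routing.
# Field names mirror MySQL's SHOW REPLICA STATUS; threshold is an assumed SLO.

LAG_THRESHOLD_SECONDS = 5  # tune per workload and consistency needs

def choose_read_endpoint(replica_status: dict) -> str:
    """Route reads to the replica only when it is healthy and caught up."""
    lag = replica_status.get("Seconds_Behind_Source")  # None if replication stopped
    io_running = replica_status.get("Replica_IO_Running") == "Yes"
    sql_running = replica_status.get("Replica_SQL_Running") == "Yes"
    if lag is None or not (io_running and sql_running):
        return "primary"  # replication broken: fail back to the primary
    if lag > LAG_THRESHOLD_SECONDS:
        return "primary"  # too stale for read-your-writes use cases
    return "replica"

# Example: a replica lagging 12s behind gets bypassed.
status = {"Seconds_Behind_Source": 12,
          "Replica_IO_Running": "Yes",
          "Replica_SQL_Running": "Yes"}
print(choose_read_endpoint(status))  # -> primary
```

The same check can feed a redundant pipeline: when the replica falls behind, traffic shifts rather than serving stale reads.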
As datasets grow to terabyte scale, many teams are evaluating newer database engines such as PostgreSQL and TiDB to better handle high-concurrency workloads and support real-time analytics. But panelists concurred that transitioning from legacy systems isn’t always simple. Business stakeholders often prioritize stability, creating a tension between technical ambition and organizational risk appetite.
For many fast-scaling companies, the architectural choices made in the early days continue to shape their data infrastructure today. But as products diversify and user expectations evolve, tech teams are revisiting foundational decisions. The push is toward supporting real-time, subscription-based experiences at scale without compromising on cost-efficiency.
What’s driving this urgency? The need for instant responsiveness has moved from a differentiator to a baseline expectation.
Scaling for variety and velocity
In sectors like healthtech and ecommerce, where data variety is immense, leaders shared the struggle of integrating diverse data sources and transforming them into actionable insights – often within milliseconds.
The challenges are particularly acute in OLAP (analytical) environments, where datasets vary in structure and require heavy transformation before analysis. The latency this introduces can make insights obsolete by the time they surface. Meanwhile, OLTP (transactional) systems are being pushed to embed analytical capabilities directly, adding further strain on real-time performance.
Rather than chase a mythical one-size-fits-all system, teams are strategically combining specialized tools to serve different layers of the stack, balancing scale, responsiveness, and cost.
The panelists also emphasized that security and observability are integral to infrastructure design. Unauthorized access attempts, credential misuse, and anomalies in behavior are difficult to detect without intelligent alerting and monitoring systems. In high-scale environments, these incidents must be surfaced in real time to ensure quick, decisive action.
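The kind of intelligent alerting described above often starts with simple statistical baselining. The sketch below flags a metric (here, failed logins per minute) that deviates sharply from its recent history; the function name, window, and z-score cutoff are assumptions for illustration, not a specific vendor's API.

```python
# Illustrative anomaly flagging via z-score against a rolling baseline.
import statistics

def is_anomalous(history: list[float], latest: float, z_cutoff: float = 3.0) -> bool:
    """Flag `latest` if it deviates more than z_cutoff sigmas from history."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_cutoff

baseline = [40, 42, 38, 41, 39, 43, 40, 42]  # failed logins per minute
print(is_anomalous(baseline, 41))   # -> False: normal traffic
print(is_anomalous(baseline, 400))  # -> True: credential-stuffing-style spike
```

In production, a check like this would run per metric and feed an alert-correlation layer rather than page on every spike in isolation.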
Moreover, as AI and automation are increasingly used to detect and mitigate threats, the focus is shifting to correlating alerts across systems and reducing noise so that root causes can be diagnosed faster.
The case for real-time compliance
In the fintech space, infrastructure decisions are often shaped by regulatory mandates and compliance risks. A minor transaction anomaly can lead to significant penalties. This makes strongly consistent, ACID (Atomicity, Consistency, Isolation, and Durability)-compliant systems a non-negotiable requirement.
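What ACID buys in practice is that a multi-step money movement either fully commits or fully rolls back. The toy example below demonstrates atomicity with Python's stdlib `sqlite3` (standing in purely for illustration; fintech systems would use their production RDBMS, and the schema and helper are hypothetical).

```python
# Demonstrating the "A" in ACID: both ledger rows update, or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (account TEXT, balance INTEGER)")
conn.execute("INSERT INTO ledger VALUES ('A', 100), ('B', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # transaction: commits on success, rolls back on exception
            conn.execute("UPDATE ledger SET balance = balance - ? WHERE account = ?",
                         (amount, src))
            (bal,) = conn.execute("SELECT balance FROM ledger WHERE account = ?",
                                  (src,)).fetchone()
            if bal < 0:
                raise ValueError("insufficient funds")  # aborts the whole transfer
            conn.execute("UPDATE ledger SET balance = balance + ? WHERE account = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

print(transfer(conn, "A", "B", 60))  # -> True: both rows updated atomically
print(transfer(conn, "A", "B", 60))  # -> False: debit rolled back, balances intact
```

The second transfer fails mid-flight, and the earlier debit is undone automatically; no partial state ever becomes visible.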
To address reporting lags, some organizations are moving toward real-time analytics stacks – writing transactions directly to streaming platforms like Kafka and processing them within minutes. This shift is not just about agility; it’s about risk mitigation and operational trust.
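Writing transactions to a streaming platform mostly comes down to publishing a well-keyed, well-versioned event per transaction. The sketch below shows one plausible event shape; the topic, field names, and helper are assumptions, and the actual produce call (e.g. a Kafka client's produce method) is elided so the snippet stays self-contained.

```python
# Hedged sketch of a transaction event payload for a streaming pipeline.
import json
import time
import uuid

def build_txn_event(account_id: str, amount_minor: int, currency: str) -> bytes:
    """Serialize a transaction as a compact JSON payload for a stream producer."""
    event = {
        "event_id": str(uuid.uuid4()),     # idempotency key for downstream dedupe
        "ts_ms": int(time.time() * 1000),  # producer-side timestamp
        "account_id": account_id,
        "amount_minor": amount_minor,      # minor units avoid float rounding
        "currency": currency,
    }
    return json.dumps(event, separators=(",", ":")).encode("utf-8")

payload = build_txn_event("acct-42", 15999, "INR")
# producer.produce("transactions", key=b"acct-42", value=payload)  # hypothetical
print(json.loads(payload)["amount_minor"])  # -> 15999
```

Keying events by account keeps per-account ordering within a partition, which matters when downstream consumers rebuild balances or run compliance checks within minutes of the write.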
As database ecosystems evolve, another pressing challenge the leaders highlighted is managing fragmentation across systems. Many organizations run multiple database engines tailored for different needs, but this creates overhead in terms of maintenance, training, and data governance. The long-term vision is to consolidate without sacrificing performance, aiming for a singular, efficient architecture that supports varied workloads seamlessly.
Observability, identity, and infrastructure resilience
In industries like media and travel, workload spikes and identity-related services during peak events make performance optimization a constant challenge. At the same time, teams are investing in LLM-powered monitoring platforms to automate root cause analysis. The vision is clear: detect cascading failures early, trace their origin instantly, and respond before the user feels the impact.
Across ecommerce and digital services, personalization is emerging as a critical driver of customer engagement. Achieving this requires stitching together data across siloed systems – advertising, inventory, supply chain, and fulfillment – in near real time. Teams are working to ensure that infrastructure choices support fast, context-aware decisions, delivering tailored experiences as users move through dynamic journeys.
Looking ahead
A second wave of transformation is underway: one where data infrastructure must support complexity, flexibility, and intelligence. As businesses scale, the challenge is no longer just storing or querying data but acting on it instantly, securely, and meaningfully.
The takeaways from this roundtable are clear:
- Real-time is the new default, whether for transactions, insights, or security.
- Resilience requires redundancy, but it must be balanced with cost and complexity.
- No single tool will do it all, but thoughtful architecture can bring harmony.
- Security, observability, and compliance are now core pillars of infrastructure.
- And most importantly, business and tech teams must align so that innovation is both sustainable and scalable.
In today’s landscape, data is the real differentiator, and how you manage it will define how far and fast your business can grow.
Modernize your MySQL workloads to keep up with scale, speed, and complexity—without the legacy trade-offs.
