
Dell Technologies
From cloud dependence to desktop power: Dell's GB10 takes center stage at Bengaluru AI mixer
At a recent YourStory mixer in Bengaluru, industry leaders explored how desktop AI infrastructure is reshaping the developer experience.
YourStory recently hosted 'AI at Work: The Builder's Edition' at Conrad Bengaluru, bringing together Dell Technologies and India's AI, deeptech, and SaaS builders for an evening centered on one question: How is infrastructure shaping the way teams build, scale, and deploy AI?
The event came days after Dell's global launch of the Pro Max with GB10, making it a timely opportunity to explore what desktop supercomputing means for India's rapidly expanding developer community. With AI domain registrations surging from 248,000 in 2023 to 645,000 in 2025, the conversation around accessible, powerful development tools has never been more relevant.
Rethinking the infrastructure equation
"Right now, the builders are focusing on operationalizing AI, which involves more than just compute," said Vivekanandh N R, Distinguished Member of Technical Staff at Dell Technologies. "The infrastructure costs would be lingering on their mind: whether to go for cloud-based infrastructure, or move towards the edge?"
The discussion highlighted how compute has transitioned from being an IT cost center to becoming the foundation for producing intelligence. This shift changes how teams budget, plan, and think about their technical architecture.
One key theme that emerged was balancing three critical factors: performance, cost, and scalability. "When it comes to the builders, it is the time to market that is the key," Vivekanandh explained. "The more you slow down and delay the offering to go-to-market, somebody else is already there competing with you."
The GB10's technical proposition
The Dell Pro Max with GB10 brings Grace Blackwell architecture, previously exclusive to data centers, into desktop environments. The system delivers up to 1 petaflop of FP4 performance with 128 GB of unified memory, enabling developers to work with models up to 200 billion parameters.
During the event, attendees saw live demonstrations of GB10 running complex AI workflows. One demo showcased a multi-agent system that personalized news feeds by collecting content based on user interests, processing it through a vector database, generating summaries, and delivering customized newsletters.
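The flow of that demo can be sketched in a few lines of Python. Everything below is an illustrative stand-in, not the demo's actual code: the bag-of-words "embedding", the in-memory vector store, and the first-sentence "summarizer" are toy placeholders for the neural models and vector database the real system would use.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the real demo would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory stand-in for the vector database in the demo."""
    def __init__(self):
        self.items = []

    def add(self, doc):
        self.items.append((embed(doc), doc))

    def query(self, interest, k=1):
        # Rank stored documents by similarity to the user's interest.
        ranked = sorted(self.items, key=lambda it: cosine(it[0], embed(interest)), reverse=True)
        return [doc for _, doc in ranked[:k]]

def summarize(doc):
    # Stand-in for the LLM summarization agent: keep the first sentence only.
    return doc.split(". ")[0] + "."

def build_newsletter(store, interests):
    # Collect -> retrieve -> summarize -> assemble, one section per interest.
    sections = []
    for interest in interests:
        for doc in store.query(interest):
            sections.append(f"[{interest}] {summarize(doc)}")
    return "\n".join(sections)

store = VectorStore()
store.add("GPU prices fell this quarter. Analysts expect further drops.")
store.add("A new open-weight model tops benchmarks. Researchers are impressed.")
newsletter = build_newsletter(store, ["gpu", "model"])
print(newsletter)
```

The shape is the point: each stage (collection, retrieval, summarization, delivery) is an independent agent-like step, which is what lets the whole pipeline run on a single local machine.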
Another demonstration illustrated podcast generation using IBM Granite models. The system took a PDF document through a complete MLOps pipeline to create an audio podcast in approximately 14 minutes, emphasizing both capability and thermal efficiency. The entire workflow ran exclusively on the GB10 without internet connectivity.
The device measures 150mm by 150mm by 50.5mm and weighs 1.2kg, making it genuinely portable for a system of this computational class. The compact form factor was a recurring point of interest throughout the evening, with several attendees noting the contrast between the device's size and its capabilities.
From experimentation to production
The GB10 ships with DGX OS and includes pre-installed tools like CUDA, JupyterLab, Docker, and AI Workbench. This out-of-the-box readiness addresses a pain point that came up repeatedly during the networking portion of the evening: the time lost to environment setup and configuration.
For teams requiring more computational power, two GB10 systems can be stacked to act as a single node, accommodating models up to 400 billion parameters. This stacking capability provides a scalability path that allows organizations to start with a single unit and expand as needs grow.
The architecture's 273 GB/s memory bandwidth and unified memory design eliminate complexity often associated with distributed computing solutions, providing developers with a single, cohesive memory space.
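The figures quoted above support some back-of-envelope arithmetic. Assuming FP4 weights take roughly half a byte per parameter, a 200-billion-parameter model occupies about 100 GB, which is why it fits in 128 GB of unified memory, and two stacked units (256 GB) cover 400 billion parameters. The same numbers give a rough ceiling on generation speed, since bandwidth-bound decoding must stream the full weight set per token:

```python
def fp4_model_bytes(params):
    """Approximate weight footprint at FP4: ~0.5 bytes per parameter."""
    return params * 0.5

GB = 1e9

# 200B-parameter model at FP4: ~100 GB of weights, within 128 GB unified memory.
single = fp4_model_bytes(200e9) / GB
assert single <= 128

# Two stacked units (256 GB combined) cover a 400B-parameter model (~200 GB).
stacked = fp4_model_bytes(400e9) / GB
assert stacked <= 256

# Bandwidth-bound decode ceiling: each generated token streams the full
# weight set, so 273 GB/s over ~100 GB gives roughly 2-3 tokens/s as an
# upper bound (real throughput depends on batching, caching, and kernels).
tokens_per_s = 273 / single
print(f"{single:.0f} GB weights, ~{tokens_per_s:.1f} tok/s upper bound")
```

These are order-of-magnitude estimates under stated assumptions, not benchmark results; quantization overheads, KV caches, and activations all eat into the headline memory figure.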
The cloud versus on-premise debate
One of the discussions during the Q&A session centered on infrastructure deployment decisions. Should teams invest in on-premise hardware like the GB10, or continue leveraging cloud resources?
The answer, as explored during the session, depends heavily on use case and organizational context. Cloud computing operates on a pay-per-use model where costs can be difficult to predict, particularly when running extensive experiments. Every inference call incurs charges for both input and output tokens, and those charges accumulate quickly during the experimentation phase.
GB10 represents a different economic model: a known upfront investment with predictable operating costs. For teams running continuous experiments to measure model accuracy and refine responses, this predictability offers advantages in budgeting and planning.
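The trade-off can be made concrete with a break-even sketch. The numbers below are purely hypothetical placeholders for illustration, not Dell or cloud list prices:

```python
def breakeven_months(hardware_cost, local_monthly_opex, cloud_monthly_spend):
    """Months until a fixed hardware purchase beats ongoing cloud spend.

    Returns None if local running costs meet or exceed the cloud bill,
    in which case the purchase never pays for itself.
    """
    saving = cloud_monthly_spend - local_monthly_opex
    if saving <= 0:
        return None
    return hardware_cost / saving

# Hypothetical figures: a $4,000 device with $50/month in power and upkeep,
# versus $600/month of pay-per-use GPU experimentation in the cloud.
m = breakeven_months(4000, 50, 600)
print(f"break-even after ~{m:.1f} months")
```

The point of the exercise is the shape of the curve, not the specific numbers: the heavier and more continuous the experimentation, the sooner a fixed upfront cost overtakes a metered one.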
The session introduced a three-E framework for evaluating infrastructure decisions: experiment, evaluate, and evolve. Teams should assess how much experimentation they're currently conducting, evaluate whether those experiments are meeting their goals, and then decide whether to evolve their infrastructure approach.
For organizations in regulated industries like banking and healthcare, data sovereignty requirements often make the decision clearer. GB10 enables these organizations to train and deploy advanced AI models entirely within their own environment, maintaining data privacy while delivering performance comparable to cloud solutions.
Industry-specific applications
Throughout the session, the conversation touched on practical applications across various sectors. Several startup founders in attendance described challenges around balancing product development velocity with infrastructure costs, particularly during the experimentation phase, where teams iterate rapidly on models.
Academic researchers expressed interest in the ability to run large models directly on their desks, enabling hypothesis testing and model adaptation without competing for access to shared computing clusters. This immediacy could compress research timelines significantly.
One startup founder raised a question that resonated across the room: when one is already part of cloud provider startup programs with substantial credits, does investing in GB10 make sense? The response highlighted that cloud credits serve production and scaling needs well, but local development hardware accelerates the iteration cycle. Having both options gives teams flexibility to choose the right environment for each phase of their workflow.
Building blocks for India's AI ecosystem
The event highlighted India's evolving position in the global AI landscape. Developer participation in accelerated computing platforms has grown from 60,000 just a few years ago to nearly a million today, reflecting both expanding talent and increasing access to appropriate tools.
The support ecosystem around AI development in India has matured substantially. College incubation centers are encouraging students to launch startups, creating innovation pipelines at the grassroots level. Validated blueprints and reference architectures are readily available, including publicly accessible GitHub repositories.
Dell's AI Factory with NVIDIA, which has supported over 3,000 customers in its first year, provides infrastructure options spanning from fully on-premise deployments to hybrid environments. This flexibility allows organizations to keep sensitive data and training pipelines on-premise while running some inference workloads in the cloud.
The changing developer landscape
With AI writing increasingly large portions of code, developers are evolving from primarily writing syntax to orchestrating systems, understanding which models to apply to specific problems, and architecting solutions that move rapidly from experimentation to production.
This evolution demands different skills and different tools. Developers need the ability to test multiple models quickly, evaluate performance across various architectures, and iterate without waiting for shared resources or battling infrastructure limitations.
Looking forward
As the evening concluded, Clint Fabian Carvallo, Inside Sales Leader for India Small Business at Dell Technologies, emphasized the company's commitment to the builder community. "At Dell, we basically talk about driving human progress," he said. "We put people and your company at the core of our strategy. The progress basically stems from harnessing human ingenuity and combining that with advanced AI and data tools."
The event underscored a broader shift in AI development: infrastructure decisions are no longer purely technical considerations but strategic choices that directly impact innovation velocity. For India's growing community of AI builders, access to powerful, flexible development tools represents a critical enabler.
Industry analysis suggests that by 2030, 98% of PCs will be AI-enabled devices. The Dell Pro Max with GB10 represents an early step in that transition, bringing capabilities that were exclusive to data centers just a few years ago into the environments where developers actually work.
For organizations evaluating their AI development approach, the key question isn't whether to choose cloud or on-premise infrastructure, but rather how to combine both strategically. Dell Pro Max with GB10 provides developers with tools to work efficiently, experiment freely and move from ideas to implementation without artificial constraints.
