Aravind Srinivas: Why On-Device AI Could Kill Centralised Data Centres
Perplexity CEO Aravind Srinivas explains how AI models running locally on user devices could disrupt trillion-dollar investments in centralised data centres.
The biggest disruption to the global data centre industry may not come from faster chips or larger server farms, but from intelligence that never leaves your device. That is the argument put forward by Aravind Srinivas, co-founder and CEO of Perplexity AI, who believes the future of artificial intelligence lies in models that run locally, adapt personally, and remain fully owned by users.
From centralised AI to personal intelligence
According to Srinivas, the traditional model of AI relies heavily on massive, centralised data centres where inference and intelligence workloads are processed at scale. But this model starts to weaken if advanced AI models can be “packed locally on a chip” and run directly on personal devices such as laptops, phones, or wearables.
Once intelligence moves on-device, the need to route every query, task, or workflow to a remote server diminishes. AI becomes decentralised by design, reducing dependency on cloud infrastructure while increasing resilience, privacy, and personalisation.
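To make the decentralised-by-design idea concrete, here is a minimal sketch of how a device might decide whether a query ever reaches a data centre. This is purely illustrative; the names (`Query`, `route`, the parameter threshold) are hypothetical stand-ins, not any real product's API.

```python
# Illustrative sketch only: a hypothetical on-device-first router.
# Queries stay local when a local model can handle them or when they
# touch private data; only heavy workloads fall back to the cloud.
from dataclasses import dataclass

@dataclass
class Query:
    text: str
    needs_heavy_compute: bool      # e.g. long-context reasoning at scale
    contains_private_data: bool

LOCAL_MODEL_PARAMS_B = 8           # hypothetical local model size (billions)

def route(query: Query, local_params_b: int = LOCAL_MODEL_PARAMS_B) -> str:
    """Decide where a query runs under an on-device-first policy."""
    if query.contains_private_data:
        return "local"             # private data never leaves the device
    if query.needs_heavy_compute and local_params_b < 70:
        return "cloud"             # only heavy workloads still hit a server
    return "local"

print(route(Query("summarise my notes", False, True)))   # -> local
print(route(Query("large batch analysis", True, False))) # -> cloud
```

Under a policy like this, the cloud becomes the exception rather than the default, which is the economic reversal the article describes.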
AI that watches, learns, and adapts locally
A key idea Srinivas highlights is test-time training on local systems. In this model, AI does not rely on sending user data back to servers for retraining. Instead, it observes workflows directly on the device and adapts over time, with the user’s consent and without data leaving the machine.
Routine tasks, repeated actions, and personal preferences can be learned organically. Retrieval, reasoning, tool use, and contextual understanding can all happen locally, drawing from data stored on the device itself.
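The adaptation loop described above can be sketched in miniature: an assistant observes actions locally, counts repetitions, and proposes automations once a pattern recurs, with nothing leaving the device. Every name below is a hypothetical illustration, not Perplexity's implementation of test-time training.

```python
# Illustrative sketch only: on-device learning of routine tasks.
# Observations are stored locally; repeated actions become candidates
# for automation. All identifiers here are hypothetical.
from collections import Counter

class LocalAssistant:
    def __init__(self, automation_threshold: int = 3):
        self.action_counts = Counter()     # lives only in device storage
        self.threshold = automation_threshold

    def observe(self, action: str) -> None:
        """Record a user action locally (with the user's consent)."""
        self.action_counts[action] += 1

    def suggested_automations(self) -> list[str]:
        """Actions repeated often enough to be worth automating."""
        return [a for a, n in self.action_counts.items() if n >= self.threshold]

assistant = LocalAssistant()
for _ in range(3):
    assistant.observe("export weekly report to PDF")
assistant.observe("reply to standup thread")

print(assistant.suggested_automations())  # -> ['export weekly report to PDF']
```

Real on-device adaptation would involve updating model weights or retrieval indices rather than simple counts, but the privacy property is the same: the signal never needs to be sent to a server.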
The result is AI that feels less like a generic assistant and more like an extension of the user’s own thinking.
“You own it. It’s your brain.”
Srinivas frames this shift in starkly personal terms. When intelligence lives entirely on your device, he argues, it becomes something you truly own. It is not mediated by remote servers, not monetised through data extraction, and not limited by cloud latency.
Over time, such an AI could automate significant parts of a user’s daily work, reducing repetition and cognitive load. This, in his view, represents a fundamental redefinition of intelligence from a service you access to an asset you possess.
A direct challenge to the data centre economy
The implications for the data centre industry are profound. If high-value AI workloads move to edge devices, the economic logic behind multi-trillion-dollar investments in centralised AI infrastructure begins to erode.
Srinivas questions whether it makes sense to spend hundreds of billions, or even trillions, building global data centres if personal devices can handle a growing share of intelligence workloads independently.
The decentralised AI future
While large-scale models and cloud infrastructure will continue to play a role, especially for training and heavy computation, Srinivas' vision suggests a hybrid future: one where personal intelligence runs locally, privately, and adaptively.
If realised at scale, this shift could redefine not only how AI is built and deployed, but also who ultimately controls it. In that future, intelligence is no longer rented from the cloud. It lives with you, learns with you, and works for you.

