AWS unveils the building blocks for enterprise-ready AI agents at DevSparks Chennai 2025
As the AI landscape shifts from experimentation to execution, AWS is arming enterprises with the tools to build, deploy, and scale autonomous agents with confidence.
Is the process of building and scaling agents a complex endeavour? Or is it manageable with the right strategy and resources at hand? Praveen Jayakumar, Head of AI/ML Solutions Architecture, Amazon Web Services India, dove deep into these questions in a session on building intelligent, production-ready AI agents at DevSparks Chennai.
The session explored how enterprises can build and scale AI agents as the industry moves from pilots to production. Jayakumar outlined the evolution from generative AI assistants to fully autonomous agentic AI systems, discussed the challenges of deploying agents at scale and introduced Amazon Bedrock AgentCore, a platform offering primitives for runtime hosting, secure API access, memory management, identity, browser, and code interpretation, along with deep observability.
The evolution of agents
Jayakumar started by examining the evolution of agentic AI over the last two years. While 2023 was the year of Generative AI pilot projects and proofs of concept across organisations, late 2024 was when these pilots went into production. Fast forward to 2025, which AI pioneer Andrew Ng termed the ‘year of agents’, and autonomous AI agents are now transforming enterprise workflows.
Citing Gartner research, Jayakumar noted that 15% of decisions will be taken by agentic AI by 2028, compared to virtually none in 2024. “Initially, GenAI applications could read your documents, summarise them and answer your questions. However, most of these applications were powerful but single turn. It would do what you asked it to,” said Jayakumar. The next stage of evolution came in the form of GenAI agents, which typically used more reactive frameworks: users would give the agent a problem statement, which it would break down into smaller tasks, identifying the right API to call and executing function-based reasoning. However, these GenAI agents were restricted to a single, narrow purpose.
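The reactive loop described above can be sketched in a few lines. This is a framework-agnostic illustration, not any specific product's API; the planner, tool names, and registry are all hypothetical stand-ins (a real agent would use an LLM to decompose the problem).

```python
# Illustrative sketch of a reactive GenAI agent loop: take a problem
# statement, break it into smaller tasks, identify the right "API"
# (tool) for each task, and execute it. All names are hypothetical.

from typing import Callable

# Hypothetical tool registry mapping task types to callable APIs.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup": lambda query: f"result({query})",
    "summarise": lambda text: f"summary({text})",
}

def plan(problem: str) -> list[tuple[str, str]]:
    """Stand-in planner: a real agent would prompt an LLM to
    decompose the problem into (tool, input) steps."""
    return [("lookup", problem), ("summarise", problem)]

def run_agent(problem: str) -> list[str]:
    """Execute each planned step with the matching tool."""
    outputs = []
    for tool_name, arg in plan(problem):
        tool = TOOLS[tool_name]      # identify the right API to call
        outputs.append(tool(arg))    # execute the function call
    return outputs

print(run_agent("quarterly sales dip"))
# → ['result(quarterly sales dip)', 'summary(quarterly sales dip)']
```

The single-purpose limitation Jayakumar mentions is visible here: the planner only knows a fixed, narrow set of tools.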
Modern agentic AI, said Jayakumar, represents a maturation of enterprise technology, where simple GenAI agents have advanced to multi-step reasoning systems that can manage workflows, APIs and business logic.
The agentic AI systems of today are completely autonomous, capable of understanding the environment they operate in, and can be leveraged for extremely complex use cases. For instance, developers can simply provide a prompt and agents can build an entire application based on these simple specifications. Site Reliability Engineers can leverage agents to look into an organisation’s logs, identify issues and take action to remedy the problem. Jayakumar attributes the development of fully autonomous systems to four factors: a sharp improvement in the reasoning capabilities of large language models (LLMs); robust data and microservices architectures that provide clean, callable APIs; a 100x reduction in infrastructure costs driven by next-gen hosting technologies; and, finally, an expanding ecosystem of tools—from Amazon Q Developer to market offerings like Cursor—that makes it possible to build prototypes in days. “When you start to create an agent, it turns out to be very easy. With the tools that are available, you could probably create your own agent application in a day or two. It's very easy to do a POC with the current set of tools. The real agentic challenge comes when you want to take it into production, when both the number of users and tools start increasing,” Jayakumar shares.
The challenges of scaling to production
Bringing AI agents into production is where most enterprises face hurdles. Hosting agentic applications built with frameworks like LangGraph or CrewAI demands careful management of scalability and session isolation. Memory becomes another critical factor—agents must retain and retrieve past context across conversations, a tricky task in distributed systems.
Authentication and authorization also present additional layers of complexity, as different users and agents require tailored access to APIs. Tool orchestration is another area of friction, where agents must decide when to browse, calculate, or query external systems securely. Above all, observability—the ability to track and interpret every agent action—is essential for trust and performance.
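Two of these challenges, short-term memory and session isolation, can be illustrated with a small sketch: each session keeps its own bounded context window, so one user's conversation history can never leak into another's. The class and method names below are hypothetical, chosen for illustration only.

```python
# Illustrative sketch of session-isolated, short-term agent memory.
# Each session ID gets its own bounded context window; recalling one
# session never exposes another session's turns.

from collections import defaultdict, deque

class SessionMemory:
    def __init__(self, max_turns: int = 5):
        # One bounded deque per session keeps only recent turns.
        self._store = defaultdict(lambda: deque(maxlen=max_turns))

    def remember(self, session_id: str, turn: str) -> None:
        self._store[session_id].append(turn)

    def recall(self, session_id: str) -> list[str]:
        # Returns only this session's context: isolation by key.
        return list(self._store[session_id])

mem = SessionMemory(max_turns=2)
mem.remember("user-a", "asked about invoices")
mem.remember("user-a", "asked about refunds")
mem.remember("user-a", "asked about shipping")  # evicts oldest turn
mem.remember("user-b", "asked about pricing")

print(mem.recall("user-a"))  # → ['asked about refunds', 'asked about shipping']
print(mem.recall("user-b"))  # → ['asked about pricing']
```

In a distributed production system the same guarantees must hold across processes and hosts, which is exactly why managed memory services become attractive.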
To simplify these operational challenges, Jayakumar introduced Amazon Bedrock AgentCore, a comprehensive platform that enables developers to deploy and operate AI agents at scale. Offering purpose-built infrastructure for dynamic agent workloads and powerful tools and controls for real-world deployments, AgentCore is designed to help enterprises build, host, and manage production-grade AI agents efficiently.
Amazon Bedrock AgentCore: A platform for production-ready agents
AgentCore serves as a foundational layer for developers building intelligent agents on AWS. It provides a suite of primitives - managed, modular services - that streamline deployment, security, and monitoring. These include the AgentCore Runtime for hosting agents built with any framework or model, AgentCore Gateway for exposing enterprise APIs through the Model Context Protocol (MCP), and a Browser and Code Interpreter for external searches and computational tasks.
Complementing these are AgentCore Identity for authentication, AgentCore Memory for long- and short-term contextual recall, and AgentCore Observability for detailed telemetry and debugging. Together, these components enable developers to manage agents securely and transparently at scale.
Observability and the road ahead
At the heart of trustworthy AI operations is observability: bringing transparency and deep insights into how autonomous AI agents make decisions, interact, and perform tasks. Through built-in integrations with observability tools and frameworks like Amazon CloudWatch and OpenTelemetry, developers can gain visibility into every decision and API call made by their agents. This transparency ensures not only reliability but also compliance and performance optimization.
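The core idea, recording every tool call as a traceable span with its outcome and duration, can be sketched with the standard library alone. This is a toy illustration in the spirit of OpenTelemetry-style tracing, not AgentCore's or OpenTelemetry's actual API; in production, spans would be exported to a collector rather than an in-memory list.

```python
# Stdlib-only sketch of span-style agent observability: every tool
# call emits a record with its name, status, and duration, so each
# agent action can be tracked and interpreted after the fact.

import functools
import time

TRACE: list[dict] = []  # stand-in for an exporter/collector

def traced(fn):
    """Wrap a tool call so each invocation emits a span record."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            TRACE.append({
                "span": fn.__name__,
                "status": status,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@traced
def query_inventory(sku: str) -> int:
    return 42  # stand-in for a real API call

query_inventory("ABC-123")
print(TRACE[0]["span"], TRACE[0]["status"])  # → query_inventory ok
```

The same wrapper records failures too: an exception leaves the span marked "error" before propagating, which is what makes failed agent actions debuggable.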
According to Jayakumar, AWS’ long-term vision is to become the best environment for building, deploying, and running useful AI agents.
As the industry steps into what many are calling “the year of agents,” platforms like Bedrock AgentCore are setting the stage for a future where autonomy, scalability, and security converge—turning intelligent systems into active collaborators in the enterprise landscape.