The AI opportunity for startups

In this special podcast episode, Jerome Manuel chats with Prime’s in-house AI experts Shripati Acharya and Pankaj Agarwal. They talk about the massive AI opportunity, how it has evolved since the introduction of the first GPT models, and what the future looks like.

Sunday July 14, 2024 , 6 min Read

Shripati Acharya and Pankaj Agarwal, who lead investments in AI (artificial intelligence) at Prime, share the most important elements for today’s entrepreneurs building a startup in India. With OpenAI achieving the Holy Grail of consumer tech companies, amassing ~200M users (and $2B in annual revenue) in less than two years, AI has stormed into mainstream conversation and taken over the psyche of enterprises and average consumers alike.

Investments in the space have gone up massively. In 2023 alone, $50 billion of venture money was invested in AI startups globally, and more than $100 billion has been invested in AI startups since the launch of ChatGPT in 2022.

This podcast provides a comprehensive overview of the foundational technologies (LLMs, GPTs, tokens) driving AI, the impact on startups, and the business models that will create and reshape trillion dollars’ worth of economic value in the process.

The evolution of AI

“Some say AI is the next electricity. Some say AI is the next internet. But, very interestingly, during World War II, Alan Turing, often called the father of artificial intelligence, was working on the basics, the foundations, of this particular technology. It is claimed that he used it to decipher Nazi codes, which helped win the war,” says Manuel.

Acharya narrates, “Back in 2009, there was a database of images called ImageNet which was released by Stanford. The idea here was that you’d have a bunch of images of cats. You need to correctly recognise the images, and so the machine learning algorithms were tested against that.

“There were 14-15 million images in that database, and that was the benchmark. As the algorithms improved, they got better and better at recognising those images compared with a human, and now, of course, the algorithms have soared past the accuracy of humans.”

Agarwal explains, “In 2017, a revolutionary research paper titled ‘Attention is All You Need’, authored by eight scientists working at Google, introduced a new deep learning architecture known as the Transformer, based on the attention mechanisms proposed by Bahdanau et al. in 2014.

“The Transformer is based solely on attention mechanisms, and it has since become the dominant model for machine translation and other natural language processing (NLP) tasks, such as text summarisation, question answering, and natural language inference.”
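The attention mechanism at the heart of the Transformer can be sketched in a few lines. Below is a minimal scaled dot-product attention in NumPy; the shapes and variable names are illustrative, not taken from the podcast:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, as in 'Attention Is All You Need'.

    Q, K, V: (seq_len, d_k) matrices of queries, keys, and values.
    Returns a weighted sum of the values for each query position.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1: an attention distribution
    return weights @ V                   # blend the values by attention weight

# Toy example: 3 tokens with 4-dimensional representations, self-attention
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, Q, Q)
print(out.shape)  # (3, 4)
```

Each output row is a mixture of the value vectors, weighted by how strongly that token "attends" to every other token.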

The landscape of AI startups

“The whole tech stack is getting revisited. Unlike traditional software development, AI models are non-deterministic in nature: two people can ask the model the same question, and it will throw a different output.

“For business, we at Prime think there are three layers where opportunity exists: a model layer, a data layer, and a deployment layer. We have spoken a lot about the model layer; GPT-4 and the GPT family are just one example.

“Think of the above three layers as an inverted pyramid: the middle layer holds the tools (data abstraction, data transformation, and the like), and on top of that sit the applications.

“Obviously, the applications will be the most numerous, and the way to think about it is similar to what happened in the cloud. Right at the bottom, a small number of chip providers supply the machines, the hardware, to the cloud providers. On top of them sit the cloud providers, and we know there are only three or four of them. But there are thousands and thousands of applications.

“From a venture capital lens, the opportunity for startups will be in the application layer. But, at the same time, the layer just below it, which we call the developer tools or tooling layer, will also have a lot of opportunity. It really depends on the DNA of the founders and which area they want to operate in,” summarises Acharya.

Enterprise perspective about AI

Drawing on his deep experience and network, Acharya shares, “Enterprises are justifiably taking a wait-and-see approach on where they’re going to put their bets.

“So they are not going to bet on one model, which means they are unlikely to start hosting their own models and developing applications around them. They are much more comfortable with an architecture that enables them to switch and play with different models.
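That switch-and-play architecture is essentially a thin abstraction over interchangeable model backends. A hypothetical sketch of the pattern (the class names and stubbed backends are illustrative, not anything Prime prescribes):

```python
from abc import ABC, abstractmethod

class ModelBackend(ABC):
    """Common interface so application code never depends on one vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class StubHostedBackend(ModelBackend):
    # In a real system this would call a hosted API; stubbed here for illustration.
    def complete(self, prompt: str) -> str:
        return f"[hosted-model answer to: {prompt}]"

class StubLocalBackend(ModelBackend):
    def complete(self, prompt: str) -> str:
        return f"[local-model answer to: {prompt}]"

class Assistant:
    """The application programs against the interface, not a vendor SDK."""
    def __init__(self, backend: ModelBackend):
        self.backend = backend

    def ask(self, question: str) -> str:
        return self.backend.complete(question)

# Switching models is a one-line change, not an application rewrite
a = Assistant(StubHostedBackend())
b = Assistant(StubLocalBackend())
print(a.ask("What is a transformer?"))
print(b.ask("What is a transformer?"))
```

Because the `Assistant` only sees the interface, an enterprise can swap the backend as models, prices, or privacy requirements change.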

“Enterprises are very concerned about data privacy, specifically exfiltration of data, which is data leaving the enterprise. So, suppose you are a startup and you say, ‘Hey look, I’m going to help you create this new model; we’ll fine-tune a model.’

“Fine-tuning is the process by which the model’s own weights are changed by the data fed into it. That is something enterprises will fundamentally be uncomfortable with, because it means sending data out of the enterprise, which is not going to work.
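In miniature, fine-tuning is just continued gradient descent on new data. A toy one-parameter sketch (all numbers are illustrative) showing why the weights end up encoding the data that was fed in, which is exactly the exfiltration concern:

```python
# Fine-tuning in miniature: nudge a model's weights toward new data.
# A one-parameter linear model trained by gradient descent on a tiny
# "enterprise" dataset.

def fine_tune(w, data, lr=0.1, steps=100):
    """Update weight w so that w * x approximates y for each (x, y) pair."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of the squared error (w*x - y)^2
            w -= lr * grad              # the weight shifts to fit the data
    return w

pretrained_w = 0.0                           # the "base model"
enterprise_data = [(1.0, 2.0), (2.0, 4.0)]   # private data the model will absorb
tuned_w = fine_tune(pretrained_w, enterprise_data)
print(round(tuned_w, 2))  # 2.0: the weights now encode the relationship in the data
```

After training, the dataset's pattern (y = 2x) lives inside the weight itself; at LLM scale, that is why shipping data out for fine-tuning worries enterprises.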

“Another primary area of concern is hallucinations. Hallucination is just a fancy word for incorrect, somewhat random output. It’s as if the model got drunk and started saying stuff it might regret.

“So obviously, from that standpoint, if you have a customer-facing AI product, enterprises will be very careful about deploying it, because customer-facing elements cannot have many errors. The deployments they are most comfortable with, and eager to roll out, are ones with a human in the loop. That’s why we see all these co-pilots getting a lot of adoption.”

The insights in this episode could be your roadmap to a successful career in AI. Don’t miss this opportunity to understand the future of AI and its monumental implications for business and society.

Timestamps:

0:00 - The evolution of artificial intelligence

5:36 - Understanding machine learning and AI

13:20 - Why $100 billion+ has been invested in AI since 2022

27:17 - Which AI startups will get funded?

37:22 - Future of AI applications and workforce


Edited by Swetha Kannan