How three IIT Kharagpur graduates built a platform that recognises Indian native languages
Bengaluru-based Liv.ai uses artificial intelligence and deep neural networks to let users communicate with machines in 10 different Indian regional languages.
Asking Google a quick question about which restaurant serves Mediterranean food in your city is hardly a novel thing now. Giving voice commands to your phone or device and letting it do the job happens without much thought. All of us have heard of or used Google Assistant, Apple’s Siri, or even Amazon’s Alexa. But imagine giving the same commands in Punjabi or even Kannada. Liv.ai makes it possible by using artificial intelligence (AI) and neural networks.
A brainchild of three IIT Kharagpur alumni—Subodh Kumar, Sanjeev Kumar and Kishore Mundra—Liv.ai came into existence in 2015.
Subodh says, “We spent a lot of time together since college, and kept in touch later too. There are so many tasks that we find mundane and there are machine interfaces that are unnatural or unintuitive. With recent advancements in AI, we can create significant impact in both areas.”
The startup’s mission is to free up human minds by automating routine tasks, and to increase human productivity in complex tasks. Another motivation to start Liv.ai was, as Subodh explains, the “inability to communicate with the machines in our own language. We want to make machines more humane so that our communication with them is as natural and stress-free as possible.”
Liv.ai uses speech recognition technology to provide speech application programming interfaces and software development kits (APIs/SDKs) that enable developers to convert speech to text using neural network models.
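In practice, speech-to-text APIs of this kind are usually called by posting an audio file and a language code to an HTTP endpoint and reading back the transcript. The sketch below is purely illustrative: the endpoint URL, parameter names, and response fields are assumptions for a generic speech-to-text service, not Liv.ai’s actual documented interface.

```python
# Illustrative only: the endpoint, credential, request fields and response
# structure below are assumptions for a generic speech-to-text REST API,
# not Liv.ai's documented interface.
import requests

API_URL = "https://api.example-speech.com/v1/recognize"  # hypothetical endpoint
API_KEY = "your-api-key"                                 # hypothetical credential


def transcribe(audio_path: str, language: str = "hi") -> str:
    """Send an audio file and a language code, return the recognised text."""
    with open(audio_path, "rb") as audio_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            data={"language": language},       # e.g. "hi", "kn", "pa"
            files={"audio": audio_file},
        )
    response.raise_for_status()
    return response.json()["transcript"]       # assumed response field


if __name__ == "__main__":
    # Transcribe a Kannada voice query from a local audio file
    print(transcribe("order_query.wav", language="kn"))
```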
Ten Indian languages
The API currently recognises 10 languages: English, Hindi, Bengali, Punjabi, Marathi, Gujarati, Kannada, Tamil, Telugu and Malayalam. Subodh says, “Our system works with most accents, performs remarkably well in noisy environments, and works well with telephonic conversations too.”
Prior to starting Liv.ai, Subodh was an engineer with Microsoft and an investment banker at Citigroup. He has a BTech (CSE) from IIT Kharagpur and an MBA from IIM Bangalore. Sanjeev has worked with Avaak Inc and Qualcomm Inc in San Diego, CA, and holds a PhD in Electrical Engineering from the University of California.
Kishore brings more than 10 years of experience in software product development, holds multiple patents as a researcher with Samsung India, and helped Beceem Communications, acquired by Broadcom in 2011, develop 4G solutions.
Liv.ai is currently a team of 15, most of whom are engineers or scientists. It is in expansion mode, and is looking to hire both researchers and people with a strong product background.
With hundreds of dialects and accents in India, how does the app recognise them and respond accordingly? Subodh says, “We have collected a lot of data from all parts of India and we are constantly improving our tech with feedback, which takes care of the different accents.”
Business model
Liv.ai charges on a per API/SDK call basis. It also works with large channel partners for distribution of the product, and charges are customised on a need basis. The clientele is primarily B2B, but the company serves some B2C customers as well. The service starts from Rs 30 and goes up depending on the need.
Liv.ai has seen traction in multiple domains and industries, the early adopters being in the areas of e-commerce, manufacturing, speech analytics, robotics, and consumer applications.
It has partnered with multiple large companies to take its technology to the masses. Subodh says, “We are getting more than a million API hits per day with strong growth every week. We are also talking to multiple device manufacturers to use our technology in their products.” The company has witnessed 100 percent month-on-month growth.
Having signed up multiple paying clients, Liv.ai is looking to reach profitability soon. It has also raised capital from multiple investors from India and abroad. “Our current priority is to accelerate revenue growth. That said, we might bring on board like-minded investors who can add value to our company in addition to helping accelerate our growth,” adds Subodh.
Future outlook
The Asia-Pacific speech and voice recognition market was valued at $5.15 billion in 2016, and is expected to reach $18.30 billion by 2023, at a CAGR of 19.80 percent during the forecast period.
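As a rough sanity check on those figures, compounding $5.15 billion at 19.80 percent a year over the seven years from 2016 to 2023 does land close to the projected value:

$$ 5.15 \times (1 + 0.198)^{7} \approx 5.15 \times 3.54 \approx 18.2 \ \text{billion USD} $$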
Talking about its differentiator, Subodh says, “We are laser-focused on Indian languages and dialects. Here we are better than Google, as our whole team is based in India and, being real users ourselves, we can empathise with our product much better.”
About the future, Subodh says, “We have seen strong demand for our tech and now we want to scale from one million API hits per day to 100 million API hits by the end of next year.”