Neuroscientist-designer couple brings the power of vision to devices through machine learning
Teaching our phones to see & understand
Mad Street Den’s goal is to bring computer vision to every device, from high-end displays to low-end smartphones, using its cloud-based machine-learning platform. Imagine driving by a store and seeing a dress or a toy you want to buy. You whip your phone out, take a picture, and your phone instantly finds a match at your favorite online store. Imagine your child playing a game and making silly expressions that your phone actually responds to. And if you’re a business, imagine the power of understanding what your consumers see and feel. This is the dream that drove Anand and Ashwini to launch Mad Street Den.
Computer vision is really beginning to get exciting with the launch of Amazon’s Firefly, Facebook’s Oculus Rift and Google’s Project Tango. But many of these are still limited because performance has largely been tied to specialized hardware. Mad Street Den is one of the few companies disrupting this industry with a software-focused solution. And while there are a handful of companies out there making apps with expression detection, facial gestures and object detection, a quick survey of the apps on the market shows how far they are from accurately detecting any of these, or from finding meaningful applications. MSD’s proprietary stack incorporates Deep Learning, but Anand (CTO) is quick to point out that Deep Learning is not a solution unto itself; it takes much more to actually put things together ‘intelligently’. Their proprietary framework promises to deliver real-time video analysis in a plug-n-play manner, letting customers easily incorporate various pattern recognition tasks into a wide range of applications. The company is also working on a hybrid device-cloud solution to cater to different markets.
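The article doesn’t publish the platform’s actual API, but the general shape of such a plug-n-play cloud call is easy to picture. Below is a minimal sketch of the client side of that pattern: a device captures a video frame, ships it to a cloud endpoint, and gets structured detections back. The endpoint URL, field names, and response schema are illustrative assumptions, not Mad Street Den’s real interface.

```python
# Minimal sketch of a device-to-cloud vision round trip.
# The endpoint, credential, and field names below are hypothetical;
# they illustrate the general pattern, not MSD's actual API.
import requests

API_URL = "https://api.example.com/v1/analyze"  # hypothetical endpoint
API_KEY = "your-api-key"                        # hypothetical credential

def analyze_frame(jpeg_bytes: bytes) -> dict:
    """Send one video frame to the cloud and return its analysis as JSON."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5,  # real-time use cases need a tight deadline
    )
    response.raise_for_status()
    # e.g. {"expressions": {...}, "landmarks": [...], "head_gesture": "..."}
    return response.json()

if __name__ == "__main__":
    with open("frame.jpg", "rb") as f:
        print(analyze_frame(f.read()))
```

A hybrid device-cloud version of this, as the company describes, would presumably run the cheap detection steps on the phone and fall back to a round trip like the one above only for the heavier recognition tasks.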
Launching the MAD Stack
MAD Stack, the company’s cloud platform, goes live this week, and its SDK is available to developers and companies. The stack currently detects facial landmarks, expressions and emotions, and facial and head gestures. You can try these out on the developer portal at developers.madstreetden.com. They also have a set of videos showing the technology in use, depicting fun, engaging applications that give developers ideas for use cases. Make sure to check out the adorable emote demo. Their technology page also outlines a wide range of use cases for computer vision.
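To make those use cases concrete, here is a small sketch of how an app might consume that kind of detection output, mapping an expression or head gesture to an in-app event. The result layout (`expressions`, `head_gesture`, `landmarks`) is an assumption for illustration; consult developers.madstreetden.com for the SDK’s documented format.

```python
# Sketch: turning hypothetical detection output into an app reaction.
# The response layout below is an assumption for illustration only.

SAMPLE_RESULT = {
    "expressions": {"smile": 0.91, "frown": 0.03, "surprise": 0.42},
    "head_gesture": "none",
    "landmarks": [(312, 180), (388, 178)],  # e.g. eye centers, in pixels
}

def react(result: dict) -> str:
    """Map the dominant expression and head gesture to a game event."""
    expression, score = max(result["expressions"].items(),
                            key=lambda kv: kv[1])
    if result.get("head_gesture") == "nod":
        return "confirm_selection"
    if expression == "smile" and score > 0.8:
        return "unlock_happy_animation"
    return "idle"

print(react(SAMPLE_RESULT))  # -> "unlock_happy_animation"
```

This is the kind of thin glue code the plug-n-play claim implies: the vision heavy lifting stays in the stack, and the app only has to decide what each detection means for its own experience.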
Their Plans
As the head of product development (and CEO), Ashwini points out: “We understand that Computer Vision is a comparatively new field. More than anything else, our goal is to bring about scale and adoption of this otherwise niche technology by making it fun, usable and natural for people around the world to interact with. To make this a reality, we’re reaching out and working with partners across e-commerce, gaming, and mobile analytics around the world (and specifically in India). This is a really exciting space to be in and we’re hoping to lead some of the technology innovation in this space.”
The Inside Story
Ashwini says, “It all started with a pair of shoes. I was shopping in a neighborhood in San Jose and saw a pair of grey shoes that I wanted to buy. An urgent call from home had me rushing back before I could buy them. Thankfully, I took a picture of the pair, making it my mission to hunt for them online. After hours of searching on a ton of apps and sites I regularly shopped at, I gave up, frustrated and angry. I remember asking Anand, ‘What’s the point of you trying to build intelligence if my phone can’t find me a damn shoe?’ The search was particularly frustrating because I had no way of describing those shoes beyond grey and size 7. There was no way to just show the shoes to the app and have it automatically find them.
“‘Our devices are not just stupid, they’re blind! Teach them to see, not just look!’ I remember raging.” And so the idea of Mad Street Den was born.
Ashwini and Anand are extremely driven to make a change. Having achieved a lot in their respective fields, they believe it’s time to transform how we do things, and what better way than to use their skills to bring about an ‘emotive’ change.
“And that is how change happens. One gesture. One person. One moment at a time.” – Libba Bray
Check out MadStreetDen: www.madstreetden.com
Featured image credits: conferenceacademia.com