Nirma University students use Leap Motion to convert sign language into speech
Technology has captured the imagination of millions across the globe, and advancements in areas like the Internet of Things and Artificial Intelligence leave one spellbound. We recently wrote about a bunch of hardware startups, several of which were great IoT companies. There are also interesting companies in the gesture recognition domain itself, such as Fluid Motion and Nayi Disha.
What recently caught our eye is the implementation of Leap Motion technology by two students from Nirma University in Ahmedabad. Utsav Shah and Darshan Shah are final-year students who picked up the implementation as their project. “Our mentor had asked us to work on an application that would have a social impact,” says Utsav. They experimented with multiple technologies, including Kinect, before settling on the Leap Motion Controller. The idea was to create a working model of a system that would let mute people communicate by gestures. The technology converts motion into speech; here is a working demo:
The thought process behind choosing the Leap Motion controller
Compared to controllers like the Nintendo Wii Remote and Microsoft Kinect, which focus on whole-body movement, Leap Motion provides fine-grained hand tracking, which is clearly promising for sign language recognition using the hands alone. Encased in a shell of glass and aluminum, the Leap Motion hardware consists of two infrared cameras and three LEDs. “The device can minimize errors from tools, fingers and hand features by using the stereoscopy from both cameras. Leap Motion controller is built on a unique mathematical model to maximize speed and precision,” says Utsav.
How it works
The Leap Motion system detects and tracks hands, fingers, and pointable tools. The device operates at close range with high precision and a high tracking frame rate. After recognizing hands, fingers, and pointable tools, it reports discrete positions, gestures, and motion. Since the effective range of the Leap Motion Controller extends from approximately 25 to 600 millimeters above the device (1 inch to 2 feet), gestures must be performed within this range. The Leap Motion API measures physical quantities in consistent units: distance in millimeters, time in microseconds, speed in millimeters/second, and angle in radians. Using this data, the application has to recognize the gesture or sign. The Leap Motion field of view is an inverted pyramid centred on the device.
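To make this concrete, here is a minimal sketch of reading tracking data with the classic Leap Motion Python SDK (the v2-era `Leap` module, which shipped with Python 2 bindings; exact availability depends on the SDK installed). The range check is our own illustration, based on the 25–600 mm figure quoted above, and is not necessarily part of the students' code:

```python
import Leap

# Connect to the Leap Motion service; frames become available
# once the device is detected.
controller = Leap.Controller()

def read_hands(controller):
    frame = controller.frame()  # most recent frame of tracking data
    for hand in frame.hands:
        pos = hand.palm_position  # Leap.Vector, in millimeters
        # Keep only hands inside the effective range
        # (~25-600 mm above the device), as noted above.
        if 25 <= pos.y <= 600:
            print("palm at (%.0f, %.0f, %.0f) mm, %d fingers" % (
                pos.x, pos.y, pos.z, len(hand.fingers)))

read_hands(controller)
```

In a real application one would register a `Leap.Listener` subclass with the controller so that a callback fires on every new frame, rather than polling `controller.frame()` once.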
As the Leap Motion Controller tracks hands, fingers, and tools in its field of view, it provides updates as a set, or frame, of data. Each frame contains lists of the basic tracking data, such as hands, fingers, and tools, as well as recognized gestures and factors describing the overall motion in the scene. This allows an application to read hand features such as palm orientation, the length, width, and orientation of each finger, and how open the hand is, as sketched below.
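As an illustration of how those per-frame features could feed a sign classifier, the snippet below (again against the v2-era Python SDK) flattens one hand into a feature vector. The feature selection here is our assumption for demonstration, not the students' exact pipeline:

```python
import Leap

def hand_features(hand):
    """Flatten one hand's tracking data into a feature vector
    that a gesture/sign classifier could consume."""
    direction = hand.direction    # unit vector along the hand
    normal = hand.palm_normal     # unit vector out of the palm
    features = [
        direction.pitch,  # palm orientation angles, in radians
        normal.roll,
        direction.yaw,
    ]
    for finger in hand.fingers:
        features.extend([
            finger.length,       # millimeters
            finger.width,        # millimeters
            finger.direction.x,  # unit direction vector components
            finger.direction.y,
            finger.direction.z,
        ])
    return features

controller = Leap.Controller()
frame = controller.frame()
if not frame.hands.is_empty:
    print(hand_features(frame.hands[0]))
```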
Technicalities aside, Leap Motion launched last year and, despite early skepticism, seems to be gaining traction. Nine startups pitched at the AXLR8R demo day in San Francisco in January, proving that plenty of niche applications are brewing that make better use of Leap Motion’s abilities than gaming or consuming news (read more). Closer to home, interest is growing too, and the coming years will reveal how these advancements are received. As for Utsav and Darshan, they have no firm plans to start up yet, but you never know.
More Nirma University alumni stories:
Kno acquires Cruxlight and Intel acquires Kno: The acquisition story of Ahmedabad-based Nirmit Parikh
Youngsters buck the trend to startup in textile: The rising duo behind ‘Maku’