This is user-generated content for MyStory, a YourStory initiative to enable its community to contribute and have their voices heard. The views and writings here reflect those of the author and not of YourStory.

The Braille Glove

Wednesday, October 24, 2018


We are a team of two 17-year-old students currently in Grade 12 at The International School Bangalore. We first conceived of this idea about a year ago and have been working since then to build a working prototype. Both of us are deeply passionate about the idea, as we firmly believe it has the potential to change millions of lives around the world.

It is estimated that approximately 92% of visually impaired people cannot read Braille, partly because becoming proficient can take three to four years of learning on average. Braille has become a medium of communication closely associated with the visually impaired population, which has led to Braille keyboards, Braille signboards, and Braille books, among others. However, what is the use of such assistive systems if a large majority of the visually impaired population is Braille-illiterate?

Our solution is a unique wearable device that a visually impaired person can swipe over Braille text; it scans the dots and outputs their conversion as audio in the user's language of choice. The glove consists of a microprocessor, a camera, a light source, and an AUX port for audio output.

The light source sits on the underside of the wrist and shines light parallel to the user's fingers, illuminating the Braille text from an oblique angle. This casts a shadow behind each raised dot, so each dot appears on the paper as a black spot. The camera, attached to one of the glove's fingers, captures a video feed of this pattern of black dots and sends it to the processor at set intervals. The processor stitches the frames together into a single long image, analyses the image to recognise the dot pattern, and converts each set of dots to its equivalent English character or character set. Finally, a text-to-speech mechanism outputs the result to the user's earphones or speakers, connected via the AUX port.
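To illustrate the final conversion step, here is a minimal sketch of how recognised Braille cells could be mapped to English characters. This is not the authors' actual firmware; the table, function names, and the set-of-dots representation are illustrative assumptions (using the standard Braille numbering: dots 1-3 down the left column, 4-6 down the right).

```python
# Hypothetical decoding step: each detected Braille cell is reduced to the
# set of its raised-dot positions (1-6), then looked up in a table.
# Only a handful of Grade 1 letters are shown here for brevity.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",          frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",       frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",       frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g", frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",       frozenset({2, 4, 5}): "j",
    frozenset({1, 2, 3}): "l",    frozenset({1, 3, 5}): "o",
}

def decode_cells(cells):
    """Map a sequence of dot-position sets to text; '?' marks unknown cells."""
    return "".join(BRAILLE_TO_CHAR.get(frozenset(c), "?") for c in cells)

# "hello" in Grade 1 Braille: h=125, e=15, l=123, l=123, o=135
print(decode_cells([{1, 2, 5}, {1, 5}, {1, 2, 3}, {1, 2, 3}, {1, 3, 5}]))
```

The resulting string would then be passed to the text-to-speech stage. A full implementation would also need the rest of the alphabet, numbers, punctuation, and Braille contractions.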

We intend to create 10-15 similar devices while exploring ways to reduce costs by simplifying the components used and improving durability. We will then distribute these prototypes, 2-3 per school, to a few select schools for the visually impaired in Kanpur, Hyderabad, and Bangalore. Upon distribution, we will also run a small workshop for educators at these schools on how to use the device and what to do in case of a malfunction or software bug. Depending on the feedback, we will go back to the drawing board to address any weaknesses encountered, while exploring the possibility of mass-manufacturing the device.

Patent Number: 201811019727