The Smallest Virtual Keyboard That Can Fit Into Any Wearable.
The story of my patent (at IBM) on the smallest virtual keyboard, one that makes gadgets like smart eyewear and smartwatches independent in data input, without the user needing to carry a secondary input device.
With the evolution of semiconductor technology, digital devices have become smaller. Keyboard layouts have improved over time to cater to new-age devices with ever-shrinking displays. But none of the existing keyboard solutions or concepts is useful for smart devices like wrist wears, fitness wearables, and watches. This is mostly due to the lack of sufficient real estate on these slimmer, real-estate-constrained devices.
The major challenge in designing UI interaction for screen-real-estate-constrained devices is that touch-enabled UIs are operated with our fingertips and finger pads, which requires a minimum size for the on-screen UI elements and buttons that are tapped or touched to trigger actions. For touch-enabled devices, a standard minimum touch area is recommended to ensure the UI remains usable.
For example, Apple recommends a minimum target size of 44 pixels wide by 44 pixels tall on a 3.5-inch display at 164 ppi. In the Windows Phone UI Design and Interaction Guide (PDF), Microsoft goes further and suggests: a recommended touch target size of 9 mm/34 px; a minimum touch target size of 7 mm/26 px; a minimum spacing between elements of 2 mm/8 px; and a visual size for a UI control of 60-100% of the touch target size. Nokia's developer resources suggest that touchable interface elements should be no smaller than the smallest average finger pad, that is, no smaller than 1 cm (0.4") in diameter, or a 1 cm × 1 cm square.
So, on average, the minimum size for a usable UI control is around 44 points (a pixel-free unit), roughly a 7 mm × 7 mm area. When a keyboard layout is designed, this minimum touchable area matters the most, and it prevents us from using a keyboard-based input system on small or slim wearables like smartwatches, wrist wears, or any other device with limited real estate.
Over the past few years, many phone makers have come up with multiple approaches to deal with small UI areas when designing keyboards for smaller devices. One example is the T9 keyboard.
When the iPhone introduced a QWERTY-type virtual keyboard, it used multiple keyboard views to accommodate all the required keys. This approach was followed by Android and almost all touch-enabled phones.
But the evolution of devices has resulted in even smaller form factors with the most minimal of touch-enabled displays and panels: smartwatches, wrist bands, medical equipment, and many other small, slim devices.
This gave rise to a new problem: now even T9 or similar keyboards do not have enough screen area to fit these devices. The touch-enabled displays of these devices come in different shapes and sizes: some are slim strips, some are oval or round. For example, the main display of the Samsung Gear Fit (slim) is 1.84 inches at 128 × 432 px; similarly, the iWatch is around 2.5 inches.
When I initially explored the existing solutions, I came across Minuum, which needs at least a 1.63-inch display (almost the same display area as the Samsung Gear). This is due to its implementation, in which sub-panels appear to offer character choices based on the previously selected character. So it was not useful on slim wears, or on any touch surface below 1.63 inches.
So, practically, no significant keyboard was in use on real-estate-constrained wearable devices. Most of them instead relied on alternative input mechanisms, such as voice, or on a secondary, bigger device like a smartphone.
Most smart devices use voice as the major input system due to the lack of real estate for a keyboard. However, voice-based input systems have their own problems: (i) in noisy environments (e.g. outdoors, or in a crowd), it is really difficult to enter text via voice commands in an error-free way; (ii) due to variations in the speaker's accent and tone, a voice-based input system may not be totally effective, leaving scope for error. Notably, new-age smart devices are mostly worn and used outdoors, and so are frequently operated in noisy, distracting environments. Also, the limited processing power of small devices makes it a rule of thumb to process voice in the cloud rather than on the device itself, for which the device needs a network connection.
Using voice as an input system has its own problems:
1. Voice input systems are not 100% error free and accurate. As different people's voices differ in pitch, tone, and cultural influence, there is a significant chance that such voice recognition systems will fail at times.
2. A full-fledged voice recognition system is resource heavy: it consumes a lot of CPU and requires heavy processing. So, practically, all of these wearable devices nowadays depend on cloud-based voice recognition systems. This means that to use voice input you need to be connected to the internet; otherwise you cannot input data at all. Staying connected to the cloud also brings additional issues, like high battery consumption and data charges. Power in particular is an issue with smartwatches and similar wearables, so this becomes critical for the user. Companies like Apple and Google are still battling the challenges of reducing power consumption and improving the battery life of wearables.
3. The third issue with voice is that it is error prone in distracting and noisy environments. As wearable devices are expected to be used in motion and outdoors, this becomes a critical issue for effective data input.
All of this reminds us of the good old keyboard, where data entry is a lot easier and less error prone.
Some wearables take an alternative approach to text input: using a mobile phone as the main input system. In such scenarios the user treats the mobile phone as the primary device and enters text using the phone's keyboard. Many popular smartwatches use this approach, as it is more convenient for the user than voice input. The Samsung Gear, Microsoft Band, Apple iWatch, and Moto 360 are examples of devices packaged as secondary devices to Samsung, Windows, and other phones.
The problem with this approach is that the smart wearable never plays the role of a standalone device; it always acts as an auxiliary one. This strictly limits the device's functionality and usage. The user is also forced to carry additional hardware, a mobile phone, just to control the device and enter text.
In such cases the smaller wearables mostly act as read-only devices. For example, the user can read a "Shopping list" that was compiled on a phone; on the wearable he can check and uncheck items from the list, but he cannot alter the list by adding new items to it. He needs the phone, or other additional hardware, to make changes to the list. This kind of usage severely limits the functionality of the wearable.
So one dimension of goodness to aspire to, looking toward a future of human-machine interaction in which wearables and super-small display-enabled or display-less smart devices will play an important role, is to fix these major limitations by providing an easy-to-use, implementable text input method for such systems.
Other dimensions of goodness should also cover the following:
1. We need an effective keyboard that works on severely real-estate-constrained devices, especially smartwatches, wrist wears, and the like, for effective data entry.
2. The solution must be: (i) compatible with different real-estate-constrained display sizes; (ii) able to work without relying on voice or the cloud; (iii) able to work standalone, without any additional hardware or secondary device like a phone; (iv) flexible enough to accommodate different languages; (v) scalable to meet different needs and work in various environments; and (vi) able to work on touch-enabled displays or panels.
So here it is, the answer to this problem: the BlueSlide keyboard, a patent assigned to IBM, describing a keyboard that works effectively on real-estate-constrained devices. It is also the smallest such keyboard, as it can be driven from as little as a square millimeter of touch surface.
The core idea behind the "BlueSlide" keyboard is this: when one or more fingers are swiped across the touch-enabled area, the system records the direction of the swipe and the number of fingers involved. Based on a predefined key mapping of finger count and swipe direction, the system resolves the input to a character and records it.
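To make the mapping concrete, here is a minimal Python sketch of the principle, not the patented implementation: the `KEY_MAP` table, the `decode_swipe` function, and the character assignments are my own illustrative inventions; a real product would define the layout per language and use case.

```python
from typing import Optional

# Hypothetical key map for illustration only; the actual layout is left
# to the implementation in the patented concept.
KEY_MAP = {
    (1, "left"): "a", (1, "right"): "b", (1, "up"): "c", (1, "down"): "d",
    (2, "left"): "e", (2, "right"): "f", (2, "up"): "g", (2, "down"): "h",
    (3, "left"): "i", (3, "right"): "j", (3, "up"): "k", (3, "down"): "l",
}

def decode_swipe(finger_count: int, direction: str) -> Optional[str]:
    """Resolve a recorded swipe (finger count + direction) to a character,
    or None if the combination is unmapped."""
    return KEY_MAP.get((finger_count, direction))

print(decode_swipe(2, "up"))  # g
```

Because each character is keyed by a distinct (finger count, direction) pair, the lookup itself is trivial; the engineering effort sits in reliably detecting the pair.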
Ergonomically, a swipe gesture is a lot easier than point-and-tap, since pointing requires attention and focus during operation and adds cognitive load on the user. People with shaky fingers, the elderly, and anyone in a distracting environment or in motion (where most wearables are expected to be used) will have difficulty tapping, especially on small display areas. Swiping demands less focus and accuracy than a point-and-tap element, and is easier to perform even in motion.
When I initially conceived the idea, I implemented it to test whether the concept was really effective. For the prototype, I took a printout of a wearable on paper with the display area cut out, and placed the paper over a Samsung Note 2 phone display, so that the interaction area was limited to the cut-out: the actual area we would get on a similar wearable watch. I then ran an app implementing the concept, typed in characters, and switched keyboard views through touch interactions such as double taps. Just to note: the video shows the basic prototype, intended to demonstrate that the basic principles of the invention can be put to practical use. In the final concept, and in the patented invention, the layout and UI elements may change based on the needs of the specific implementation.
When I tested for accuracy and speed, the results on a comparable touch surface turned out well. There was no accuracy issue: since every character is mapped to a distinct finger count and direction, operation is largely error free.
The "BlueSlide" keyboard concept uses combinations of multiple fingers sliding across a touch-enabled display or panel to trigger key input based on predefined key mappings. The concept supports combinations from a single finger up to any number of fingers sliding over the panel. The minimum touch-enabled surface can be the width of one fingertip, or even less, in physical dimension.
So how does a thin touch panel count the number of fingers sliding across it within a duration?
Each finger slides across the touch panel and the system records the count. There are short intervals between consecutive fingers sliding across the thin panel within that duration.
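The grouping logic could be sketched as follows; this is a sketch under my own assumptions (an illustrative 400 ms grouping window, and slide events represented as timestamp/direction tuples), not the patent's specified algorithm.

```python
GROUP_WINDOW_MS = 400  # assumed interval; a real device would tune this

def count_fingers(events):
    """Group consecutive single-finger slides into multi-finger gestures.

    events: list of (timestamp_ms, direction) tuples, one per slide,
    in chronological order. Slides in the same direction whose start
    times fall within GROUP_WINDOW_MS of the previous slide are counted
    as one multi-finger gesture. Returns a list of
    (finger_count, direction) gestures."""
    gestures = []
    i = 0
    while i < len(events):
        _, direction = events[i]
        count = 1
        j = i + 1
        while (j < len(events)
               and events[j][1] == direction
               and events[j][0] - events[j - 1][0] <= GROUP_WINDOW_MS):
            count += 1
            j += 1
        gestures.append((count, direction))
        i = j
    return gestures

# Three quick slides to the right, then a separate slide up:
print(count_fingers([(0, "right"), (120, "right"), (230, "right"), (900, "up")]))
# [(3, 'right'), (1, 'up')]
```

This is also why the concept can work on a panel that senses only one touch at a time: simultaneity is replaced by a short time window.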
This new keyboard concept solves challenges in several areas:
1. It solves the problem of text input on real-estate-constrained devices like wearables (watches, wrist wears) or mobile devices, with or without touch-enabled displays.
2. It has a simpler implementation that does not need to identify or track individual fingers, nor track hand orientation. Because fewer parameters must be processed to map characters, it is lightweight and effective to implement, and can run on smaller, less resource-hungry devices.
3. It can work on a touch panel that is only single-touch sensitive: it uses a sequence of consecutive touch inputs and their directions to simulate a multi-finger touch, mapping to a wider set of characters.
4. It is a completely unique text input embodiment that uses directional swipe gestures of one or more fingers, and does not rely on the conventional virtual-keyboard approach of tapping on characters.
5. The complete keyboard solution can work on the smallest display or touch area, as small as a single tap area.
6. The invention proposes a complete text input method based on swipe gestures (using any single or multiple fingers, without needing to identify each finger) and an interaction paradigm covering the full character set of any language, including special characters.
7. The embodiment suggests using multiple keyboard views to accommodate any number of characters, along with an interaction approach for switching between these views.
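The multiple-view idea in the last point can be sketched as a tiny state machine. The view names and the double-tap trigger below are illustrative assumptions of mine; the patent leaves the concrete views and switching gesture to the implementation.

```python
VIEWS = ["lowercase", "uppercase", "symbols"]  # hypothetical view names

class KeyboardState:
    """Tracks which keyboard view is active; each view would carry its
    own (finger_count, direction) -> character map."""

    def __init__(self):
        self.view_index = 0

    @property
    def view(self):
        return VIEWS[self.view_index]

    def on_double_tap(self):
        """Assumed view-switching gesture: cycle to the next view."""
        self.view_index = (self.view_index + 1) % len(VIEWS)

kb = KeyboardState()
kb.on_double_tap()
print(kb.view)  # uppercase
```

Reserving one gesture for view switching multiplies the reachable character set by the number of views, at the cost of a single extra interaction.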
Alternate implementation of BlueSlide: using super-minimal real estate on a display-less thin touch panel.
This is useful when, in a piece of hardware, we want to provide a display-less touch panel to reduce cost. A non-touch display might show the key mapping, while the user interacts with a separate touch-panel strip (which has no display and can use pressure sensitivity or any other technology to detect finger count and direction).
Even though not optimal, this implementation can be practical in super-slim devices, or in devices where a touch-enabled display is not possible due to cost or other constraints.
As before, each finger slides across the thin touch panel consecutively within a duration, and the count of consecutive slides (and the gap between them) determines the gesture. For example, three fingers sliding consecutively across the panel in the same direction is interpreted as a 3-finger swipe in that direction.
BlueSlide can be used beyond smartwatches and wrist devices. It can also be used with smart eyewear (e.g. Google Glass), where the touch panel sits on the rim of the eyewear and the display is the eyewear's projected image. This is a notable addition because, in such scenarios, the user typically does not look directly at the touch panel of the device; he instead focuses on the UI being displayed or projected to him.
The touch panel is situated outside the display area. While wearing the eyewear, the user can type text without needing to concentrate on any keyboard.
The rim of the eyewear holds the touch panel, and the user can type with one or multiple fingers as described in the invention.
Another non-optimal, special-purpose implementation of BlueSlide reduces the keyboard touch area even further, to around 7 mm × 7 mm (the touch area of a single fingertip), in order to free screen real estate for other UI elements such as additional input fields and information on a screen-constrained device. The following image shows this example, where only single-finger swipes, with an increased number of keyboard views, are used to input data. Depending on the implementation, this can be reduced further, down to one square millimeter of touch surface.
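As a rough capacity estimate for this single-finger variant: if each keyboard view maps each swipe direction to one character, the number of views required grows with the character set. The eight-direction figure below is my own illustrative assumption, not a number from the patent.

```python
DIRECTIONS = 8  # assumed: up, down, left, right, plus four diagonals

def views_needed(char_count: int, directions: int = DIRECTIONS) -> int:
    """Number of single-finger keyboard views needed to cover
    char_count characters, one character per direction per view."""
    return -(-char_count // directions)  # ceiling division

# 26 letters + 10 digits = 36 characters -> 5 views of 8 directions each
print(views_needed(36))  # 5
```

The trade-off is clear: shrinking the touch area to one fingertip costs extra view switches per character, which is why the author calls this variant non-optimal.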
Similarly, any number of fingers can be used to create alternative embodiments of the BlueSlide keyboard that work across devices of different dimensions and natures.
Read the complete Patent/invention (US 2017/0003872 A1 - Touch Encoded Keyboard) here: http://patents.justia.com/patent/20170003872
Disclaimer: Samir Dash is working as Associate Design Director at IBM based in Bengaluru, India. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.