As devices become increasingly mobile, their interfaces are getting smaller and smaller. I was thinking this morning (in the shower, where I usually do my day’s thinking) about how to make more effective input devices. Way back in my undergrad days there was a lot of talk about chording keyboards and the like, and I even worked on a chording-keyboard research project. I used a Twiddler for a while, then accidentally left it behind in the lab at UW, where it was stolen. But I also knew there was a better solution: to really increase typing speed, you need something that lets you enter more than a single letter at a time.
So I started thinking about how to increase speed by offering more than binary options per keypress. Ideally, I’d like to be able to type whole words or phrases. To do that, you need a broad set of options for each “hit.”
The first question is how to do this. If you paired the gestures with a system that could recognize the form a word has to take, something like a probabilistic grammar checker, you could have a workable system with just a few thousand gestures. As in Japanese, combinations of those gestures could build more complicated words. The system I have in mind is a glove with air bladders that lead back to pneumatic switches (either binary or sensing a range of pressures), which could be correlated with hand gestures. The learning curve would be tremendous, but people could type at the speed of thought.
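To make the idea concrete, here’s a toy sketch of the decoding side: each “hit” is a chord of quantized bladder pressures, and a simple probabilistic model picks the most likely word among that chord’s candidates. Everything below is invented for illustration — the chord codes, the candidate words, and the unigram frequencies standing in for a real language model.

```python
# A chord is a tuple of per-bladder pressure levels
# (0 = no pressure, 1 = light, 2 = firm), one entry per bladder.
CHORD_TO_WORDS = {
    (2, 0, 0, 0, 0): ["the", "they"],
    (0, 2, 1, 0, 0): ["think", "thank"],
    (1, 1, 0, 0, 2): ["speed", "spend"],
}

# Invented unigram frequencies standing in for the probabilistic
# grammar/language model that would disambiguate candidates.
WORD_FREQ = {"the": 0.9, "they": 0.3, "think": 0.5, "thank": 0.2,
             "speed": 0.4, "spend": 0.3}

def decode(chord):
    """Return the chord's candidate words, most probable first."""
    candidates = CHORD_TO_WORDS.get(chord, [])
    return sorted(candidates, key=lambda w: WORD_FREQ.get(w, 0.0),
                  reverse=True)

print(decode((2, 0, 0, 0, 0)))  # → ['the', 'they']
```

A real system would rank candidates in context (the surrounding words), not just by raw frequency, but the shape is the same: a small gesture vocabulary plus a statistical model to resolve collisions.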
There are a number of non-Twiddler options out there right now, including the very interesting DataHand, but the pricing is silly. I thought this was largely a dead area of development until I saw this article about a Sony haptics lab on the FT site.
Back to my pneumatic hand… How many air bladders would you need on each hand? The fingertips are obvious, but then you can add others: at least two for the heel of the hand, for example, and perhaps several on the thumb.
The really interesting question is whether you could use these same bladders to answer the typist, i.e., to read. Reading with the fingertips is not a strange idea, even when touch serves as a computer display. But could you learn to interpret the pressures on your hand as words and sentences?
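The output channel could be sketched the same way in reverse: render a word to the hand as a timed sequence of bladder pressures. The per-letter patterns below are made up — a real code would have to be designed to be learnable, the way Braille cells are.

```python
# Invented letter-to-pressure patterns, one tuple per bladder
# (0 = off, 2 = firm), purely for illustration.
LETTER_PATTERNS = {
    "h": (2, 0, 0, 0, 0),
    "i": (0, 2, 0, 0, 0),
}

OFF = (0, 0, 0, 0, 0)

def render(word, frame_ms=150):
    """Turn a word into (pressure_tuple, duration_ms) frames, with a
    short all-off gap after each letter so letters stay distinguishable."""
    frames = []
    for ch in word:
        frames.append((LETTER_PATTERNS[ch], frame_ms))
        frames.append((OFF, frame_ms // 3))
    return frames

print(render("hi"))
```

The open question the paragraph raises is exactly the hard part this sketch skips: whether a person can learn to perceive such pressure sequences at reading speed.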
And what about more graphical information? I have been waiting patiently for many years for the virtual retinal display to become a commercial reality, but it doesn’t look like that will be happening soon. I wonder whether standard video projectors, which are getting smaller and smaller, might be a solution. They wouldn’t need to throw an image more than a foot or two (though they could be boosted for higher-power tasks). You could then compute anywhere you could find a square foot of empty wall or table.