You've probably seen this video already, but if not, here it is... This is great work from Microsoft Research exploring the possibilities of combined pen and touch input. It was presented last week at the CHI 2010 conference.
You can read more about this work at Ken Hinckley's blog: Alpine Inker.
Manual deskterity: an exploration of simultaneous pen + touch direct input
Authors: Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, Bill Buxton.
I didn't go to CHI but I've just started browsing through the proceedings. There are many, many touch-related papers and I'll try to highlight several of them here in the coming days.
(Image from Ken Hinckley's blog.)
Don Norman has posted an article he wrote for Interactions magazine about gestures and the "Natural User Interface." Here's an excerpt, though it's best to read the whole thing:
Gestural systems are no different from any other form of interaction. They need to follow the basic rules of interaction design, which means well-defined modes of expression, a clear conceptual model of the way they interact with the system, their consequences, and means of navigating unintended consequences. As a result, means of providing feedback, explicit hints as to possible actions, and guides for how they are to be conducted are required. Because gestures are unconstrained, they are apt to be performed in an ambiguous or uninterpretable manner, in which case constructive feedback is required to allow the person to learn the appropriate manner of performance and to understand what was wrong with their action. As with all systems, some undo mechanism will be required for situations where unintended actions or interpretations of gestures produce undesirable states. And because gesturing is a natural, automatic behavior, the system has to be tuned to avoid false responses to movements that were not intended to be system inputs. Solving this problem might accidentally lead to more misses, movements that were intended to be interpreted, but were not. Neither of these situations is common with keyboard, touchpad, pens, or mice.
What do I conclude? Gestures will form a valuable addition to our repertoire of interaction techniques. But they need time to be better developed, for us to understand how best to deploy them and for standard conventions to develop so that the same gestures mean the same things in different systems. And we need to develop the supporting infrastructure to handle guides, feedback, error correction, and the other consequences of gestures, some of which can use well-known procedures, some of which will require new approaches.
This is an interesting study from researchers at Berkeley published last year. Abstract:
Multitouch workstations support direct-touch, bimanual, and multifinger interaction. Previous studies have separately examined the benefits of these three interaction attributes over mouse-based interactions. In contrast, we present an empirical user study that considers these three interaction attributes together for a single task, such that we can quantify and compare the performances of each attribute. In our experiment users select multiple targets using either a mouse-based workstation equipped with one mouse, or a multitouch workstation using either one finger, two fingers (one from each hand), or multiple fingers. We find that the fastest multitouch condition is about twice as fast as the mouse-based workstation, independent of the number of targets. Direct-touch with one finger accounts for an average of 83% of the reduction in selection time. Bimanual interaction, using at least two fingers, one on each hand, accounts for the remaining reduction in selection time. Further, we find that for novice multitouch users there is no significant difference in selection time between using one finger on each hand and using any number of fingers for this task. Based on these observations we conclude with several design guidelines for developing multitouch user interfaces.
Here are the guidelines they give (but please read the paper for the limitations/caveats).
Design Guidelines: Based on our experiment we recommend the following set of design guidelines for developing applications for multitouch workstations. Since our studies focus on multitarget selection, all of these guidelines are aimed at applications where target selection is the primary task.
- A one finger direct-touch device delivers a large performance gain over a mouse-based device. For multitarget selection tasks even devices that detect only one point of touch contact can be effective.
- Support for detecting two fingers will further improve performance, but support for detecting more than two fingers is unnecessary to improve multitarget selection performance.
- Reserve same-hand multifinger usage for controlling multiple degrees of freedom or disambiguating gestures rather than for independent target selections.
- Uniformly scaling up interfaces originally designed for desktop workstations for use with large display direct-touch devices is a viable strategy as long as targets are at least the size of a fingertip.
Determining the Benefits of Direct-Touch, Bimanual, and Multifinger Input on a Multitouch Workstation
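As a loose illustration of the first two guidelines (this is my sketch, not code from the paper), here's how a web app might treat each finger as an independent selector using the browser's Pointer Events API. The `Target` shape and the `selection-surface` element are assumptions for the example:

```typescript
// Minimal sketch: multitarget selection with the Pointer Events API.
// Each finger arrives with its own pointerId, so one-finger and bimanual
// two-finger selection fall out of the same handler; nothing here needs
// more than two simultaneous contacts.

interface Target {
  id: string;
  x: number;      // center position in px (assumed app data)
  y: number;
  radius: number; // hit radius; per the guidelines, at least fingertip-sized
}

const targets: Target[] = [];   // populated elsewhere by the app
const selected = new Set<string>();

function hitTest(x: number, y: number): Target | undefined {
  return targets.find(t => Math.hypot(x - t.x, y - t.y) <= t.radius);
}

const surface = document.getElementById("selection-surface")!;

// pointerdown fires once per contact, so simultaneous fingers
// arrive as independent events and select targets independently.
surface.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType !== "touch") return; // mouse/pen handled elsewhere
  const hit = hitTest(e.offsetX, e.offsetY);
  if (hit) selected.add(hit.id);
});
```

Note that nothing in this code cares about a third or fourth contact, which matches the paper's finding that extra fingers don't speed up selection.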
(Via multitouchup and Google Alerts.)
Of the many blog posts written about the iPad by people who haven't yet touched one, Luke Wroblewski's posts stand out as well worth your time: iPad articles. He investigates the new UI design elements and discusses some parts of Apple's iPad design guidelines.
MOTO has done another benchmarking test comparing mobile touchscreens: Robot Touchscreen Analysis. (I wrote about the previous one here.) This time they've used a robot-controlled (simulated) finger instead of a human finger. The test involves drawing diagonal lines and looking at how linear the response is on the screen.
It's great to get data, but in my opinion these results are being overhyped. (And -- disclosure! -- I work at Synaptics, though I don't speak for Synaptics here.) This test measures one performance characteristic but misses others.
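If you're curious what such a linearity measurement might look like, here's a back-of-the-envelope sketch: fit a least-squares line to the reported touch points, then measure how far they stray from it. To be clear, this is my own illustration of one plausible metric, not MOTO's actual procedure:

```typescript
// Sketch of a simple linearity metric: fit a least-squares line to the
// reported touch points, then measure each point's perpendicular
// deviation from that line.

interface Point { x: number; y: number }

function linearityError(points: Point[]): { rms: number; max: number } {
  const n = points.length;
  const mx = points.reduce((s, p) => s + p.x, 0) / n;
  const my = points.reduce((s, p) => s + p.y, 0) / n;

  // Principal direction of the point cloud via the 2x2 covariance terms.
  let sxx = 0, sxy = 0, syy = 0;
  for (const p of points) {
    sxx += (p.x - mx) ** 2;
    sxy += (p.x - mx) * (p.y - my);
    syy += (p.y - my) ** 2;
  }
  const theta = 0.5 * Math.atan2(2 * sxy, sxx - syy); // best-fit line angle

  // Perpendicular distance of each point from the fitted line.
  const nx = -Math.sin(theta), ny = Math.cos(theta); // unit normal
  let sumSq = 0, max = 0;
  for (const p of points) {
    const d = Math.abs((p.x - mx) * nx + (p.y - my) * ny);
    sumSq += d * d;
    max = Math.max(max, d);
  }
  return { rms: Math.sqrt(sumSq / n), max };
}
```

A perfectly linear trace would give zero error; wiggles and stair-stepping near the sensor edges would show up as larger deviations.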
At the Clever Dog Lab in Vienna, researchers use a "computer-automated touchscreen testing procedure for studying learning, social, and physical cognition in the dog."
I learned about this in a radio documentary called King Solomon's Ring from CBC's Ideas show (you can find the audio here: Ideas podcast or on iTunes). The documentary was about ethology, the study of animal behavior, and about Konrad Lorenz, one of the field's founders. King Solomon's Ring is also the name of a classic book by Lorenz.
The dogs use their noses to activate the touchscreen, and apparently dogs do it well but it takes some instruction. I'm guessing it's a little like touchscreen usability studies with humans, but with more screen wipes. From the page for dog owners interested in participating:
We employ the computer-automated touch-screen testing procedure to study physical cognitive abilities (knowledge of how the physical world works). But first of all, it is necessary to find out whether dogs behave similarly when they are confronted with a similar problem in reality and on the touch-screen. From previous studies we know that dogs are able to find a hidden object even if considerable time has passed since they witnessed the hiding event. Dogs also show typical errors in their searching behaviour when a human experimenter hides the object. Thus, in this project, we want to investigate whether dogs can solve a hide-and-seek task on the touch-screen and whether they have similar error patterns on the touch-screen as in reality. I test the dogs' performance in ‘real’ and ‘virtual’ (touch-screen) conditions. In the virtual condition, I test them either with or without the presence of a hiding agent.
Dogs need a considerable amount of time to learn to work with the touch-screen. For optimal learning performance they and their owners should visit the lab at least once a week. A training occasion consists of 2-4 sessions. Each session has 29 trials. The auto feeder gives a dog a dry food pellet for every correct trial, so a dog gets a maximum of 120 pellets per training occasion (altogether a small cup of dry food). A training occasion for a dog lasts from half an hour to one hour.
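To make the procedure concrete, here's a toy sketch of one session's bookkeeping. The trial and pellet numbers come from the quote above; the stimulus and feeder functions are made-up stand-ins, not the lab's actual software:

```typescript
// Toy sketch of one training session: 29 trials, one pellet per correct
// trial (numbers from the quote). The hardware hooks are simulated so
// the sketch runs on its own.

const TRIALS_PER_SESSION = 29;

// Simulated stand-in: present two choices and wait for a nose-touch.
async function presentStimuliAndAwaitNoseTouch(): Promise<boolean> {
  return Math.random() < 0.7; // pretend the dog is right 70% of the time
}

// Simulated stand-in for triggering the auto feeder.
async function dispensePellet(): Promise<void> {
  // feeder hardware call would go here
}

async function runSession(): Promise<number> {
  let pellets = 0;
  for (let trial = 0; trial < TRIALS_PER_SESSION; trial++) {
    if (await presentStimuliAndAwaitNoseTouch()) {
      await dispensePellet();
      pellets++;
    }
  }
  return pellets; // a training occasion is 2-4 such sessions
}
```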
The lab has a web page showcasing some of their most enthusiastic study participants: Computer Freaks.
Nicolas Nova has a write-up of a recent Lift Lab seminar on gestural interfaces: Lift Seminar @ Imaginove about gestural interfaces (from which I grabbed the above slide). The talks were about free-form gestures in video games.
I think I've posted this picture before -- it's from Steve Portigal's blog, from a post on input device workarounds. This user is obviously not too happy with the touchpad and with accidental contact causing problems. I recently saw a similar "fix" on a laptop belonging to a famous HCI professor. As someone who works at a company that makes touchpads, I of course find this a bit embarrassing.
I was reminded of this again by a Lifehacker post about the latest Windows utility someone has written to help with the problem: an AutoHotkey script that disables your touchpad for a short time after you press a key.
Most touchpads in fact do something like this already (not just touchpads from Synaptics but from others as well), but it's obviously not enough. Accidental contact is a hard problem that is only getting worse as touchpads get larger. Not only is it a hard problem to solve, it's a hard problem to measure in a conventional usability test. It's something I've been involved with and hope to write more about here later.
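The basic idea is simple enough to sketch. Here's a minimal, purely illustrative version of the timer logic; the 500 ms window is a placeholder I picked, not any driver's or script's real value:

```typescript
// Sketch of the "suppress the touchpad briefly after a keystroke" idea.
// Real drivers tune the window and use much richer signals than a
// single timer, but this captures the core logic.

class TypingSuppressor {
  private lastKeyMs = -Infinity;

  constructor(private readonly suppressMs: number = 500) {}

  // Call on every key event.
  noteKeystroke(nowMs: number): void {
    this.lastKeyMs = nowMs;
  }

  // Returns true if a touchpad event at `nowMs` should be processed.
  acceptTouch(nowMs: number): boolean {
    return nowMs - this.lastKeyMs > this.suppressMs;
  }
}

// Tiny demo: a touch 100 ms after a keystroke is dropped as accidental
// contact; one arriving 600 ms later goes through.
const gate = new TypingSuppressor();
gate.noteKeystroke(1000);
console.log(gate.acceptTouch(1100)); // false (suppressed)
console.log(gate.acceptTouch(1600)); // true
```

The obvious weakness, and part of why timers alone aren't enough, is that the same window that blocks accidental palm brushes also blocks intentional pointing right after typing.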
If you're frustrated by your touchpad because of accidental contact, you're welcome to write a comment below or contact me. I'm interested in knowing more about the situations that cause the greatest trouble.