About

Kevin Arthur does user experience research and design. This blog is a personal project, and the opinions here are strictly his own.

Usability Books
  • Cost-Justifying Usability, Second Edition: An Update for the Internet Age (Interactive Technologies)
    Morgan Kaufmann
  • Designing for the Digital Age: How to Create Human-Centered Products and Services
    by Kim Goodwin
  • Designing Gestural Interfaces
    by Dan Saffer
  • Designing Interactions
    by Bill Moggridge
  • The Design of Design: Essays from a Computer Scientist
    by Frederick P. Brooks
  • The Design of Everyday Things
    by Donald A. Norman
  • The Design of Future Things
    by Donald A. Norman
  • Designing the iPhone User Experience: A User-Centered Approach to Sketching and Prototyping iPhone Apps
    by Suzanne Ginsburg
  • Designing the Mobile User Experience
    by Barbara Ballard
  • Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Rules
    by Jeff Johnson
  • Emotional Design: Why We Love (or Hate) Everyday Things
    by Donald A. Norman
  • Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests
    by Jeffrey Rubin, Dana Chisnell
  • The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, Second Edition (Human Factors and Ergonomics)
    CRC Press
  • The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
    by Alan Cooper
  • Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics (Interactive Technologies)
    by Thomas Tullis, William Albert
  • Moderating Usability Tests: Principles and Practices for Interacting (Interactive Technologies)
    by Joseph S. Dumas, Beth A. Loring
  • Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems
    by Steve Krug
  • Sketching User Experiences: Getting the Design Right and the Right Design (Interactive Technologies)
    by Bill Buxton
  • Tapworthy: Designing Great iPhone Apps
    by Josh Clark
  • Text Entry Systems: Mobility, Accessibility, Universality (Morgan Kaufmann Series in Interactive Technologies)
    by I. Scott MacKenzie, Kumiko Tanaka-Ishii
  • The Trouble with Computers: Usefulness, Usability, and Productivity
    by Thomas K. Landauer
  • Usability Engineering
    by Jakob Nielsen
  • The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design (Interactive Technologies)
    by Deborah J. Mayhew
  • User-Centered Design Stories: Real-World UCD Case Studies (Interactive Technologies)
    by Carol Righi, Janice James
  • Usability Testing Essentials: Ready, Set...Test!
    by Carol M. Barnum
Monday, June 29, 2009

Article about iPhone's new accessibility features

A great article about the little-advertised but very impressive accessibility features Apple has added to the new iPhone: My first experience using an accessible touch screen device (Marco's Accessibility Blog).

Marco writes mostly about the new VoiceOver features. There are also a zoom feature and a white-on-black color mode.

Kudos to Apple for putting serious effort into accessibility and (again) being way ahead of the pack.

Monday, June 22, 2009

MacBook clickable trackpad: What do you think?

If you've used one of the new MacBooks with the clickable trackpad (instead of a separate button), I'm interested in what you think of it. I'd appreciate it if you could answer the following totally unscientific poll. If you want to explain your opinion or share any other thoughts, please leave a comment or email me directly. Thanks!

Friday, June 5, 2009

The Hand by Frank Wilson

David Birnbaum of Immersion has posted a good review of Frank Wilson's 1999 book The Hand: How Its Use Shapes the Brain, Language, and Human Culture, a book that's going on my to-read list... Excerpt:

The Hand by Frank Wilson is a rare treat. It runs the gamut from anthropology (both the cultural and evolutionary varieties), to psychology, to biography. Wilson interviews an auto mechanic, a puppeteer, a surgeon, a physical therapist, a rock climber, a magician, and others—all with the goal of understanding the extent to which the human hand defines humanness.
Wilson is a neurologist who works with musicians who have been afflicted with debilitating chronic hand pain. As he writes about his many interviews, a few themes emerge that are especially relevant to my interests.

Link: The Hand (tactilicio.us)

Friday, June 5, 2009

Mobile Gesture Design at Nokia

Article and video at the Nokia Conversations blog: Mobile Gesture Design at Nokia - developing a new dialect of interaction.

Friday, June 5, 2009

Nice Skin from Guifx for DVR Remote App

Wednesday, June 3, 2009

Crocodile touchscreen keyboard with triangular buttons

From The Register:

A British inventor has submitted a patent application for a wacky touchscreen keyboard design which, he claims, could spell the end for accidental key presses.

Baker told Register Hardware today that each triangular key has significantly more dead space around it than you’d find on a standard Qwerty layout. Consequently, users are more likely to press the correct key each time they tap.

Link: Triangular buttons key to touchscreen typing success -- inventor.

[Image: the Crocodile keyboard concept on an iPhone]

Hmm. The hexagonal grid design is useful, I think (see previous post), but I'm not sure that the extra dead space is always going to be so helpful, especially if you were to shrink that keyboard down to regular iPhone size so that it covers only half the screen (or, worse, to its size in portrait orientation).

There are (at least) two important aspects of soft keyboard buttons. One is the displayed button shape, and it may be true that drawing the keys smaller makes people more accurate. The other is the shape of the active area, the region where a touch gets accepted as that particular key. The two areas are typically not the same, and the iPhone actually resizes the active areas dynamically so that more likely letters are easier to press (e.g. if you type 't', 'h', then the active area for 'e' will be expanded in expectation that you'll type that next).
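
To make this concrete, here's a little Python sketch of how predictive hit-target resizing could work. The key layout, the next-letter probability stub, and the expansion factor are all made up for illustration -- this is not Apple's implementation.

    # Hypothetical sketch of predictive hit-target resizing for a soft keyboard.
    # Key rectangles and the next-letter probability model are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Key:
        char: str
        x: float   # left edge of the displayed key, in pixels
        y: float   # top edge
        w: float   # displayed width
        h: float   # displayed height

    def next_letter_probability(context: str, char: str) -> float:
        """Stub language model: after 'th', 'e' is very likely."""
        if context.endswith("th") and char == "e":
            return 0.6
        return 0.05

    def active_rect(key: Key, context: str, max_grow: float = 0.5):
        """Grow the invisible active area of likely keys (by up to 50%),
        leaving the displayed key rectangle unchanged."""
        p = next_letter_probability(context, key.char)
        grow = 1.0 + max_grow * p
        dw = key.w * (grow - 1.0) / 2.0
        dh = key.h * (grow - 1.0) / 2.0
        return (key.x - dw, key.y - dh, key.w + 2 * dw, key.h + 2 * dh)

    def hit_test(keys, context, tx, ty):
        """Return the key whose (possibly expanded) active area contains the touch.
        First match wins here; a real keyboard would resolve overlaps, e.g. by probability."""
        for key in keys:
            x, y, w, h = active_rect(key, context)
            if x <= tx <= x + w and y <= ty <= y + h:
                return key.char
        return None

In a real keyboard you'd also have to resolve overlaps between neighboring expanded areas, for example by picking the most probable key rather than the first match.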

It gets even more complicated when you consider that there isn't just a single xy location associated with a keypress. There is a touch-down position, a lift-off position, and a stream of positions in-between. The key event is triggered when lift-off happens (typically -- because this works best for capacitive touch). But you don't want to just take that single lift-off position. It needs to be filtered somewhat to account for the fact that the trailing end of position data may be a little skewed when people lift their fingers. That filtering and other tricks mean that effectively there is not even a simple fixed "active area" for the button -- it's really dynamic and determined by the whole interaction.
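
Here's a rough sketch of the kind of filtering I mean, assuming the touch stream arrives as timestamped (x, y) samples. The 30 ms window and the simple averaging are illustrative guesses, not any particular vendor's algorithm.

    # Hypothetical lift-off filtering: instead of trusting the very last sample
    # (often skewed as the finger peels off), average the positions reported
    # during the last few milliseconds before lift-off.

    def filtered_liftoff(samples, window_ms: float = 30.0):
        """samples: list of (t_ms, x, y) tuples for one touch, in time order.
        Returns the (x, y) to use for key selection."""
        if not samples:
            return None
        t_end = samples[-1][0]
        recent = [(x, y) for (t, x, y) in samples if t_end - t <= window_ms]
        # Drop the final sample itself if earlier ones are available,
        # since it is the most likely to be skewed.
        if len(recent) > 1:
            recent = recent[:-1]
        xs = [x for x, _ in recent]
        ys = [y for _, y in recent]
        return (sum(xs) / len(xs), sum(ys) / len(ys))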

All of which is to say that designing good touchscreen keyboards is a heck of a lot more complicated than creating the shape of the keys. (I'm sure this inventor knows all this -- I'm not trying to pick on him, just on the impression that this news story gives. And I haven't read his patent application.)

Monday, June 1, 2009

Limitations of touch, and touch input vs. output

There's a good post and video by Steven Hoober over at the Little Springs Design blog about how current touch interfaces are very impoverished compared to our tactile experience in the real world. Link: Humans are tool users.

I left a comment over there but I don't think it's up yet. I agree with Steven's points for the most part, but I'm not quite as pessimistic about touch input without haptics -- I think it works pretty well if it's designed well and there is good visual feedback.

Monday, June 1, 2009

Evaluating TouchPad Gesture Usability (part 2)

This is the second part of a rough draft article describing our experiences at Synaptics doing usability testing on touchpad gestures. Part 1 is here. This part describes a typical usability test session in detail.

I'm posting this in the hope of getting some idea of how interesting this topic is to other practitioners, so I welcome any comments.

Session Plan

The following four steps are performed for each gesture and on each device tested. In most cases we've found it best to study one gesture completely before proceeding to the next.

  1. Introduction and practice

  2. Familiarization task

  3. Accuracy task

  4. Satisfaction questionnaire

The following sections describe these phases in more detail.

Variations of this might work better in some cases, depending on timing and the number of gestures and devices -- e.g. if there are few gestures being tested it might be more efficient to administer one questionnaire at the end rather than one for each gesture.

Introduction and practice

Introduce the user to the gesture using documentation or by demonstration/explanation, depending on what is available and the context for use. Have the user practice the task on their own until they believe they understand how to perform it.

Familiarization tasks

The purpose of the familiarization tasks is to give the participant additional guided practice in the context of a realistic scenario. This ensures that each person has had a base level of practice with the task before rating it, and has exposure to the gestures in a typical situation.

Participants should be instructed to think aloud and describe any problems or issues they notice. During this activity, the moderator should note any problems seen and any comments the participant made.

All of the gestures we're concerned with here can be used in a photo gallery application, so that's the platform we chose for our tasks. Specifically, we used the Windows Live Photo Gallery and set up several photo albums.

Sample familiarization task: Pinch

To familiarize people with the Pinch Zoom gesture, we asked participants to perform the following tasks on four large photos. The idea was to simulate a real-world zooming task on maps and similar large images. All images measured 3000 or more pixels in the x and y dimensions, and took at least five Pinch Zoom gestures to zoom all the way in on or out of.

For these tasks the moderator speaks the instructions. Each instruction has two parts: first it gives the general area of the target, then it names the target. Giving it in two steps prevents people from having to do a lot of searching, which is not what we want to study here.


  1. “Going Under” news graphic (on New Orleans flooding): Zoom into the small map on the left side and find the city named Hopedale. Zoom all the way in on Hopedale and then zoom all the way back out of the image.

  2. Stanford campus map: Zoom in on the South Residences (near the top of the map) and find the building called The Knoll. Zoom all the way in on The Knoll and then zoom all the way back out of the image.

  3. New York State Theater seating chart: Zoom in on the Orchestra Right section and find row seat 140. Zoom all the way in on this seat and then zoom all the way back out of this image.

  4. World map: Zoom all the way in on Rome, Italy and then zoom all the way back out of the image.

To reposition the map (if needed), participants were shown how to click-and-drag the map. They could also use the scrollbars if they preferred.

Sample familiarization task: Swipe/flick and rotate

To practice the Pivot Rotate and the Three-Finger Flick gestures, we devised a compound task that asks participants to re-orient and add captions to a sequence of images within Windows Photo Gallery. It's similar to the way one might sort through vacation photos after a trip. Each picture was a flashcard with an image and a title, as shown below.

[Screenshot: a flashcard image open in Windows Photo Gallery, with the caption field visible]

The images were pre-configured with random orientations (in the screenshot above, the picture needs to be rotated counter-clockwise by 90 degrees). The participant was asked to do the following:


  1. Rotate the image to be upright using the Pivot Rotate gesture.

  2. Type the image’s title into the caption field and press the Enter key.

  3. Three-Finger Flick right to go to the next image.

Entering text was included as part of the task so that participants would need to move their hands back and forth between the TouchPad and the keyboard, rather than resting their hands in the same place throughout the task. Each participant performed this task for eight to ten images.

Familiarization for other gestures

The other gestures we have tested have included swipe up or down to enter or exit slideshow mode, and a "three finger press" gesture to launch an application. Since these are simple actions we felt it was sufficient just to practice the task a number of times instead of devising a more complicated scenario.

Scrolling gestures are a special case that we haven't mentioned here. For scrolling we use a more complex document scrolling experiment.

Accuracy Tasks

To obtain accuracy measures, the moderator asked the participant to perform 10 repetitions of each gesture in each direction (e.g. Swipe left, Swipe right, Pinch in, Pinch out, etc.). For each attempt, the moderator recorded the system response as "correct," "no response," or "incorrect." In the "incorrect" case, meaning the system responded with a different gesture than intended, the moderator also noted what unintended gesture happened (e.g. a rotate happened when the user made a pinch gesture).

The accuracy tasks were performed in Windows Live Photo Gallery. To allow for multiple repeated zooms, we used a very large photo. To make note taking easier, we found it was best to ask the participant to do only five or fewer gesture attempts at a time.

The resulting measures were averaged across participants and presented as percentages. In a stacked bar chart you can show percent correct, incorrect, and no response. Additional charts can be used to break out results by gesture direction and to show details of incorrect responses, which can be useful information for developers working to optimize the gestures.
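
As a concrete example of the bookkeeping, here's a small sketch of how the per-attempt outcomes could be tallied into those percentages. The data structure and example numbers are assumptions for illustration -- this isn't the tooling we actually used.

    # Hypothetical tally of moderator-recorded outcomes for one gesture on one device.
    # Each attempt is recorded as "correct", "no response", or "incorrect".
    from collections import Counter

    def outcome_percentages(trials):
        """trials: list of outcome strings, one per attempt, across all participants.
        Returns {outcome: percent} suitable for a stacked bar chart."""
        if not trials:
            return {}
        counts = Counter(trials)
        total = sum(counts.values())
        return {outcome: 100.0 * counts.get(outcome, 0) / total
                for outcome in ("correct", "incorrect", "no response")}

    # Example: 10 attempts each from two participants for Pinch on one device.
    pinch_trials = ["correct"] * 16 + ["incorrect"] * 2 + ["no response"] * 2
    print(outcome_percentages(pinch_trials))
    # {'correct': 80.0, 'incorrect': 10.0, 'no response': 10.0}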

Additional measures/procedures


  • Measures of task performance. You could time the flick-rotate photo tagging task on each device for comparison. In our experience so far it has been more profitable to use the accuracy measures and user observations than to directly measure task performance.

  • Videotaping. We have found case-specific videotaping to be useful when a particular person is having a great deal of difficulty with a gesture in a formative study; otherwise videotaping did not seem as beneficial. We have also on occasion used logging tools to capture individual gesture attempts for developers.

Measuring user satisfaction

With some exceptions, noted below, we have used the following rating questions for all gesture and device tests. The questions are answered on a 7-point Likert scale from Strongly Disagree to Strongly Agree, and the set of questions applies to one gesture on one device. Our questionnaires also include comment fields.

  1. This gesture was easy to perform.

  2. This gesture was fast.

  3. This gesture was accurate.

  4. This gesture was tiring. (note reverse scaling)

  5. This gesture felt comfortable.

  6. The help information for this gesture was accurate.

  7. I would use this gesture if it were available.

Question 2 can be ambiguous for some gestures, such as pinch, so we have used two alternative questions instead:


  1. The amount of motion required to activate this gesture was: (7-point scale from Too little -> Too much).

  2. The action produced by this gesture was: (7-point scale from Too slow -> Too fast).
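
For what it's worth, here's a minimal sketch of how a completed questionnaire could be scored, with question 4 reverse-coded so that higher always means better. The question keys and the 1-7 coding are assumptions for illustration, not a description of our actual analysis.

    # Hypothetical scoring of one participant's 7-point Likert responses
    # for a single gesture on a single device (1 = Strongly Disagree, 7 = Strongly Agree).

    REVERSE_SCALED = {"tiring"}   # question 4: a higher raw score means worse

    def score_responses(responses):
        """responses: dict of question key -> raw 1-7 rating.
        Returns per-question scores (reverse-coded where needed) and their mean."""
        scored = {}
        for question, rating in responses.items():
            scored[question] = 8 - rating if question in REVERSE_SCALED else rating
        mean = sum(scored.values()) / len(scored)
        return scored, mean

    example = {"easy": 6, "fast": 5, "accurate": 6, "tiring": 2,
               "comfortable": 6, "help_accurate": 4, "would_use": 7}
    scored, mean = score_responses(example)
    print(scored, round(mean, 2))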

For device comparisons we have organized the questions by gesture then by device. I.e. the user answers the questions for gesture A for all devices, then gesture B for all devices, etc. This can be too large a questionnaire, so if three or more devices are being tested we have instead had users answer the questions only once for each device, applying to all the gestures. We then ask users to comment on any gesture-specific differences that they find. An alternative is to reduce the number of gesture questions. We have used this approach as well, asking users for a single rating for a device per gesture.

Discussion/lessons learned

We have found that this combination of practice, familiarization, and accuracy tasks yields reliable measures, and we have used the method to track progress at two stages of development and to compare performance between different devices.

The sorts of problems we have identified using this test are: speed/distance differences for triggering gestures, speed of zoom, difficulty initiating gestures, difficulty learning gestures, and gesture misrecognition problems.

What this method leaves out:


  • For consistency, we relied primarily on Windows Photo Gallery (and also Apple's Preview application photo gallery), but have also tested inside Adobe Reader and, for scrolling tests, inside the Firefox web browser. It's important to consider application-specific differences in implementation and usability, but for our purposes we have left this to separate software QA testing instead of usability testing.

  • We have so far only tested a few familiar gestures (pinch, swipe, rotate) and haven't tested any more abstract or user-defined gestures. New issues may arise when testing those types of gestures.


Friday, May 29, 2009

ThinkPad W700ds with dual touch and related thoughts

Darren Rowse at the site Digital Photography School posted a review of Lenovo's ThinkPad W700ds notebook. You may not recognize the model number but you've probably seen a photo of it -- it's the beast with two screens and a built-in Wacom digitizing tablet.


I haven't tried this machine but I've been curious about what users think of it so I enjoyed reading Darren's review. It prompted a few thoughts about touchpad size and the palm-rest area:

"the touchpad really is small"

Yesterday's touchpads are small. The one on the W700ds looks like a standard pre-2008 (or so) touchpad that measures a little under 60 mm wide by 40 mm high. MacBooks now have trackpads that are about 100 mm x 75 mm, and even Windows machines are getting up there, like Lenovo's Ideapad Y650, which has a 104 mm x 64 mm touchpad. (Shown below with home keys and touchpad highlighted -- ignore the orange lines please. The images are from Lenovo's site.)

[Image: Lenovo Ideapad Y650 keyboard, with the home keys and touchpad highlighted]


Bigger is better for any kind of multi-finger gesture, and people seem to like the extra space for pointing as well. But a big touchpad has its disadvantages. The biggest problem is that you're much more likely to touch it accidentally while typing, especially if the touchpad is not centered to the keyboard. Unfortunately, the Y650's touchpad is centered to the palmrest instead. That may be more aesthetically pleasing, but it means your right hand very often brushes the touchpad; when it does, the cursor moves, and scrolling gets activated if you brush the scroll zone on the right side. (Touchpad drivers generally have some algorithms built in to filter out accidental touches, but they're not perfect.)
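
As a rough illustration of that kind of filtering, here's a sketch of one simple heuristic: ignore contacts that begin while (or just after) the user is typing, and reject very large contacts. The timeout and size threshold are invented for the example -- this isn't any vendor's actual algorithm.

    # Hypothetical accidental-touch filter: suppress touchpad input during typing
    # and for a short window afterwards, and reject very large contacts (palms).

    import time

    TYPING_SUPPRESS_S = 0.3   # ignore new touches within 300 ms of a keystroke (assumed value)
    PALM_SIZE_MM2 = 200.0     # contacts larger than this are treated as palms (assumed value)

    class AccidentalTouchFilter:
        def __init__(self):
            self.last_keystroke = 0.0

        def on_keystroke(self):
            self.last_keystroke = time.monotonic()

        def accept_touch(self, contact_area_mm2: float) -> bool:
            """Return True if a new touchpad contact should move the cursor or scroll."""
            if time.monotonic() - self.last_keystroke < TYPING_SUPPRESS_S:
                return False              # probably a brush while typing
            if contact_area_mm2 > PALM_SIZE_MM2:
                return False              # probably a resting palm
            return True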

Back to the W700ds, which has a Wacom tablet right where you might rest your hand. It isn't sensitive to finger input so there's no risk of accidental input, but users may still not like it. Darren writes:


"at times it felt ‘wrong’ to have my right wrist leaning against it as I typed as it is placed directly in front of the keyboard area."

That observation makes me wonder whether we shouldn't just leave the palmrests for the palms. Nevertheless, I'm very interested in what you could do with huge touchpads covering the palmrests. (And lots of other people have thought about this previously too.) Here's a mockup:

[Mockup: the Ideapad Y650 with a touchpad spanning the palmrest]

To my thinking there is a benefit in dividing the touchpad into two regions, one for each hand. The dividing line could either be real or imaginary -- there could just be two touchpads side-by-side or there could be one wide touchpad with a virtual invisible dividing line.

Two-handed input has been the subject of a lot of HCI research that argues that we'd have better user interfaces if we did a better job of balancing the interactions across both hands (see for example MacKenzie and Guiard, 2001).

Much of that research builds on Yves Guiard's kinematic chain model that says (approximately) that people often use their two hands in concert like two links in a chain. The dominant hand is like the bottom link and does finer manipulations, while the non-dominant hand is like the top link and frames what the other hand does. A common illustration of this is handwriting: people tend to use both hands when writing on paper -- one hand to write and the other to continually reposition the paper.

There are some possible dual-touchpad interaction techniques that are analogous to this. For example, you might have a mode where the left touchpad allows scrolling and panning while the right allows pointing (for right-handed users). Bill Buxton and others have shown that even though users are still not likely to scroll and point in parallel, the interface is more efficient because of a cognitive benefit in splitting the tasks across the hands (see Leganchuk et al., 1998).
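
Here's a minimal sketch of how that mode could be routed on one wide touchpad with a virtual dividing line, for a right-handed user. The event shape and the 50/50 split are assumptions for illustration.

    # Hypothetical routing for a dual-region palmrest touchpad (right-handed user):
    # touches left of a virtual dividing line scroll/pan, touches to the right point.

    PAD_WIDTH_MM = 200.0
    DIVIDER_X_MM = PAD_WIDTH_MM / 2.0   # assumed: split the pad down the middle

    def route_touch_move(x_mm: float, dx: float, dy: float):
        """Dispatch a touch movement to scrolling or pointing based on which
        half of the pad it occurred in."""
        if x_mm < DIVIDER_X_MM:
            scroll_or_pan(dx, dy)       # non-dominant hand: frame the workspace
        else:
            move_cursor(dx, dy)         # dominant hand: fine pointing

    def scroll_or_pan(dx, dy):
        print(f"scroll/pan by ({dx}, {dy})")

    def move_cursor(dx, dy):
        print(f"move cursor by ({dx}, {dy})")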

Another asymmetric technique that you could implement on dual touchpads is the Toolglass technique (see Bier et al., 1993) where the left hand positions a toolbox and the right hand selects tools from it.

Some gestures also might work better with two hands instead of one (see Moscovich 2008 for an experiment on this). Scaling an image is likely to be done more precisely with two index fingers than with the thumb and finger on one hand.

I'm hoping to set up a dual-touchpad prototype and test some of these ideas. I welcome any comments... Would you buy a laptop with two touchpads?

Thursday, May 28, 2009

Gabriel White on Gesture Languages

Gabriel White of Punchcut has a good article about gestures over at the Adobe Inspire XD blog: Gestures: Shaking Up Mobile Interaction.

Gabriel and others from Punchcut are guest blogging at the Inspire blog all week -- lots of great stuff to check out.
