[CHI’18 / Google] Analysis and Modeling of Grid Performance on Touchscreen Mobile Devices

At Google in Mountain View, I worked on modelling and predicting user performance using machine learning tools (TensorFlow). The work also includes an eye-tracking study that provides an in-depth analysis of visual attention on mobile phones. It was presented in April at the CHI conference in Montreal, Canada.

The work essentially tries to model every little subtask involved in interacting with a grid interface. Scrollable grid interfaces are common on mobile phones, be it the photo gallery, the app homescreen, or others. Understanding user performance is key to improving the usability of such UIs. With our model, we can predict how much time it takes to scroll to and select an item in a grid. Further, we uncover many of the performance characteristics involved, from visual search (using eye-tracking) and manual scrolling (touch gestures) to tapping to select an item (Fitts’ Law).
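As a rough illustration of the tapping component only (not the full model from the paper), a Fitts’ Law prediction of tap time could be sketched as below; the coefficients a_fitts and b_fitts are placeholder values, not fitted parameters from the study.

```python
import math

def fitts_tap_time(distance_px, width_px, a_fitts=0.2, b_fitts=0.15):
    """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).

    distance_px: distance from the finger's start position to the target centre
    width_px:    size of the target (grid item)
    a_fitts, b_fitts: hypothetical regression coefficients (seconds)
    """
    index_of_difficulty = math.log2(distance_px / width_px + 1)
    return a_fitts + b_fitts * index_of_difficulty

# e.g. tapping a 150 px grid item that is 600 px away from the finger
print(f"{fitts_tap_time(600, 150):.2f} s")
```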

One of the most interesting findings is that users have two strategies: they can either start scrolling down from the top (top-down strategy), or immediately do a “hard” swipe to the bottom of the UI and then scroll up (bottom-up strategy). How do we model such contrasting strategies? We found that, in our tests, 20% of the sessions involved the bottom-up strategy, and its use was strongly affected by the name of the target. For example, when searching for “Twitter” in the alphabetically sorted app list, the app icon is naturally closer to the bottom of the list. The user knows that, and thus swipes down to shorten the search.

However, when scrolling up steadily from the bottom, users showed performance that is linear in the row at which the target is located. The same holds for the top-down strategy, except that it starts at the top. Thus, the performance of the two strategies is equal, the difference being the initial swipe-down gesture of the bottom-up strategy. In sum:

Top-down strategy = a + row * b

Bottom-up strategy = swipe_down + a + (maxRows − row) * b
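A minimal sketch of the two linear models above; the coefficients a, b, and swipe_down are placeholder values, not the fitted parameters reported in the paper.

```python
def top_down_time(row, a=1.0, b=0.5):
    """Top-down strategy: scroll steadily from the top; time grows linearly with the target row."""
    return a + row * b

def bottom_up_time(row, max_rows, swipe_down=0.8, a=1.0, b=0.5):
    """Bottom-up strategy: one hard swipe to the end, then scroll up towards the target row."""
    return swipe_down + a + (max_rows - row) * b

# e.g. a target in row 18 of a 24-row grid: the bottom-up strategy should pay off
row, max_rows = 18, 24
print(top_down_time(row), bottom_up_time(row, max_rows))
```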

There are many more performance factors involved in the user interaction, so if you are still reading at this point, I refer you to the paper:

Analysis and Modeling of Grid Performance on Touchscreen Mobile Devices
Ken Pfeuffer, Yang Li. 2018. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, Montreal, QC, Canada. doi | pdf


[PhD Thesis] Extending Touch with Eye Gaze

The thesis is finished! It’s the proof of 4 years of living the life of a lab rat, it’s a manual on how to build a gaze-interactive landscape of user interfaces, it’s a most (un-)likely vision of a gaze-based future, and it’s an Inception-like exploration of a design space of design spaces.

Continue reading “[PhD Thesis] Extending Touch with Eye Gaze”

[PuC] Look Together: Using Gaze for Assisting Co-located Collaborative Search

This work continues our multi-user research and studies user performance in a collaborative search task. It proposes four different ways to represent the gaze cursor to the users, ranging from subtle visuals (less noticeable by others) to strong visuals (more noticeable by others, but also more distracting).

Continue reading “[PuC] Look Together: Using Gaze for Assisting Co-located Collaborative Search”

[MUM’16] GazeArchers: Playing with Individual and Shared Attention in a Two-Player Look&Shoot Tabletop Game

A fun project to develop a game that can be considered the most fun game for eye-tracking! It’s a tabletop UI to which we attached two Tobii EyeX trackers. The game explores some interesting concepts: what if users look at the same target? What if they don’t?

Continue reading “[MUM’16] GazeArchers: Playing with Individual and Shared Attention in a Two-Player Look&Shoot Tabletop Game”

[UIST’16] Gaze and Touch Interaction on Tablets

Another design space exploration for gaze input, here about how gaze can support touch interaction on tablets. When holding the device, the free thumb is normally limited in reach, but it provides an opportunity for indirect touch input. We propose gaze-and-touch input, where touches are redirected to the gaze target. This provides whole-screen reachability while using only a single hand for both holding and input.

Continue reading “[UIST’16] Gaze and Touch Interaction on Tablets”

[UIST’15] Gaze-Shifting: Direct-Indirect Input with Pen and Touch Modulated by Gaze

Most input devices, be it mouse, pen, or touch, are already extensively investigated and have become a big part of everyday life. Rather than adding a new UI mode, a gesture, or additional sensors, how can we make all of them substantially more expressive?

Here we explore the idea of Gaze-Shifting: using gaze to add an indirect input mode to existing direct input devices. In essence, you can use any input device in both modes, as shown in the video examples.

Really happy that this work was nominated for a best paper award!

Continue reading “[UIST’15] Gaze-Shifting: Direct-Indirect Input with Pen and Touch Modulated by Gaze”
