Stuart Reeves

Recently I have been working on a couple of ongoing investigations. One examines what UX practitioners, as distinguished from HCI researchers, do when they are performing user research. The other, with Martin Porcheron and Joel Fischer, explores how people use speech-based ‘assistants’ in the home, such as the Amazon Echo.

Since its earliest days, a core concern of HCI research has been developing ways of locating and dealing with the problems that users of interactive systems might encounter. The terminology and concepts have changed over time, from notions of usability to more recent ideas about user experience. But much of this work has overlooked a couple of important aspects: firstly, how problem identification is practically achieved (e.g., via user testing), and secondly, that there is significant daylight between academic and practitioner applications of the various evaluation methods. Recently I’ve combined these interests by examining how industry practitioners perform user testing as one piece of a broader project lifecycle. The reason I’m doing this work is that HCI has only a limited (but growing) understanding of ‘what practitioners do’, in spite of much lip service being paid to the importance of academic HCI outputs and their potential operationalisation in practice. If we don’t really know what practitioners are doing, then it is questionable whether HCI can reasonably make such claims. Another reason to focus on practitioners’ work practices is that I feel HCI itself can learn much from the various innovations and ways of working that might be taking place in that work.

Recently I have also been examining empirical data about the way that people interact with voice user interfaces, conversational assistants and so on. In particular, we (Martin Porcheron, Joel Fischer and I) have spent some time examining the organisation of talk with and around the Amazon Echo, which Martin has deployed to various homes as part of his PhD work. A paper on this work has recently been accepted to CHI 2018, but I can detail a few of the key findings here. The Amazon Echo is positioned as a voice assistant for the home that can answer questions, help with cooking, and play music (amongst other things). Martin’s data shows how users of the Echo must develop particular ways of talking ‘to’ the device and of managing the various interactional problems they encounter (e.g., recognition problems, or knowing what they can actually do with it). Users also seem to approach the device more as a resource to ‘get something done’ than as an agent one can have a ‘conversation’ with. Finally, we believe there are some potential implications for design in this work, such as moving towards ‘request and response design’ rather than potentially misguided ‘conversation design’ metaphors.

Stuart Reeves, Transitional Assistant Professor, Horizon Digital Economy Research
