Horia Maior to join Max Wilson at this year’s Royal Society Summer Science Exhibition

Horizon Transitional Assistant Professor Horia Maior will be joining Dr Max Wilson, Associate Professor of Human-Computer Interaction in the School of Computer Science, University of Nottingham, at the Royal Society Summer Science Exhibition in London (2–7 July) to demonstrate different types of wearable brain-scanning devices. The ‘Brain Team’ will be gathering people’s opinions on how they feel about using consumer neurotechnology. More information about this event can be found in this University of Nottingham press release.


Horia Maior – new paper – consumer wearable technologies

The CHI’24 Workshop on the Future of Cognitive Personal Informatics

While Human-Computer Interaction (HCI) has contributed to demonstrating that physiological measures can be used to detect cognitive changes, engineering and machine learning will bring these measures into application in consumer wearable technology. For HCI, many open questions remain, such as: What happens when this becomes a cognitive form of personal informatics? What goals do we have for our daily cognitive activity? How should such a complex concept be conveyed to users to be useful in their everyday lives? How can we mitigate potential ethical concerns? This is different to designing BCI interactions; we are concerned with understanding how people will live with consumer neurotechnology. This workshop will directly address the future of Cognitive Personal Informatics (CPI) by bringing together design, BCI and physiological data, ethics, and personal informatics researchers to discuss and set the research agenda in this inevitable future.

Read more from this paper here


Stuart Reeves

Recently I have been working on a couple of ongoing investigations. One examines what UX practitioners (as distinguished from HCI researchers) do when they perform user research. The other, with Martin Porcheron and Joel Fischer, explores how people use speech-based ‘assistants’ in the home, such as the Amazon Echo.

Since its earliest days, a core concern of HCI research has been developing ways of locating and dealing with the problems that users of interactive systems might encounter. The terminology and concepts have changed over time, from notions of usability to more recent ideas about user experience. But much of this work has overlooked a couple of important aspects: firstly, how problem identification is practically achieved (e.g., via user testing), and secondly, that there is significant daylight between academic and practitioner applications of various evaluation methods. Recently I’ve combined these interests by examining how industry practitioners perform user testing as one piece of a broader project lifecycle. The reason I’m doing this work is that HCI has only a limited (but growing) understanding of ‘what practitioners do’, in spite of much lip service being paid to the importance of academic HCI outputs and their potential operationalisation in practice. If we don’t really know what practitioners are doing, then it is questionable whether HCI can reasonably make such claims. Another reason to focus on practitioners’ work practices is that I feel HCI itself can learn much from the innovations and ways of working that may be taking place in that work.

Recently I have also been examining empirical data about the way people interact with voice user interfaces, conversational assistants and so on. In particular, we (Martin Porcheron, Joel Fischer and I) have spent some time examining the organisation of talk with and around the Amazon Echo, which Martin has deployed to various homes as part of his PhD work. A paper on this work has recently been accepted to CHI 2018, but I can sketch a few of the key findings here. The Amazon Echo is positioned as a voice assistant for the home that can answer questions, help with cooking, and play music (amongst other things). Martin’s data shows how users of the Echo must develop particular ways of talking ‘to’ the device and of managing the various interactional problems they encounter (e.g., recognition failures, or knowing what the device can actually do). Users also seem to approach the device more as a resource to ‘get something done’ than as an agent one can have a ‘conversation’ with. Finally, we believe there are some potential implications for design in this work, such as moving towards ‘request and response design’ rather than potentially misguided ‘conversation design’ metaphors, as the sketch below illustrates.
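To make the ‘request and response’ framing a little more concrete, here is a minimal, purely illustrative sketch (not from the paper, and not tied to any real voice-assistant SDK; all names are hypothetical) of a handler that treats each utterance as a discrete request with a single bounded response, rather than as a turn in an ongoing conversation:

```python
# Hypothetical sketch of a request-and-response design: each recognised
# utterance is mapped to one complete response, rather than treated as
# a turn in an open-ended conversation.

from dataclasses import dataclass


@dataclass
class Request:
    utterance: str   # what the speech recogniser heard


@dataclass
class Response:
    speech: str      # what the device says back
    handled: bool    # whether the request was recognised at all


def handle(request: Request) -> Response:
    """Map one utterance to one bounded response."""
    text = request.utterance.lower()
    if "timer" in text:
        return Response(speech="Setting a timer for ten minutes.", handled=True)
    if "play" in text:
        return Response(speech="Playing music.", handled=True)
    # An explicit failure response makes recognition problems visible to
    # users, one of the interactional troubles the study observed.
    return Response(speech="Sorry, I can't do that.", handled=False)


print(handle(Request("play some music")).speech)  # -> "Playing music."
```

The point of the sketch is only the framing: the device commits to a complete, inspectable answer per request, which fits how users in the study treated it, namely as a resource for getting things done rather than as a conversational partner.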

Stuart Reeves, Transitional Assistant Professor, Horizon Digital Economy Research