Upcoming TAS Hub – Living with AI Podcast

Horizon TAP Horia Maior talks about his research and involvement in the Open All Senses project and the TAS Art project in a TAS Hub podcast session entitled ‘Living with AI: Telepresence the Human in the Robot’, due for release on 5th July.

Also participating in the podcast, Praminda Caleb-Solly discusses the Digital Twins for Human-Assistive Robot Teams project, and Ayse Kucukyilmaz discusses the Cognitive HumAn in the looP TEleopeRations (CHAPTER) project.

Privacy-preserving & machine-learned catchment models for national dietary surveillance via digital footprint data

Horizon Transitional Assistant Professor Georgiana Nica-Avram has co-authored a paper, ‘Privacy-preserving & machine-learned catchment models for national dietary surveillance via digital footprint data’, which was submitted to the 2022 IEEE International Conference on Big Data.

Abstract

Big data from food retail stores is increasingly being used for population dietary surveillance, epidemiological studies of diet-related diseases, and evaluations of public health interventions. However, for retail data to be useful it is necessary to understand the spatio-temporal variation of when and where food is purchased and consumed. While some customers willingly share home location data with retailers as part of loyalty programs, such data is typically too fine-grained/sensitive to be applied for research purposes. The aim of this study was to analyse differences between privacy-preserving models and actual retail catchments, and investigate if machine learning techniques could improve the accuracy of such catchment models. Based on a UK-wide sample of 4 million grocery store loyalty card holders, covering 485 million transactions over 29 months (2019-2021) and distributed across 33,000 neighbourhoods (Lower Super Output Areas, or LSOA), the study demonstrates how models trained on geolocated data perform at predicting, per store, catchment areas which contain 50, 80, and 95% of its customers’ primary location. Through comparative assessment of machine learning approaches, we find better performance from tree-based models (RF, XGB) with the best performance from an XGB model achieving an R2 of 0.72 and MAE of 1.06. To conclude, we review variable importance measures using SHAP values and discuss the relative merits of including specific features when modeling catchment areas. © 2022 IEEE.
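To make the modelling workflow described in the abstract more concrete, the sketch below shows, in Python and on purely synthetic data, how a tree-based regressor such as XGBoost can be fitted to store-level features to predict a catchment radius, scored with R2 and MAE, and inspected with SHAP values. The feature names, target definition and data are illustrative assumptions, not the study’s actual variables or pipeline.

```python
# Illustrative sketch on synthetic data: the features, target and
# hyper-parameters are assumptions, not the study's actual pipeline.
import numpy as np
import shap
import xgboost as xgb
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stores = 2000
feature_names = ["store_size", "pop_density", "competitors",
                 "urban_rural_index", "transport_access"]
X = rng.normal(size=(n_stores, len(feature_names)))

# Hypothetical target: radius of the area containing 80% of a
# store's customers' primary locations.
y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n_stores)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a gradient-boosted tree regressor (XGB), one of the tree-based
# models compared in the paper.
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

# Evaluate with the same metrics reported in the abstract.
pred = model.predict(X_test)
print("R2 :", round(r2_score(y_test, pred), 2))
print("MAE:", round(mean_absolute_error(y_test, pred), 2))

# SHAP values give per-feature contributions to each prediction; their
# mean absolute value is a common variable-importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
for name, val in sorted(zip(feature_names, np.abs(shap_values).mean(axis=0)),
                        key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")
```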

TSDF: A simple yet comprehensive, unified data storage and exchange format standard for digital biosensor data in health applications

Digital sensors are increasingly being used to monitor the change over time of physiological processes in biological health and disease, often using wearable devices. This generates very large amounts of digital sensor data, for which a consensus on a common storage, exchange and archival data format standard has yet to be reached.

Horizon TAP Yordan Raykov contributed to the research featured in this paper, which posed a series of format design criteria and reviewed existing storage and exchange formats in detail. Judged against these criteria, the existing formats were found lacking, and the researchers proposed the Time Series Data Format (TSDF), a unified, standardized format for storing all types of physiological sensor data across diverse disease areas.
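The paper itself defines TSDF in full; as a rough illustration of the general idea it builds on (compact binary sensor samples paired with human-readable metadata), the Python sketch below writes a hypothetical wearable accelerometer trace to a binary file alongside a JSON metadata file and then reads it back. The field names and file layout are placeholders for illustration, not the TSDF specification.

```python
# Illustration of the binary-plus-metadata idea behind a format like TSDF.
# Field names and layout are illustrative placeholders, not the TSDF spec.
import json
import numpy as np

# Hypothetical 3-axis accelerometer trace sampled at 100 Hz for 60 seconds.
fs_hz = 100
samples = np.random.default_rng(0).normal(size=(fs_hz * 60, 3)).astype(np.float32)
samples.tofile("accel_session01.bin")  # raw float32 samples, row-major order

# Human-readable metadata describing how to interpret the raw bytes.
metadata = {
    "subject_id": "S01",
    "device": "wrist-worn accelerometer",
    "start_iso8601": "2022-01-01T09:00:00Z",
    "sampling_rate_hz": fs_hz,
    "channels": ["acc_x", "acc_y", "acc_z"],
    "data_type": "float32",
    "rows": int(samples.shape[0]),
    "file_name": "accel_session01.bin",
}
with open("accel_session01_meta.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Reading back: the metadata alone is enough to reconstruct the time series.
with open("accel_session01_meta.json") as f:
    meta = json.load(f)
raw = np.fromfile(meta["file_name"], dtype=meta["data_type"])
trace = raw.reshape(meta["rows"], len(meta["channels"]))
assert np.allclose(trace, samples)
```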

Find out more by accessing the paper here.

Ethical Risk Assessment for Social Robots

Helena Webb explained:

“Smart toys are becoming increasingly popular and commercially available. Whilst they can be fun to use and also benefit people’s well-being, they present certain safety and ethical risks too. The Ethical Risk Assessment is an important tool to identify these risks and prompt consideration of ways to address them. Our Ethical Risk Assessment exercise brought together researchers with an interest in smart toys and Responsible Robotics. This included members of the RoboTIPS study, the Responsible Technology Institute at Oxford and researchers working on the Purrble toy.”

Ethical Risk Assessment for Social Robots: Case Studies in Smart Robot Toys

£1.2m EPSRC Award for “Fixing The Future: The Right to Repair and Equal-IoT”

Consider this future scenario:

Home insurance firms now mandate installation of smart smoke alarms. One popular low-cost alarm has developed a speaker hardware fault, and its manufacturer recently stopped providing software security updates after just two years. Adam, who can afford to, simply buys another alarm and throws the broken one away. Ben, however, cannot afford a new alarm and faces unexpected financial consequences. The lack of security updates means his home network has been compromised and hackers have intercepted his banking log-in credentials. Worse still, the faulty alarm did not warn him of a small house fire in the night, causing damage he cannot afford to repair. The insurance company is refusing to pay the claim, as, unknown to Ben, it recently removed the device from its list of accredited alarms.

October 2022 marks the start of Fixing The Future: The Right to Repair and Equal-IoT, a £1.2m EPSRC project that examines how to avoid such future inequalities arising from the poor long-term cybersecurity, exploitative use of data and lack of environmental sustainability that define the current IoT. Presently, when IoT devices are physically damaged, malfunction or cease to be supported, they no longer operate reliably. Devices are also subject to planned obsolescence, with inadequate planning for responsible management throughout their lifespan. This impacts those who lack the socio-economic means to repair or maintain their IoT devices, leaving them excluded from digital life, for example through broken phone screens or cameras.

This project is an ambitious endeavour that aims to develop a digitally inclusive and more sustainable Equal-IoT toolkit by working across disciplines spanning law and ethics, Human-Computer Interaction, design and technology.

Law and Ethics: To what extent do current legal and ethical frameworks act as barriers to equity in the digital economy, and how should they be improved in the future to support our vision of Equal-IoT? We will examine how cybersecurity laws on ‘security by design’ shape design, e.g. the proposed UK Product Security & Telecoms Infrastructure Bill, and how consumer protection laws can protect users from loss of service and data when IoT devices no longer work, e.g. the US FTC investigation of Revolv smart hubs, whose customer devices were disconnected/bricked following a buy-out by Nest.

Design: How can grassroots community repair groups help inform next-generation human-centred design principles to support the emergence of Equal-IoT? We will use The Repair Shop 2049 as a prototyping platform to explore the legal and HCI-related insights in close partnership with The Making Rooms (TMR), a community-led fabrication lab working as a grassroots repair network in Blackburn. This will combine the expertise of local makers, citizens, civic leaders and technologists to understand how best to design and implement human-centred Equal-IoT infrastructures, and to explore the lived experience of socio-economic deprivation and digital exclusion in Blackburn.

Human-Computer (Data) Interaction (HDI): How can HCI help operationalise repairability and enable the creation of Equal-IoT? We will create prototype future user experiences and technical design architectures that showcase best practice on how Equal-IoT can be built to be more repairable and to address the inequalities posed by current IoT design. Our series of blueprints, patterns and frameworks will align the needs of citizens with technical requirements and reflect both the constraints and the opportunities that manufacturers face.

Image created and provided by Michael Stead

This is an exciting collaboration between the Universities of Nottingham, Edinburgh, Lancaster and Napier, along with industry partners Which?, NCC Group, the Canadian Government, BBC R&D and The Making Rooms.

For more information about this project, contact Horizon Transitional Assistant Professor Neelima Sailaja.


Principled machine learning – new paper

Yordan Raykov led this paper, co-authored with David Saad and published in the IEEE Journal of Selected Topics in Quantum Electronics.

The paper introduces the underlying concepts which give rise to some of the commonly used machine learning methods and points to their advantages, limitations and potential use in various areas of photonics.


Governing online communities

An area of research that I am very interested in relates to online communications and the controversies over how they should be governed. The past 15 years have seen a massive increase in Internet-mediated communications, including the use of personal websites, blogs, discussion forums, and social media platforms. The ease with which users all over the world can post content and interact with others bolsters freedom of speech and creates many opportunities for civic participation. At the same time, the accessibility of online communications, in particular the possibility of anonymous posting, also enables the spread of harmful content online such as misinformation and hate speech. Interventions to limit the negative impacts of online communications often risk limiting their benefits too, and proposed policy interventions, such as the UK government’s Online Harms White Paper[1], can be highly controversial.

In a previous research project[2] colleagues and I considered the dilemmas surrounding the responsible governance of online social spaces. How can the harms of online communications be addressed without impeding rights to freedom of speech? Legal mechanisms are in place to punish individuals for their posts in certain circumstances, but they lack the capacity to deal with the frequency and volume of online communications. Furthermore, they are enacted retrospectively, after the harms have already been caused. Similarly, online platforms often rely on users reporting inappropriate content for investigation and potential removal. Once again, this does not capture all of the potentially harmful content, and in the time before content is removed it can spread and cause considerable harm.

Given the limitations of these legal and platform mechanisms, our project team became very interested in the potential of user self-governance in online spaces. User self-governance relates to individuals moderating their own behaviours and also moderating those of others. We conducted an examination of Twitter data[3] to analyse the effects of counter-speech against hateful posts. This revealed that multiple voices of disagreement can quell the spread of harmful posts expressing racism, sexism and homophobia. When multiple users express disagreement with a post, this can serve to discourage the original poster from re-posting and encourage others to reflect carefully before sharing the content. So, this user self-governance appears to be an effective real-time mechanism that upholds freedom of speech rather than undermining it.

It is likely that user actions to moderate themselves and others online will increase in significance. More and more users are choosing services which provide end-to-end (E2E) encryption, preserving the privacy of their interactions. In these contexts, new challenges and dilemmas emerge around governance. How can platforms and legal mechanisms deal effectively with misinformation and hate speech in E2E encrypted spaces whilst respecting rights to privacy?[4] It is crucial for research to investigate these questions, and for this reason I am very excited to be part of the Horizon project Everything in Moderation[5]. This focuses on the moderation of interactions in private online spaces. We explore the respective strengths and weaknesses of legal mechanisms, technical mechanisms and user self-governance for moderation. As part of this we intend to engage with online communities that choose to interact in private spaces. We will explore how they establish their own norms and practices for communication, and how they deal with troublesome or harmful content when it is posted. Through this we will map out the various forms of user self-governance practice adopted in private online spaces. This will contribute to current debates around best practices for online governance and the future of Internet-mediated communications.

[1] https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper

[2] http://www.digitalwildfire.org

[3] https://arxiv.org/pdf/1908.11732.pdf

[4] https://unesdoc.unesco.org/ark:/48223/pf0000246527

[5] https://www.horizon.ac.uk/project/everything-in-moderation/

Written by Helena Webb

Horizon TAPs attending ACM CHI 2022


The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference of Human-Computer Interaction (HCI). CHI – pronounced ‘kai’ – annually brings together researchers and practitioners from all over the world and from diverse cultures, backgrounds, and positionalities, who have as an overarching goal to make the world a better place with interactive digital technologies.

CHI 2022 is structured as a Hybrid-Onsite full conference which runs from April 30–May 5 in New Orleans, LA.

Two Horizon Transitional Assistant Professors will be presenting their work at CHI 2022.

Horia Maior: Moving from Brain-Computer Interaction to Personal Cognitive Informatics.

Consumer neurotechnology (a method or device in which electronics interface with the nervous system to monitor or modulate neural activity) has arrived even as algorithms for user state estimation are still being actively defined and developed. Consumer wearables that aim to estimate cognitive changes from wrist data or body movement are now readily available. But does this data help people? It’s now a critical time to address how users could be informed by wearable neurotechnology, in a way that’s relevant to their needs and serves their personal well-being. This special interest group will bring together the key HCI communities needed to investigate this topic: personal informatics, digital health and wellbeing, neuroergonomics, and neuroethics.

Horia also presented: The Impact of Motion Scaling and Haptic Guidance on Operator’s Workload and Performance in Teleoperation.

Neelima Sailaja: Where lots of people are sharing one thing, as soon as one person does something slightly different it can impact everyone: A Formative Exploration of User Challenges and Expectations around Sharing of Accounts Online.

Users often share their accounts with others; however, most accounts support only single-user interactions. Within industry, the profiles that Netflix and Disney+ provide within accounts are evidence that popular services are identifying and responding to user needs in this context; however, the solutions here are mostly naïve. Within academia, while sharing practices are of interest, the practicalities of dealing with them are yet to be studied. Neelima’s paper highlights this gap in research and presents preliminary findings from a series of focus groups that revealed practical challenges and future expectations around the experience of sharing, its social implications and user privacy. Research in this area will continue by integrating these findings with expert interviews held with ‘makers’ who research and work on such technologies. The outcome will be a set of holistic design recommendations that form a practical guide for support around account sharing.


Congratulations to Horia Maior

Earlier in the week, Horizon Transitional Assistant Professor Horia Maior was named by the Foundation for Science and Technology as one of their 2022 Future Leaders.

Horia explained:

“I am very excited about joining the Future Leaders 2022 cohort. The Foundation Future Leaders programme brings together a cohort of around 30 mid-career professionals over the course of a year, with approximately 10 representatives each from the research community, industry, and the civil service and wider public sector. Over a 12-month period, the group meet and discuss with senior figures from government, parliament, universities, large industry, SMEs, research charities and others (more information here https://www.foundation.org.uk/Future-Leaders/Foundation-Future-Leaders-2022).

During my research career I have acquired a good understanding of how research and innovation are used in academia, and the challenges the sector is facing in a rapidly changing world. The Future Leaders program is an excellent opportunity for me to make links and understand how science, research and innovation are used not only in academia, but also in other sectors, including the government, wider public sector, and industry; and how a diverse range of stakeholders can support and enrich policy development.

I see the opportunity provided by the Foundation of Future Leaders program as a platform to generate an ongoing, open dialog with a group of people from diverse backgrounds and experiences, including government, industry, and academia. I will make use of meetings, drop-in sessions and conference time offered through the Foundation of Future Leaders program to lead pin-point discussions that will address research challenges within academia in general, as well as in-depth discussions about some of our ongoing  work in Horizon, the Cobot Maker Space and the Trustworthy Autonomous Systems Hub at the University of Nottingham.

I am very excited to join and engage in discussions led by other researchers, industry practitioners, civil servants, parliamentary members and members of the wider public sector. I will try to be active in writing and publishing blogs and social media posts about the events and ongoing discussions. Moving forwards beyond this program, I would like to build a network of trusted colleagues through the programme and maintain an active link that will facilitate knowledge exchange and impact across multiple sectors.”