TSDF: A simple yet comprehensive, unified data storage and exchange format standard for digital biosensor data in health applications

Digital sensors are increasingly being used to monitor how physiological processes change over time in health and disease, often using wearable devices. This generates very large amounts of digital sensor data, for which a consensus on a common storage, exchange and archival data format standard has yet to be reached.

Horizon TAP Yordan Raykov contributed to the research featured in this paper, which posed a series of format design criteria and reviewed existing storage and exchange formats in detail. Judged against these criteria, the existing formats were found lacking, and the researchers proposed Time Series Data Format (TSDF), a unified, standardized format for storing all types of physiological sensor data across diverse disease areas.
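The central idea behind formats like TSDF, keeping human-readable metadata alongside compact binary blocks of raw samples, can be sketched in a few lines of Python. This is an illustrative sketch only: the field names below are invented for the example and are not taken from the actual TSDF specification.

```python
import json
import struct

# Hypothetical illustration: a compact binary block of raw sensor samples,
# plus human-readable JSON metadata that tells a reader how to decode it.
samples = [0.12, 0.15, 0.11, 0.18]                         # e.g. four wearable readings
binary_block = struct.pack(f"<{len(samples)}f", *samples)  # little-endian float32

metadata = json.dumps({
    "device": "wrist_worn_sensor",       # invented field names, not the TSDF spec
    "start_iso8601": "2022-01-01T00:00:00Z",
    "sampling_rate_hz": 64,
    "data_type": "float32",
    "endianness": "little",
    "rows": len(samples),
})

# A reader needs only the metadata to interpret the binary block:
meta = json.loads(metadata)
decoded = struct.unpack(f"<{meta['rows']}f", binary_block)
print([round(x, 2) for x in decoded])  # [0.12, 0.15, 0.11, 0.18]
```

Separating the two concerns in this way keeps the raw data compact while leaving the description of it open and self-documenting, which is the kind of property the paper's design criteria evaluate.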

Find out more by accessing the paper here.

Ethical Risk Assessment for Social Robots

Helena Webb explained:

“Smart toys are becoming increasingly popular and commercially available. Whilst they can be fun to use and also benefit people’s well-being, they present certain safety and ethical risks too. The Ethical Risk Assessment is an important tool to identify these risks and prompt consideration of ways to address them. Our Ethical Risk Assessment exercise brought together researchers with an interest in smart toys and Responsible Robotics. This included members of the RoboTIPS study, the Responsible Technology Institute at Oxford and researchers working on the Purrble toy.”

Ethical Risk Assessment for Social Robots: Case Studies in Smart Robot Toys

£1.2m EPSRC Award for “Fixing The Future: The Right to Repair and Equal-IoT”

Consider this future scenario:

Home insurance firms now mandate installation of smart smoke alarms. One popular low-cost alarm has developed a speaker hardware fault, and its manufacturer recently stopped providing software security updates after just two years. Adam, who can financially afford to, simply buys another alarm and throws the broken one away. Ben, however, cannot afford a new alarm and faces unexpected financial consequences. The lack of security updates means his home network has been compromised and hackers intercepted his banking log-in credentials. Worse still, the faulty alarm did not warn him of a small house fire in the night, causing damage he cannot afford to repair. The insurance company is refusing to pay the claim, as, unknown to Ben, they recently removed the device from their list of accredited alarms.

October 2022 is the start of Fixing The Future: The Right to Repair and Equal-IoT, a £1.2m EPSRC project that examines how to avoid such future inequalities due to the poor long-term cybersecurity, exploitative use of data and lack of environmental sustainability that define the current IoT. Presently, when IoT devices are physically damaged, malfunction or cease to be supported, they no longer operate reliably. Devices also suffer from planned obsolescence, with inadequate planning for responsible management throughout their lifespan. This impacts those who lack the socio-economic means to repair or maintain their IoT devices, leaving them excluded from digital life, for example by broken phone screens or cameras.

This project is an ambitious endeavour that aims to develop a digitally inclusive and more sustainable Equal-IoT toolkit by working across disciplines including law and ethics, human-computer interaction, and design and technology.

Law and Ethics: To what extent do current legal and ethical frameworks act as barriers to equity in the digital economy, and how should they be improved in the future to support our vision of Equal-IoT? We will examine how cybersecurity laws on ‘security by design’ shape design, e.g. the proposed UK Product Security & Telecoms Infrastructure Bill, and how consumer protection laws can protect users from loss of service and data when IoT devices no longer work, e.g. the US FTC investigation of Revolv smart hubs, which were disconnected (‘bricked’) after a buy-out by Nest.

Design: How can grassroots community repair groups help inform next-generation human-centred design principles to support the emergence of Equal-IoT? We will use The Repair Shop 2049 as a prototyping platform to explore the legal and HCI-related insights in close partnership with The Making Rooms (TMR), a community-led fabrication lab working as a grassroots repair network in Blackburn. This will combine the expertise of local makers, citizens, civic leaders and technologists to understand how best to design and implement human-centred Equal-IoT infrastructures and explore the lived experience of socio-economic deprivation and digital exclusion in Blackburn.

Human Computer (Data) Interaction (HDI): How can HCI help operationalise repairability and enable the creation of Equal-IoT? We will create prototype future user experiences and technical design architectures that showcase best practice in how Equal-IoT can be built to be more repairable and address the inequalities posed by current IoT design. Our series of blueprints, patterns and frameworks will align citizens’ needs with technical requirements and reflect both the constraints and opportunities manufacturers face.

Image created and provided by Michael Stead

This is an exciting collaboration between the Universities of Nottingham, Edinburgh, Lancaster and Napier, along with industry partners Which?, NCC Group, the Canadian Government, BBC R&D and The Making Rooms.

For more information about this project, contact Horizon Transitional Assistant Professor Neelima Sailaja.

Principled machine learning – new paper

Yordan led this paper, co-authored with David Saad and published in the IEEE Journal of Selected Topics in Quantum Electronics.

The paper introduces the underlying concepts which give rise to some of the commonly used machine learning methods and points to their advantages, limitations and potential use in various areas of photonics.

Governing online communities

An area of research that I am very interested in relates to online communications and controversies over how they should be governed. The past 15 years have seen a massive increase in Internet-mediated communications, including the use of personal websites, blogs, discussion forums, and social media platforms. The ease with which users all over the world can post content and interact with others bolsters freedom of speech and creates many opportunities for civic participation. At the same time, the accessibility of online communications, in particular in the form of anonymous posting, also enables the spread of harmful content online such as misinformation and hate speech. Interventions to limit the negative impacts of online communications often risk limiting their benefits too, and proposed policy interventions, such as the UK government’s Online Harms White Paper[1], can be highly controversial.

In a previous research project[2] colleagues and I considered the dilemmas surrounding the responsible governance of online social spaces. How can the harms of online communications be addressed without impeding rights to freedom of speech? Legal mechanisms are in place to punish individuals for their posts in certain circumstances, but they lack capacity to deal with the frequency and volume of online communications. Furthermore, they are enacted retrospectively, after the harms have already been caused. Similarly, online platforms often rely on users reporting inappropriate content for investigation and potential removal. Once again this does not capture all of the potentially harmful content, and in the period before content is removed, it can often spread and cause considerable harm.

Given the limitations of these legal and platform mechanisms, our project team became very interested in the potential of user self-governance in online spaces. User self-governance relates to individuals moderating their own behaviours and also moderating those of others. We conducted an examination of Twitter data[3] to analyse the effects of counter speech against hateful posts. This revealed that multiple voices of disagreement can quell the spread of harmful posts expressing racism, sexism and homophobia. When multiple users express disagreement with a post, this can serve to discourage the original poster from re-posting and encourage others to reflect carefully before sharing the content. So, this user self-governance appears to be an effective real-time mechanism that upholds freedom of speech rather than undermining it.

It is likely that user actions to moderate themselves and others online will increase in significance. More and more users are choosing services which provide end-to-end (E2E) encryption, preserving the privacy of their interactions. In these contexts, new challenges and dilemmas emerge around governance. How can platforms and legal mechanisms deal effectively with misinformation and hate speech in E2E encrypted spaces whilst respecting rights to privacy?[4] It is crucial for research to investigate these questions, and for this reason I am very excited to be part of the Horizon project Everything in Moderation[5]. This focuses on the moderation of interactions in private online spaces. We explore the respective strengths and weaknesses of legal mechanisms, technical mechanisms and user self-governance for moderation. As part of this we intend to engage with online communities that choose to interact in private spaces. We will explore how they establish their own norms and practices for communication, and how they deal with troublesome or harmful content when it is posted. Through this we will map out various forms of user self-governance practices that are adopted in private online spaces. This will contribute to current debates around best practices for online governance and the future of Internet-mediated communications.

[1] https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper

[2] http://www.digitalwildfire.org

[3] https://arxiv.org/pdf/1908.11732.pdf

[4] https://unesdoc.unesco.org/ark:/48223/pf0000246527

[5] https://www.horizon.ac.uk/project/everything-in-moderation/

Written by Helena Webb

Horizon TAPs attending ACM CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference on Human-Computer Interaction (HCI). CHI, pronounced ‘kai’, annually brings together researchers and practitioners from all over the world and from diverse cultures, backgrounds, and positionalities, who share the overarching goal of making the world a better place with interactive digital technologies.

CHI 2022 is structured as a Hybrid-Onsite full conference which runs from April 30–May 5 in New Orleans, LA.

Two Horizon Transitional Assistant Professors will be presenting their work at CHI 2022.

Horia Maior: Moving from Brain-Computer Interaction to Personal Cognitive Informatics.

Consumer neurotechnology (a method or device in which electronics interface with the nervous system to monitor or modulate neural activity) has arrived even as algorithms for user state estimation are still being actively defined and developed. Consumer wearables that aim to estimate cognitive changes from wrist data or body movement are now readily available. But does this data help people? It is now a critical time to address how users could be informed by wearable neurotechnology, in a way that is relevant to their needs and serves their personal well-being. This special interest group will bring together the key HCI communities needed to investigate this topic: personal informatics, digital health and wellbeing, neuroergonomics, and neuroethics.

Horia also presented: The Impact of Motion Scaling and Haptic Guidance on Operator’s Workload and Performance in Teleoperation.

Neelima Sailaja: Where lots of people are sharing one thing, as soon as one person does something slightly different it can impact everyone: A Formative Exploration of User Challenges and Expectations around Sharing of Accounts Online.

Users often share their accounts with others; however, most accounts support only single-user interactions. Within industry, Netflix and Disney+ providing profiles within accounts is evidence that popular services are identifying and responding to user needs in this context; however, the solutions here are mostly naïve. Within academia, while sharing practices are of interest, the practicalities of dealing with them are yet to be studied. Neelima’s paper highlights this gap in research and presents preliminary findings from a series of focus groups that revealed practical challenges and future expectations around the experience of sharing, social implications and user privacy. Research in this area will continue by integrating these findings with expert interviews held with ‘makers’ who research and work on such technologies. The outcome will be a set of holistic design recommendations that form a practical guide for supporting account sharing.

Congratulations to Horia Maior

Earlier in the week, Horizon Transitional Assistant Professor Horia Maior was named by the Foundation for Science and Technology as one of their 2022 Future Leaders.

Horia explained:

“I am very excited about joining the Future Leaders 2022 cohort. The Foundation Future Leaders programme brings together a cohort of around 30 mid-career professionals over the course of a year, with approximately 10 representatives each from the research community, industry, and the civil service and wider public sector. Over a 12-month period, the group meet and discuss with senior figures from government, parliament, universities, large industry, SMEs, research charities and others (more information here https://www.foundation.org.uk/Future-Leaders/Foundation-Future-Leaders-2022).

During my research career I have acquired a good understanding of how research and innovation are used in academia, and of the challenges the sector is facing in a rapidly changing world. The Future Leaders programme is an excellent opportunity for me to make links and understand how science, research and innovation are used not only in academia but also in other sectors, including government, the wider public sector and industry; and how a diverse range of stakeholders can support and enrich policy development.

I see the opportunity provided by the Foundation Future Leaders programme as a platform to generate an ongoing, open dialogue with a group of people from diverse backgrounds and experiences, including government, industry, and academia. I will make use of meetings, drop-in sessions and conference time offered through the programme to lead focused discussions that address research challenges within academia in general, as well as in-depth discussions about some of our ongoing work in Horizon, the Cobot Maker Space and the Trustworthy Autonomous Systems Hub at the University of Nottingham.

I am very excited to join and engage in discussions led by other researchers, industry practitioners, civil servants, parliamentary members and members of the wider public sector. I will try to be active in writing and publishing blogs and social media posts about the events and ongoing discussions. Moving forwards beyond this program, I would like to build a network of trusted colleagues through the programme and maintain an active link that will facilitate knowledge exchange and impact across multiple sectors.”

Helena updates us on her latest research

As a socio-technical researcher I work on projects that combine elements of computer science and the social sciences. I focus on bringing the human experience and perspective into our understanding of technology and enjoy highlighting how these human factors can positively shape technological development. Joining Horizon as a Transitional Assistant Professor allows me space to develop my research portfolio, and I am excited to be bringing this approach into a new research collaboration called “Artificial Intelligence Decision Support for Kidney Transplantation (AID-KT)”.

AID-KT is funded by NIHR and is led by a team at the University of Oxford. The project co-leaders are Simon Knight in the Centre for Evidence in Transplantation and Tingting Zhu from the Oxford Computational Health Informatics Lab. The project seeks to improve outcomes in kidney transplantation by developing an AI decision support tool.

The kidney is the most transplanted organ, accounting for just over 65% of organ transplants. At any one time there are around 5,000 patients on the waiting list for a kidney transplant in the UK. However, donor organs are often turned down due to fears of poor outcomes for patients. Currently there is an absence of support tools to help clinicians determine, and discuss with their patients, how likely the transplant of a specific donor kidney is to succeed for them, and how this compares to not having the transplant and waiting for another kidney to become available. Being able to predict the graft survival of the kidney after transplant could greatly increase the transplant success rate, leading to better outcomes for patients and making better use of the available organ pool and healthcare resources.

The aim of this project is to address this absence by developing and testing a clinical decision support tool for kidney transplant. It will be driven by machine learning techniques and will help answer this crucial clinical question for potential kidney transplant recipients:

Will my outcome be better if I accept this transplant offer, or wait for the next offer in the future?

Much of the project focuses on the creation and validation of machine learning models that can accurately predict graft and patient survival following transplant, and patient outcomes if a transplant offer is declined. Alongside this, we are conducting work to make these models explainable and transparent. Little existing research has investigated how to present these kinds of clinical predictions to patients and clinicians in ways they find accessible. Therefore, we will be involving clinicians and patients in our research, through qualitative interviews and other methods, to assess which data are useful to them and how data should be presented to support decision making and informed consent. As part of this we recognise that patients will differ in the level of detail they want and the extent to which they prefer to lead their own decision making or defer to clinical judgement. As such, the clinical decision tool needs to be adaptable to the preferences of different individuals. It also needs to make its salient features visible and interpretable to clinician users, so that they can understand how the model uses the underlying data and explain its predictions to patients clearly. By bringing these human perspectives into the development of the tool we can optimise its usefulness and effectiveness in clinical settings. By extension we can also, hopefully, improve acceptance rates for kidney transplants as well as post-transplant outcomes.
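To give a flavour of what "predicting graft survival" means in practice, the sketch below implements a toy Kaplan-Meier estimator, a standard starting point for survival analysis. The numbers are entirely made up for illustration; the project's actual models are richer machine learning models trained on real transplant data, not this estimator.

```python
def kaplan_meier(times, events):
    """Toy Kaplan-Meier estimator.

    times:  follow-up time for each patient (e.g. years post-transplant)
    events: 1 if graft failure was observed at that time, 0 if censored
    Returns (time, survival probability) pairs at each observed failure.
    """
    at_risk = len(times)
    survival = 1.0
    curve = []
    # Process patients in time order; at tied times, failures before censorings.
    for t, failed in sorted(zip(times, events), key=lambda p: (p[0], -p[1])):
        if failed:
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1  # this patient leaves the risk set either way
    return curve

# Five hypothetical patients: graft failures at years 2, 3 and 5; two censored.
curve = kaplan_meier(times=[2, 3, 3, 5, 8], events=[1, 1, 0, 1, 0])
print([(t, round(s, 2)) for t, s in curve])  # [(2, 0.8), (3, 0.6), (5, 0.3)]
```

A decision support tool would present curves of this shape, conditioned on donor and recipient characteristics, so that a patient can compare "accept this offer" against "wait for the next one".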

Written by Helena Webb

Horizon TAPs

A key feature of Horizon is our transitional fellowship scheme – this recruits highly talented research fellows into the academic career track, providing time and space initially to allow a greater focus on developing their research portfolio and leadership skills. This mechanism also permanently embeds the practices of cross-disciplinary digital economy working into key academic units throughout the University of Nottingham.

Find out more about the research interests of our current TAPs