Governing online communities

An area of research that I am very interested in relates to online communications and the controversies over how they should be governed. The past 15 years have seen a massive increase in Internet-mediated communications, including personal websites, blogs, discussion forums, and social media platforms. The ease with which users all over the world can post content and interact with others bolsters freedom of speech and creates many opportunities for civic participation. At the same time, the accessibility of online communications, particularly in the form of anonymous posting, also enables the spread of harmful content online such as misinformation and hate speech. Interventions to limit the negative impacts of online communications often risk limiting their benefits too, and proposed policy interventions, such as the UK government’s Online Harms White Paper[1], can be highly controversial.

In a previous research project[2], colleagues and I considered the dilemmas surrounding the responsible governance of online social spaces. How can the harms of online communications be addressed without impeding rights to freedom of speech? Legal mechanisms are in place to punish individuals for their posts in certain circumstances, but they lack the capacity to deal with the frequency and volume of online communications. Furthermore, they are enacted retrospectively, after the harms have already been caused. Similarly, online platforms often rely on users reporting inappropriate content for investigation and potential removal. Once again, this does not capture all potentially harmful content, and in the period before content is removed it can spread and cause considerable harm.

Given the limitations of these legal and platform mechanisms, our project team became very interested in the potential of user self-governance in online spaces. User self-governance involves individuals moderating their own behaviour as well as that of others. We conducted an examination of Twitter data[3] to analyse the effects of counter speech against hateful posts. This revealed that multiple voices of disagreement can quell the spread of harmful posts expressing racism, sexism and homophobia. When multiple users express disagreement with a post, this can discourage the original poster from re-posting and encourage others to reflect carefully before sharing the content. User self-governance therefore appears to be an effective real-time mechanism that upholds freedom of speech rather than undermining it.

It is likely that user actions to moderate themselves and others online will increase in significance. More and more users are choosing services that provide end-to-end (E2E) encryption, preserving the privacy of their interactions. In these contexts, new challenges and dilemmas emerge around governance. How can platforms and legal mechanisms deal effectively with misinformation and hate speech in E2E encrypted spaces whilst respecting rights to privacy?[4] It is crucial for research to investigate these questions, and for this reason I am very excited to be part of the Horizon project Everything in Moderation[5], which focuses on the moderation of interactions in private online spaces. We explore the respective strengths and weaknesses of legal mechanisms, technical mechanisms and user self-governance for moderation. As part of this, we intend to engage with online communities that choose to interact in private spaces. We will explore how they establish their own norms and practices for communication, and how they deal with troublesome or harmful content when it is posted. Through this we will map out the various forms of user self-governance adopted in private online spaces. This will contribute to current debates around best practices for online governance and the future of Internet-mediated communications.

Written by Helena Webb

Horizon TAPs attending ACM CHI 2022


The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference of Human-Computer Interaction (HCI). CHI – pronounced ‘kai’ – annually brings together researchers and practitioners from all over the world and from diverse cultures, backgrounds, and positionalities, who have as an overarching goal to make the world a better place with interactive digital technologies.

CHI 2022 is structured as a Hybrid-Onsite full conference which runs from April 30–May 5 in New Orleans, LA.

Two Horizon Transitional Assistant Professors will be presenting their work at CHI 2022.

Horia Maior: Moving from Brain-Computer Interaction to Personal Cognitive Informatics.

Consumer neurotechnology (a method or device in which electronics interface with the nervous system to monitor or modulate neural activity) has arrived even as algorithms for user state estimation are still being actively defined and developed. Consumer wearables that aim to estimate cognitive changes from wrist data or body movement are now readily available. But does this data help people? It’s now a critical time to address how users could be informed by wearable neurotechnology, in a way that’s relevant to their needs and serves their personal well-being. This special interest group will bring together the key HCI communities needed to investigate this topic: personal informatics, digital health and wellbeing, neuroergonomics, and neuroethics.

In addition, Horia also presented: The Impact of Motion Scaling and Haptic Guidance on Operator’s Workload and Performance in Teleoperation.

Neelima Sailaja: Where lots of people are sharing one thing, as soon as one person does something slightly different it can impact everyone: A Formative Exploration of User Challenges and Expectations around Sharing of Accounts Online.

Users often share their accounts with others; however, most accounts support only single-user interactions. Within industry, the profiles that Netflix and Disney+ provide within accounts are evidence that popular services are identifying and responding to user needs in this context; however, the solutions here are mostly naïve. Within academia, while sharing practices are of interest, the practicalities of dealing with them are yet to be studied. Neelima’s paper highlights this gap in research and presents preliminary findings from a series of focus groups that revealed practical challenges and future expectations around the experience of sharing, its social implications and user privacy. Research in this area will continue by integrating these findings with expert interviews held with ‘makers’ who research and work on such technologies. The outcome will be a set of holistic design recommendations that form a practical guide for support around account sharing.



Mercedes Torres Torres – Unobtrusive Behaviour Monitoring via Interactions of Daily Living

Nowadays, most of our daily living entails constant interactions with different utilities and devices. Every time you turn on your kettle, take a shower, send a text or buy something with your card, information about this interaction is collected by different companies, such as your energy, water or telephone provider, or your bank. These companies may then use that data for different purposes, for example to recalculate your tariffs or to suggest further services.

The goal of the Unobtrusive Behaviour Monitoring project is to combine all the information that we are continuously generating through daily interactions with technology and to feed it back in a constructive and organised way so we can see different patterns in our behaviour. This type of analysis could be particularly insightful for people who suffer from mood disorders, who could use it to monitor their condition, receive early warning signs and develop self-awareness of their triggers. Overall, the ultimate goal is to enable patients to take control of their condition. In other words, we want to allow people to benefit from the data they themselves are constantly generating and help them to use it, when they want, to monitor their mood. And, more importantly, we want to do this in a way that respects their privacy and keeps their data safe.

From a machine learning perspective, this project is interesting because it requires the development of algorithms that can work locally in order to protect people’s privacy (i.e. analysis should be done in each person’s home instead of in a centralised database), while also working with sparse or incomplete data (e.g. when a person decides to turn off one of the sources of information).
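To make the sparse-data requirement concrete, here is a minimal sketch of the idea, under entirely hypothetical assumptions: the function and the data-source names are illustrative only, not part of the project, and a real system would use proper models rather than a simple average. The key point it demonstrates is that computation happens on the person's own data and simply skips any source they have switched off.

```python
# Hypothetical illustration: each person's data stays local, and any
# data source they have switched off is simply absent (None).
def local_behaviour_summary(readings):
    """Summarise one day's available readings, skipping switched-off sources.

    `readings` maps a source name to a value, or to None if the person
    has turned that source off. Returns (mean, coverage), where coverage
    is the fraction of sources that were actually available.
    """
    present = [v for v in readings.values() if v is not None]
    if not present:
        return None, 0.0  # nothing available today; no summary possible
    mean = sum(present) / len(present)
    coverage = len(present) / len(readings)
    return mean, coverage

# One (made-up) day: phone and card data available, kettle data turned off.
day = {"phone_minutes": 42.0, "card_transactions": 3.0, "kettle_uses": None}
mean, coverage = local_behaviour_summary(day)
```

Because the summary is computed per household, only the result (if anything) would ever need to leave the home, and the coverage figure makes explicit how complete the underlying data was.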

This project is an ongoing collaboration between the Faculty of Engineering, the Institute of Mental Health, and the Faculty of Science. Phase I was funded by MindTech and its main goal was to explore the literature in the area and to engage with experts. We explored the ethical, technical, legal and social aspects of four different sources of data (utility and device usage, car usage and driving style, banking and card transactions, and mobile phone usage) and carried out different engagement activities with patient groups and other researchers in the area.

These activities, particularly our engagement with patient groups, were extremely useful. Talking with people with mood disorders who are interested in self-monitoring their condition helped us identify potential directions and issues from a usability perspective, such as the need for total customisation of sensors (i.e. the possibility of turning off particular sources of data, instead of all of them at the same time), or the possibility of adding additional sensors, such as sleep or activity sensors, through wearables like Fitbit.

For Phase II, we hope to carry out a pilot study in which we collect some data and develop machine learning algorithms able to cope with distributed information that might be sparse or incomplete.

Mercedes Torres Torres, Transitional Assistant Professor, Horizon Digital Economy Research