Mercedes Torres Torres – Land-use Classification Using Drone Imagery

Land-use classification is an essential environmental activity that involves mapping an area according to the characteristics of the land. It is very useful because it provides information on the types of human activity being carried out in an area. The information in these maps can then be used to study the environmental impact that different planning decisions may have on a community. For example, national agencies use land-use maps showing industrial, rural and residential areas to decide where facilities, such as hospitals or energy plants, or access routes, such as roads, should be built.

Different classifications can be used, depending on the level of detail needed. Simple classifications may distinguish two classes: buildings and non-buildings. More detailed classifications can include up to nine classes: urban, industrial, formal residential, informal residential, slums, roads, vegetation, barren, and water bodies.

While simpler classifications are easier to obtain, more detailed classifications offer a wider range of information and give a richer picture of the underlying organisation of an area. Nevertheless, while extremely useful, detailed land-use maps are no easy task to obtain. The most accurate option remains employing human surveyors, who must be trained and then travel to the area to be mapped, pen and paper in hand. As a consequence, manual classification can be subjective, time-consuming, labour-intensive and expensive.

These caveats are especially challenging in fast-developing cities with under-resourced local governments. In these areas, centrally organised mapping is difficult to keep up to date due to a lack of personnel and fast-paced changes in land purpose. Consequently, these areas could benefit from an automatic way to produce land-use classification maps that is cost-effective, accurate, fast, and easy to produce and maintain.

That was the purpose of the DeepNeoDem project. This AGILE project was a collaboration between Horizon Digital Economy Research, the NeoDemographics Lab at the Business School, and the Computer Vision Lab at the School of Computer Science. We used Fully Convolutional Networks (FCNs) (Long et al., 2015) and drone imagery collected as part of the Dar Ramani Huria project to show that it is possible to obtain accurate, detailed classifications of up to nine different land-use classes.
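To give a flavour of the approach, the sketch below shows a toy fully convolutional network in PyTorch: convolutions downsample the image, a 1x1 convolution replaces the fully connected layers of a standard classifier so that every location gets a score per class, and a transposed convolution upsamples the scores back to the input resolution. This is a minimal illustration only; the project itself followed the VGG-based architecture with skip connections described by Long et al. (2015), and all layer sizes here are invented for the example.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network for pixel-level land-use prediction.

    Illustrative sketch only: the actual project used the deeper,
    skip-connected architecture of Long et al. (2015).
    """
    def __init__(self, num_classes=9):
        super().__init__()
        # Downsampling encoder: image -> coarse feature map (1/4 resolution)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 convolution replaces fully connected layers: one score per class
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)
        # Learned upsampling back to the input resolution
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=8, stride=4, padding=2)

    def forward(self, x):
        return self.upsample(self.classifier(self.encoder(x)))

# Every output pixel gets a score for each of the nine land-use classes
model = TinyFCN(num_classes=9)
tile = torch.randn(1, 3, 256, 256)   # one 256x256 RGB drone tile
pred = model(tile).argmax(dim=1)     # per-pixel class labels
print(pred.shape)                    # torch.Size([1, 256, 256])
```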

Interestingly, in some cases, such as the one shown in the figure below, FCNs were even able to beat the human annotators, picking up vegetation and roads which had not been annotated.

Figure 1. FCNs are able to learn roads (red) that are not present in the manual annotations, as well as finding patches of vegetation (green) that were not annotated.

You can find more information in “Automatic Pixel-Level Land-use Prediction Using Deep Convolutional Neural Networks” by M. Torres Torres, B. Perrat, M. Iliffe, J. Goulding and M. Valstar.

References:

[1] Long, J., Shelhamer, E. and Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3431-3440).

Mercedes Torres Torres, Transitional Assistant Professor, Horizon Digital Economy Research

Mercedes Torres Torres – Unobtrusive Behaviour Monitoring via Interactions of Daily Living

Nowadays, most of our daily lives entail constant interactions with different utilities and devices. Every time you turn on your kettle, take a shower, send a text or buy something with your card, information about that interaction is collected by a company, such as your energy, water or telephone provider, or your bank. These companies may then use that data for different purposes, for example to recalculate your tariffs or to suggest further services.

The goal of the Unobtrusive Behaviour Monitoring project is to combine all the information that we are continuously generating through daily interactions with technology and to feed it back in a constructive and organised way, so that we can see different patterns in our behaviour. This type of analysis could be particularly insightful for people who suffer from mood disorders, who could use it to monitor their condition, spot early warning signs and develop self-awareness of their triggers. Overall, the ultimate goal is to enable patients to take control of their condition. In other words, we want to allow people to benefit from the data they themselves are constantly generating and help them to use it, when they want, to monitor their mood. And, more importantly, we want to do this in a way that respects their privacy and keeps their data safe.

From a machine learning perspective, this project is interesting because it requires the development of algorithms that can work locally in order to protect people’s privacy (i.e. analysis should be done in each person’s home instead of in a centralised database), while also working with sparse or incomplete data (e.g. when a person decides to turn off one of the sources of information).
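As a rough illustration of what such local processing might look like (this is a hypothetical sketch, not the project’s algorithm), the example below uses a federated-averaging pattern: a simple linear model is updated on each person’s own device, a per-household mask zeroes out any data sources the person has switched off, and only the model weights, never the raw interaction data, are sent to a central server for averaging.

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1, mask=None):
    """One local training step of a linear model on a person's own device.

    `mask` marks which data sources are switched on; masked-off features
    contribute nothing, and the raw data never leaves the home.
    Hypothetical sketch, not the project's actual algorithm.
    """
    if mask is not None:
        features = features * mask               # zero out disabled sensors
    preds = features @ weights
    grad = features.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(local_weights):
    """The server combines model updates only; no raw data is shared."""
    return np.mean(local_weights, axis=0)

# Three households, four data sources (utilities, driving, banking, phone)
rng = np.random.default_rng(0)
global_w = np.zeros(4)
masks = [np.array([1, 1, 1, 1]),   # all sources on
         np.array([1, 0, 1, 1]),   # driving data switched off
         np.array([1, 1, 0, 0])]   # banking and phone switched off
updated = [local_update(global_w, rng.normal(size=(20, 4)),
                        rng.normal(size=20), mask=m) for m in masks]
global_w = federated_average(updated)
```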

This project is an ongoing collaboration between the Faculty of Engineering, the Institute of Mental Health, and the Faculty of Science. Phase I was funded by MindTech and its main goal was to explore the literature in the area and to engage with experts. We explored the ethical, technical, legal and social aspects of four different sources of data (utility and device usage, car usage and driving style, banking and card transactions, and mobile phone usage) and carried out different engagement activities with patient groups and other researchers in the area.

These activities, particularly our engagement with patient groups, were extremely useful. Talking with people with mood disorders who are interested in self-monitoring their condition helped us identify potential directions and issues from a usability perspective, such as the need for full customisation of sensors (i.e. the possibility of turning off particular sources of data instead of all of them at the same time), or the possibility of adding extra sensors, such as sleep or activity trackers through wearables like Fitbit.

For Phase II, we hope to carry out a pilot study in which we collect data and develop machine learning algorithms able to cope with distributed information that might be scarce or incomplete.

Mercedes Torres Torres, Transitional Assistant Professor, Horizon Digital Economy Research

James Pinchin – Strava

Strava is a service which bills itself as ‘the social network for athletes’. It allows users to store and share details of their activities, usually location records from runs or bike rides. Through sharing, users can compare their performance and even (sometimes controversially1,2,3) compete over ‘segments’. Engineers at Strava are not ignorant of the value of their dataset. Staff in their ‘labs’4 are using the data to increase the size of their user base and sustain users’ engagement with their service.

Recently they released a global heatmap5 which they claim contains data from one billion activities and three trillion recorded locations. Given the size of the dataset, the lack of named users and the provision of (optional) ‘privacy zones’ in the Strava service6, you might expect this to be an interesting piece of internet ephemera, mostly of interest to runners and cyclists planning their next route. However, users were quick to find other valuable information in the dataset.

As many global news outlets noted, not only the locations but also the layouts of sensitive military sites were unwittingly revealed by the activities of fitness-conscious personnel7,8. The suggestion from Strava9, the press and military leadership: military personnel should make use of privacy features, should not use activity-tracking apps in these places, and should not worry, because Strava will work with military and government officials to filter ‘sensitive data’.

Nobody seems to be asking whether Strava really needs to collect and store this location information. Could a service survive that provides locally calculated segment times, performance figures and comparative analyses, without users needing to reveal full location information in the first place? Is it possible to extract societal and monetary value from location time series without crossing security and privacy barriers? My answer is yes, and that is what my research is about.
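As a hypothetical illustration of the kind of local computation I have in mind (this is not Strava’s implementation, and the coordinates and thresholds below are invented), a device could match its own GPS trace against a segment’s start and end points and upload only the resulting elapsed time:

```python
from math import hypot

def segment_time(trace, seg_start, seg_end, radius=0.0003):
    """Return elapsed seconds between passing seg_start and seg_end.

    trace: list of (timestamp_s, lat, lon) points recorded on the device.
    Only this single number need ever leave the phone; the full location
    time series never does. Hypothetical sketch, not Strava's method.
    """
    def near(lat, lon, target):
        return hypot(lat - target[0], lon - target[1]) < radius

    t_start = t_end = None
    for t, lat, lon in trace:
        if t_start is None and near(lat, lon, seg_start):
            t_start = t                  # first pass through the start gate
        elif t_start is not None and near(lat, lon, seg_end):
            t_end = t                    # first subsequent pass through the end
            break
    return None if t_end is None else t_end - t_start

# A short ride heading north along a road past a segment
ride = [(0, 52.9500, -1.1800), (30, 52.9510, -1.1800),
        (60, 52.9520, -1.1800), (90, 52.9530, -1.1800)]
print(segment_time(ride, (52.9510, -1.1800), (52.9530, -1.1800)))  # 60
```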

1 http://nymag.com/daily/intelligencer/2014/09/did-a-cycling-app-contribute-to-bike-death.html

2 http://www.latimes.com/business/la-fi-tn-strava-dopers-20160415-story.html

3 http://www.mamilsports.com/cycling/strava-beef-goes-viral-aussie-cyclists-trade-insults-kom-leaderboard/

4 https://labs.strava.com/

5 https://labs.strava.com/heatmap/#7.00/-120.90000/38.36000/hot/all

6 https://support.strava.com/hc/en-us/articles/115000173384-Privacy-Zones

7 https://edition.cnn.com/2018/01/28/politics/strava-military-bases-location/index.html

8 https://www.theguardian.com/world/2018/jan/28/fitness-tracking-app-gives-away-location-of-secret-us-army-bases#

9 https://blog.strava.com/press/a-letter-to-the-strava-community/

Tracks in the Strava global heatmap created by users exercising on the decks of ships moored at Portsmouth naval base, UK.

©Strava 2017 ©Mapbox ©OpenStreetMap

James Pinchin, Transitional Assistant Professor, Horizon Digital Economy Research

Stuart Reeves

Recently I have been working on a couple of ongoing investigations. One examines what UX practitioners (as distinguished from HCI researchers) do when they are performing user research. The other, with Martin Porcheron and Joel Fischer, explores how people use speech-based ‘assistants’ in the home, such as the Amazon Echo.

Since its earliest days, a core concern of HCI research has been developing ways of locating and dealing with the problems that users of interactive systems might encounter. The terminology and concepts have changed over time, from notions of usability to more recent ideas about user experience. But much of this work has overlooked a couple of important aspects: firstly, how problem identification is practically achieved (e.g., via user testing), and secondly, that there is significant daylight between academic and practitioner applications of various evaluation methods. Recently I’ve combined these interests by examining how industry practitioners perform user testing as one piece of a broader project lifecycle. The reason I’m doing this work is that HCI has only a limited (but growing) understanding of ‘what practitioners do’, in spite of much lip service being paid to the importance of academic HCI outputs and their potential operationalisation in practice. If we don’t really know what practitioners are doing, then it is questionable whether HCI can reasonably make such claims. Another reason to focus on practitioners’ work practices is that I feel HCI itself can learn much from the innovations and ways of working taking place in that work.

Recently I have also been examining empirical data about the way that people interact with voice user interfaces, conversational assistants and so on. In particular, we (with Martin Porcheron and Joel Fischer) have spent some time examining the organisation of talk with and around the Amazon Echo, which Martin has deployed to various homes as part of his PhD work. While the paper on this work has only recently been accepted to CHI 2018, I can detail a few of the key findings here. The Amazon Echo is positioned as a voice assistant for the home that can answer questions, help with cooking, and play music (amongst other things). Martin’s data shows how users of the Echo must develop particular ways of talking ‘to’ the device and of managing the various interactional problems they encounter (e.g., recognition problems, or knowing what they can actually do with it). Users also seem to approach the device more as a resource to ‘get something done’ than as an agent one can have a ‘conversation’ with. Finally, we believe there are some potential implications for design in this work, such as moving towards ‘request and response design’ instead of potentially misguided ‘conversation design’ metaphors.

Stuart Reeves, Transitional Assistant Professor, Horizon Digital Economy Research