This article was edited by SPRITE+ Events, Projects and Communications Assistant Katy Taylor and SPRITE+ Interim Project Manager Alan Munro, with interview responses and edits from the project team.
Today the spotlight is on the project titled 'Footprints to emissions: Exploring near-future digital vulnerabilities with creative methodologies' led by Dr David Ellis (University of Bath), with Dr Iain Reid (University of Portsmouth), Dr Philip Wu (Royal Holloway, University of London), and Dr Asad Ali. Following our 2021 Sandpit, the team explored how probe-based methods can elicit reactions and reflections about current and near-future vulnerabilities from digital emissions.
Do you want to introduce your project for the non-expert?
David: The project is about the amount of data that people create day-to-day that many of us are unaware of. We can refer to these as digital ‘emissions’. Every time someone gets on a bus, visits their GP, or goes to work, they're creating little bits of data which are essential to the modern economy. This project came out of the idea that those ‘emissions’ are useful, but they also pose a risk and/or vulnerability to people.
People who build technologies tend to make things and then deal with the social consequences afterwards. The bigger picture to this project is how we can deal with these emissions, both now and in the future, to prevent people from becoming more vulnerable in those situations.
What do you think about the perception of harm?
David: It’s not that digital or online harm is inevitable, but that there’s always a risk of harm. Think about misinformation as a digital harm: it can cause harm, but it can also lead to end-users recognising that what they’re seeing isn’t correct. Or it can have the opposite of the intended effect. The way I think of it is more in the sphere of a potential for harm. And that's true of digital emissions as well. They can be extremely valuable and useful, but they also pose a risk to people and society.
Why exactly did you settle on the analogy of digital emissions?
Philip: It is important to use a metaphor or analogy here to tell a story, especially to the public and to policy makers. We use the term ‘emission’ to denote the similarity between the carbon footprint we generate offline and the digital footprint we leave when we go online. In the mainstream literature on privacy, there's little emphasis on the collective dimension of privacy; privacy is viewed as something to do with an individual’s data and rights, so privacy protection is about protecting this autonomous self from the intrusion of others. By using the digital emission metaphor, we're trying to say that what you generate online, these digital footprints, also has an impact on others, just as carbon footprints have an impact on the environment and society.
The metaphor of ‘emissions’ is a nice image that everyone can recognise, but is there also an element of lost autonomy, in that these emissions come from you whether or not you choose to emit them?
Philip: I think on many occasions you are making conscious decisions in undertaking certain online activities to give away, or give up, your data in exchange for ‘free’ products or services. There's a well-known concept called the privacy paradox: on the one hand, when you ask people whether they are concerned about their privacy, they'll all say ‘yes’; but on the other hand, when you observe them doing things online, they act as if they don't really care about privacy.
One explanation for this paradox is that people have little choice but to give up more data than they’d ideally want to. But they also sometimes make calculated decisions without fully understanding the future consequences, which affect both themselves and others.
David: This is where the metaphor sits alongside greenhouse gases. It is impossible for us not to give out any carbon emissions; you can’t go completely off the grid. So, you have to engage, which raises the question of how you do that responsibly. In the same way you’d change your means of transport to be conscious of carbon emissions, how do you make better decisions?
But there is a question as to what you give away, and how do people understand what they're giving away? As Iain said, it often comes as a shock after the fact, with people not even aware of what’s going on.
You collaborated with Louis Netter on the comic strip. How did you come to that idea? Or, to lead on from Philip’s comment, how effective is visual metaphor for an issue such as near-future digital vulnerabilities?
Iain: Back at the Sandpit itself, we were trying to think of ways to make very complicated ideas and discussions about the future of data emissions accessible, and someone suggested using comic books. I asked around on my University’s Facebook group and someone suggested Louis Netter, who has been using comic books in his research for several years already – mainly about health behaviours in parts of Africa. He was very enthusiastic. We worked out a series of scenarios of what the future might look like, and Louis created a working script, including panels we could use with the focus groups. He’s very keen to take this a step further and develop a couple of panels based on what people have been saying in the focus groups. It’s a particularly good example of the use of imagery for potentially very complicated subjects.
David: In a broader sense, the idea came from reading other papers that used creative means to elicit information from people. In previous work, researchers have used animation or comic-related documents to disseminate complex information. There are even journals in psychology where you can integrate videos into papers as a visual abstract. This project is quite a good example because the whole method becomes a dissemination technique.
With the ‘emissions’ metaphor in mind, would you say that you're trying to highlight in your project that there is negotiability and choice, but that this choice is often masked?
David: I tend to agree, but I think it's also simpler than that. It's just about us trying to get a feel for what people are thinking about when they're generating ‘digital emissions’, and how they're managing it. What this project is doing is putting people in a real, current scenario with their own phone and seeing how that helps elicit their current understanding; getting them engaged with that technology alongside future visions to see whether people may know more about this than we assume. That then helps to inform how we make sure people are more aware of what they are or aren’t giving away, and how they can make informed choices.
Iain: One of my colleagues has been on various media platforms talking about the AirTag. It was initially developed for security: if you lost your keys, it's on your keychain, so you can find them. But now it's being used for stalking. There’s little consideration of how some of these things might be exploited by cybercriminals.
As early users of Facebook back in 2007–8, for example, we had no idea what information was being given away from that point in our lives. It's only now that we're aware of it, yet we're already looking to the future of things like smart cities and smart buildings. We're going to be shedding this data everywhere around us as we go.
I took inspiration for wanting to get involved with this project from a German novel called QualityLand (1), where all these decisions are made for you in advance. Everything is based around algorithms that suggest what you want based upon your history, which they've got from all your data. And you have no choice but to accept it, because it's correct. That's the scarier potential version of the future, so, as David says, it's good to take a step back and find out how people actually use devices.
Would there be a point in ‘dumb’ technologies? Ones that don’t record everything, etc?
David: The problem is that the recording of things is often tied up with the functionality of the things we use. It makes me think of WhatsApp. If you fully engage with WhatsApp, you can see when people read your messages and when they're online, and if you decide to hide that information, then you also lose it from the other end. WhatsApp still records all that information anyway. And there's no real way of changing that, because without it, WhatsApp can't operate as a service; it must have timestamps to know when it sent information.
The other side is that there's a safety net in having those timestamps. Take Telegram as an example, which is much more off the grid in some ways; it comes with other risks in terms of traceability (or the lack thereof), with the potential to be used for criminal activity. There's an interesting middle ground between people being in control of a socio-technical system and the socio-technical system acting as a safety barrier due to the data that's generated simultaneously. It's push-pull.
As mentioned earlier, a lot of those emissions are not only fundamental for a service to work (particularly in things like healthcare and other information systems), but they can actively help us when things go wrong. If I spent something on my card down the road and a transaction then suddenly registered in France, the bank would know that's geographically impossible, so the digital system would block the card, for example.
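To make the kind of check David describes concrete, here is a minimal sketch of a ‘geographically impossible’ rule. Everything in it – the names, the 900 km/h speed ceiling, the example coordinates – is an illustrative assumption, not any bank's actual fraud logic.

```python
# Illustrative sketch only: flags card transactions whose locations imply
# travel faster than a commercial flight. Names and thresholds are invented.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Transaction:
    lat: float        # terminal latitude, degrees
    lon: float        # terminal longitude, degrees
    timestamp: float  # seconds since the epoch

def distance_km(a: Transaction, b: Transaction) -> float:
    """Great-circle (haversine) distance between two transactions."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # mean Earth radius ~6371 km

def looks_impossible(prev: Transaction, curr: Transaction,
                     max_speed_kmh: float = 900.0) -> bool:
    """True if the cardholder would have had to travel faster than the
    assumed ceiling (roughly airliner speed) between the two payments."""
    hours = max((curr.timestamp - prev.timestamp) / 3600, 1e-6)
    return distance_km(prev, curr) / hours > max_speed_kmh

# A purchase in Bath followed ten minutes later by one in Paris:
bath = Transaction(51.38, -2.36, 0)
paris = Transaction(48.86, 2.35, 600)
print(looks_impossible(bath, paris))  # True -> block the card and query it
```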
People can potentially access ‘dumb’ technologies – you can still buy a ‘dumb’ mobile phone – but it may well become more of a nuisance.
Did you find strategies to diffuse or split ‘emissions’ into traceable and non-traceable data, i.e., autonomous privacy?
Philip: We have done our initial coding of the interview transcripts, in which we found a variety of strategies that users adopt to limit their ‘emissions’. These include deleting cookies, using VPNs, etc. When they were setting up their phones, they demonstrated awareness of the potential risks of accepting default settings, for example.
The users we interviewed are highly knowledgeable and capable. They are Bath students, and they understand how technology works. So, for this particular group of people, I think they do have an awareness and general understanding of the potential harm that could be caused by using digital technologies.
David: The choices that people get when they first set up a new device border on too many, and it's not as if those choices are often revisited. In some cases, like Facebook or Google, services will re-ask users about privacy at various points. But with most people's mobile phones, once you've made your choices, you're never given the opportunity to change them. And the choices are presented at an interesting time: when someone has got something shiny and new and doesn't want to go through setup screens. That generates a cognitive burden at the wrong time for many people.
Asad, what was your particular interest in this project and digital ‘emissions’?
Asad: My interest is in the harms angle, and the fact that people view privacy more from the personal perspective. They're not aware of the scope for collective harm, of the risks that arise from large-scale collection of data. There are many scenarios where this large collection of data is used to tailor services to your preferences to get you to engage, to keep you stuck on the system and not leave. They're fighting for your attention, and that's where we see a lot of harms arise. It could be [as a ramification of your online activity] a decision on your eligibility for finance, or on which piece of content to show you. If a million people like you have viewed some very harmful content, then that's going to be recommended to you, and sometimes we are not aware of that. So, we are trying to understand how users view this collective data collection. We need to tackle the potential for harm in every sphere, and this piece of work is crucial at that foundational level, to start looking at it from this perspective.
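To make the collective dynamic Asad describes concrete, here is a toy sketch of ‘people like you also viewed’ recommendation. The data and function names are invented for illustration, and real recommender systems are far more sophisticated, but the principle is the same: what you are shown is driven by other people's aggregated data, not by any choice you made yourself.

```python
# Toy illustration: recommend whatever is most common among users whose
# viewing history overlaps with yours. All data here is invented.
from collections import Counter

def recommend(user_history: set[str],
              all_histories: list[set[str]],
              n: int = 3) -> list[str]:
    """Rank unseen items by how often they appear in the histories of
    users who share at least one item with this user."""
    counts: Counter = Counter()
    for other in all_histories:
        if user_history & other:                 # "someone like you"
            counts.update(other - user_history)  # what they saw that you haven't
    return [item for item, _ in counts.most_common(n)]

histories = [
    {"clip_a", "clip_b", "clip_x"},
    {"clip_a", "clip_x", "clip_y"},
    {"clip_b", "clip_x"},
    {"clip_z"},  # no overlap with our user, so it is ignored
]
print(recommend({"clip_a", "clip_b"}, histories))  # ['clip_x', 'clip_y']
```

If ‘clip_x’ happens to be harmful content, it gets recommended anyway; no individual chose that outcome, which is exactly the collective dimension the project is probing.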
There have already been questions (particularly in the context of the overturning of Roe vs. Wade in the US) about the use of tracking apps, such as period tracking apps, and the cyber security around them, with the [users’] handing over of very intimate personal data. Do you think the research around this will grow within these contexts?
Iain: There is already research in this area, but I agree that it's going to grow massively now because of that.
David: It's a crossover challenge between cultures and law, where they interact with what technology may or may not reveal. For instance, if something is legal, then it’s less of an issue if data gets out. But if it suddenly becomes incriminating under the law, then the data being shared has much greater consequences. The level of harm, or risk of harm, can therefore change overnight. That's why, when we're thinking about online harm, it's not just a socio-technical system; it's everything that sits around it (industry, government, etc.), because that then interacts and changes.
This project is therefore part of something that could become much bigger. My field (psychology) often tries to narrow harm down to the individual perspective – that people use their devices too much, for example – which is fundamentally useless to policy. The extreme argument is to stop using technology, which isn’t going to happen. As soon as you take one step into that next level, it becomes infinitely more complicated; nothing's operating as an island, and that makes it really challenging to study. But that's why you need groups like ours and others, where you're getting people from different areas feeding in their different expertise.
What do you think are the strengths of speculative and probe approaches?
Iain: Coming from a mixed-methods background myself, I find that if you want to explore a new concept, speculative methods are an effective way of opening up ideas and getting rich information from people. I find them an especially useful way of starting to explore ideas. Rather than giving participants a survey, for example, which might not reflect what they actually want to discuss, these methods give them space to develop ideas, perspectives, or thoughts further.
David: For me as well, there’s something interesting about doing more qualitative research, because most of my career has involved statistical modelling. I think that's a nice thing about behavioural science. But yes, when you're faced with things like this that seem infinitely complicated, the only way to really get what you need is to talk to people, which makes sense from a psychological point of view.
Do you think, then, that speculative approaches, when used correctly, can help you find where the problem spaces are?
David: Yes. If you look at the draft of the government's current online harms bill, to be critical, that's where that hasn't happened. The bill is very unclear. The intention is completely sensible, but the unpacking hasn’t happened, nor has the potential for unpacking these issues in future been acknowledged. Government policy won’t mention how messy this area is, whereas speculative approaches and methods acknowledge that head on. People talking about these issues naturally becomes more complicated than a survey; our digital lives are quite hard to capture in a survey. So, I would say they are more effective, yes.
What are the key findings of the project as it stands?
Philip: We haven’t written up our empirical findings yet. But one thing that jumped out for me was how the students are aware of potential risks in terms of digital emissions, yet they also just go about their lives. There’s definitely a discrepancy between their feelings and attitudes and their actual behaviour – a recurring theme of this so-called paradox.
Another interesting finding, which we aim to supplement with the focus group discussions, is that there were at least two people in the interview group whose parents regulate how they use their phones in terms of protecting themselves online. This means that when people use technologies, they’re not using them in isolation. There is an interesting micro-social environmental factor, in this case at the family level, which influences how some individuals use their technology.
David: To follow on from that, the privacy paradox remains a hot topic in both communications research and management. In the last few years there has been quite a lot of quantitative work challenging this paradox, arguing that this isn’t how people behave. What is interesting about this project is that it suggests there may be something in the paradox after all when you actually talk to people. Maybe we have been asking the wrong questions.
Do you think that the multi-disciplinary approach is effective in tackling the clusters of issues in this sphere, where the problem space is difficult to identify?
David: I think it’s the only way of making progress at any speed that would be useful, given the pace of technology development. Disciplines tend to repeat each other, and it takes people from other disciplines to intervene and identify other ways of thinking. That’s why having Asad and Ofcom as partners built into the project from the start is so important. I could draw a parallel with work I've done previously in health: if NHS staff aren't working on the papers with us when we're trying to change something, it's a waste of time. Complicated problems are not likely to be solved by one discipline. That’s not to say we shouldn’t have single-discipline research either, of course!
And finally, tell us what would be your take-home message about what’s innovative in your research?
Iain: I’d say the fact that it originated at the SPRITE+ Sandpit itself. Part of what we sold it on was using comic books as the visual aid for the focus groups.
David: I agree. It’s innovative in that it came from a Sandpit, which we don’t do very often, and in that it draws on many different disciplines: psychology, geography, human-computer interaction, management. It’s innovative in acknowledging the complexity of the challenges that lie ahead, and in that we elicited reactions by getting people to use their devices and making a video recording of their actions. This, again, doesn’t happen very often – particularly with the setup of a device – let alone in conjunction with comic books. It is innovative on many levels.