This article was edited by SPRITE+ Events, Projects and Communications Assistant Katy Taylor and SPRITE+ Interim Project Manager Alan Munro, with interview responses and edits provided by the project team.
Today the spotlight is on the project titled 'Revealing Young Learners’ Mental Models of Online Sludge', led by Dr Karen Renaud (University of Strathclyde), with Dr Bryan Clift (University of Bath), Dr Benjamin Morrison (Northumbria University), Dr Kovila Coopamootoo (King's College London), Dr Cigdem Sengul (Brunel University), Dr Mark Springett (Middlesex University London) and Dr Jacqui Taylor (Bournemouth University). Following our 2021 Sandpit, the team explored young learners' awareness of "dark patterns", with the aim of designing an intervention that forearms them against manipulative online techniques.
Could you please offer a short, simple description of your project?
Karen: There are many bad actors online that use deceptive techniques to get people to click on things or to buy things they don't really need. As adults, we have a better sense of what people are trying to get up to, but we were concerned about the fact that children are now online from the age of four, and we don't know what they know about these deceptive techniques. That's what this project tries to ascertain: how aware are children of these techniques, so that we can formulate an intervention? Because being forewarned is being forearmed.
So, what do we need? What do the kids already know, and what do we need them to know now? Revealing mental models is not as simple as just asking people. So, we formulated this method, where we showed the children pictures of screens and then asked them to draw what would happen next if they clicked on this link. We're still busy analysing all the amazing drawings that we got from these children, and that will reveal their mental models, which will then help feed into an intervention.
Cigdem: We have complemented the drawings with verbal descriptions of what the children have drawn, and slowly converged our methodology over our workshops with the children. The workshops took place in classrooms, accompanied by their teachers, with us online as remote observers. We also introduced a semi-formal interview led by one of our researchers to dig deeper and understand the reasoning behind their drawings. So, it was a two-pronged exploration: the drawings plus a semi-formal interview.
For those who are unaware, could you please define dark patterns, and give us some everyday examples of them?
Karen: When you see a cookie request with a big green button saying 'Yes, I accept' and then a little bit of text at the bottom saying 'No, I want to see the options', that's a clear deceptive pattern.
And a lot of the time [these dark patterns] work on our basic instincts, so they'll offer you something for nothing. And if it's children, they might offer them some ‘Robux’ [an in-game currency] to play on their latest game. And so, by enticing people, they get you to click on the link where you might download malware onto your machine, for example. Mostly, dark patterns are those very persuasive techniques that get you to do something that's not in your best interests.
You use the term ‘sludge’ – what does that mean?
Karen: It comes from the book ‘Sludge’ by Cass R. Sunstein, who was one of the authors of the original ‘Nudge’ book. Nudging is nudging for your own good, right? A lot of nudging targets your subconscious brain, but if you were to see it, you would agree with it. For instance, something could nudge you to smoke or drink less, and you would welcome that.
Sludging is not for your good; it’s for my good.
Cigdem: As a side note, I’ve observed that learners who have access to an adult who can help them with their interactions in the online world seem to be more confident; there was a lot of mention of consulting parents, for example. They are prepared to engage in those discussions or raise issues with their teacher because there has been some kind of security training at school, and that question was embedded in their heads. So, it's not necessarily a nudge, but a preparation for children to be confident and careful online and to find support from the adults in their lives.
When we think about young people and technology in terms of bad actors, we often think of grooming. And this is a massive thing in online safety. Would you call ‘sludge’ an example of ‘corporate grooming’?
Karen: What you're talking about is the difference between safety and security, and that's a substantial difference. Safety is the child's personal safety. Security is to do with their information and devices. There is a difference, but few people make that distinction.
But yes, predators may draw a child in by using sludge; the techniques are not only aimed at your devices and your information. We want to make sure children have a radar to spot this stuff. Whatever the intention of the bad actor, whether it's targeting the child or their information and devices, the kids need to be aware that there are bad actors online who might try to do this.
Cigdem: I think it’s important to note that not every child owns a device. There are shared devices in households, and you would want to avoid the child being the weakest link in exposing the information on that device, which holds more than what is accessible to a child in the physical world. They wouldn't have a bank card, but they would have access to a device that holds that information, for instance.
Karen: There were one or two kids who spoke about the fact they [the hackers] would be able to see where they live and they [the hackers] could come to their house, which was a strange thing for us to see. We hadn’t mentioned anything of the sort, so the connection was quite curious.
Tell us more about the structure and method of the study.
Cigdem: In the workshops, we basically had a window into the classroom, where we connected with the teacher. Because of the ethics restrictions, we were not allowed to have even a camera on. We just had the sound recordings, which challenged us because we couldn’t even see what they were drawing.
Depending on the school's resources, we could see the effects of [those resources] immediately in terms of security training. The schools that were able to give out tablets to students also had to teach the children how to be safe on those tablets, so those students were given much more in the way of cybersecurity training. These issues came up less in schools where we observed resource constraints.
Karen: Also, in terms of ethics, we wanted to respect the privacy of the teachers and of the children, and so we also asked for full anonymity, so there was no accountability for mistakes etc. It was often the teachers who slipped up, with the children reminding the adults to maintain anonymity. And we thought that was quite impressive.
We published a paper on these issues, which investigates how you can do this type of research ethically, because naturally you must protect children.
In terms of other challenges, the analysis proved rather difficult as we brought together narratives and drawings, but we were sometimes unsure which narrative belonged to which drawing. But Kovila produced a framework to analyse the drawings, which we now call the ‘Kovila’ framework.
But it’s been a long, hard process. When ethics are done properly, you’re going to face analysis difficulties, but those can be overcome.
Were there any ethical hoops to jump through?
Karen: We had to get signed consent from each parent and the teachers. We had training sessions. We couldn’t go into the schools because of COVID, and doing research remotely adds more problems. For instance, we were going to use Zoom but could not guarantee that the data would be stored in the EU. These are the small issues people may not consider, but they’re important.
Ben: Another thing to note is that we only had a limited number of dark patterns that we could cover. In the ‘dark patterns’ literature, there are so many different kinds. I think it would be fascinating to compare our scenarios to different dark patterns.
Cigdem: Also, terminology in schools’ cyber training is not up to date – they are sometimes using archaic terms when communicating cyber threats such as ransomware. It might be the next step for an intervention, to make sure the schools have up-to-date training material.
Within this ecosystem, what do you think about parental control features?
Kovila: Parental controls provide a useful protective layer, but I feel that parental control features may shift responsibility for safe online practices (from an under-aged user) to an adult user – who may also not be equipped to deal with the potential risks online.
There ought to be mechanisms to clearly share responsibility for ensuring the safety of children (or other vulnerable users) online - there's not one person who's responsible, so it shouldn't rest only with the child or their guardian, whether that's a parent, a teacher, or an educator. The provider of the technology must bear a certain responsibility. Big tech providers need to have responsible practices for what they are producing and deploying, and how that impacts people’s lives, and they need support mechanisms for inadvertent effects.
The government must make sure the proper regulations are in place, that providers can be held accountable, and be conscious of the accountability challenges and potential loopholes when making policies for protecting the vulnerable. There's a whole ecosystem of stakeholder involvement, where shared responsibility and layers of protection can be developed, which then ensures we are not relying mainly on parental controls for children's safety.
Some of the ways in which sites manipulate and play with expectations remind us of the discourse around abuse, with terms like ‘gaslighting.’ Do you think some sites are sometimes unhealthy, even abusive, in their treatment of young users?
Kovila: I would say there are sites that present interfaces that act in [the site’s] best interest rather than the user's. There is research looking into platforms' (deceptive) practices that encourage people to disclose more. For example, social networks integrate mechanisms to encourage user interaction and content creation that obviously benefit the platform, may somewhat benefit users' online presence, but also expose them to abuse. Then the problem is in dealing with the aftermath – who provides care to cope with the ramifications for the (young) users’ lives and wellbeing? Likely not the platform. I think you’ve raised a very interesting question indeed: how can sites and platforms be more involved in caring for the potential negative impact on people’s lives?
Jacqui: There is this concept where young people and adolescents have been shown to accept that they must give something away to get something for free. A lot of the drawings that we've looked at so far reflect that acceptance. I have found that they [the students in our study] are quite cautious, even more so than some of my students. I teach 18 to 21-year-olds and they give a lot away. I think they're much less cautious than the children who have had good online training in their schools – some of which has been excellent.
Cigdem: There’s definitely been a ‘tit-for-tat’ kind of interaction expected. However, I found that the children we worked with were very sceptical online. They question a lot and expect the worst. They have also observed adults being hacked, for example, so that knowledge is built on both self and observed experience.
Do any of you have children of that sort of age yourselves? How would you apply this sort of research as a parent?
Cigdem: I have a four-year-old and he amazes me daily. He has access to a tablet, and I have to say the pandemic played a role in him having access to more screens earlier than we might have anticipated. He already knows how to skip the content he doesn’t want to interact with. But on the other hand, I feel I must constantly monitor the content that he has access to, which makes parental involvement quite important as an educational activity as well. You must be aware and engaged.
Karen: I worked with Susie Prior at Abertay University on a mechanism we called the three M’s: Monitor, Mediate and Mentor. There is no substitute for a trusted adult being there in the child’s life, where if there is an issue, they can report to the parent for example. It becomes more important as they get older; when they’re younger you can take the device away, but not as easily when they’re teenagers, for example. You need to be an approachable adult, [with] an open and non-judgemental manner.
But equally, sometimes parents are not able to do that for whatever reason. So then that dynamic needs to be facilitated in schools.
You’re talking about children’s mental models as they’re growing up. Are you suggesting a Piagetian-style development in terms of online activity/conceptualisation?
Jacqui: There is some development that we would hopefully be seeing, but we didn’t record the ages of the children. It was just one year group, so we wouldn’t be able to draw any conclusions about that. But we may be able to show [that] children are at different stages. In Kovila’s framework, there are four different categories, and it may be that some children are at categories one and three, and perhaps some at two and four, for example. Once we’ve analysed all the data, we may be able to see ‘stages’, but it wouldn’t be linked to age as much as in Piaget’s work.
Ben: I think one of the interesting things about child developmental theories is that they stipulate things like peer learning - especially with Piaget - which suggests that there is a certain age at which peer support becomes the most important mechanism for communication between children. So, it gets to a certain point where parents take a less important role, and similar-age peers become the information-sharing source.
One of the interesting things here was that with sharing platforms like Roblox (an example we chose to use as it’s a hugely popular child-focused game), instead of those peer support networks, parents were still the predominant driving factor behind knowledge and experience of cyber threats and attacks. The children were learning vicariously from their role models: their parents rather than their peers.
So, in my opinion, I wouldn’t say it necessarily fits within a Piagetian age-based framework. I think it’s more about how that child is scaffolded around that knowledge and their experiences of security at home. It’s about having that discourse and open conversation, as Karen mentioned earlier. I imagine if you looked at that factor, you’d find children who had open conversations about online threats and harms with parents/role models would have much more sophisticated mental models than children who only see friends being hacked.
Cigdem: We did see differences in the students’ responses to the cases we presented. Some just repeated back to us what they had seen, and some explained further what was at stake in terms of their security, trying to understand the motivation behind such an attack. So here we see different depths of understanding or questioning, but it doesn’t quite fit a formal framework.
Ben: It is important to note that the scenarios we used were precursors to the ‘trapping’ situation, such as being on the website that hosts the malware, with us focusing on the transactional approach to online harms. In terms of future research, I think it would be fascinating to find out what children would do in the threatening, trapping situation in terms of coping mechanisms. Would they turn to an adult? A friend? Would they have the technical skills to troubleshoot the problem?
Cigdem: I agree completely with Ben. It was more about the ‘transactional’ way the children approached websites. If they didn't like a website, they would reload it. If it didn't go away, they would close it. These were often black and white interactions that didn’t involve controlling a website according to their privacy needs. I didn't see that happening in terms of legibility, in terms of being able to understand a legitimate warning beyond scepticism. To make things clearer to children, we must think about how to build trust into those warnings.