SPRITE+ Lunch and Learn
- spriteplus
- May 23

SPRITE+ Lunch + Learn is your chance to:
- Discover the latest developments in the world of TIPS (Trust, Identity, Privacy and Security).
- Hear short TED-style talks by experts on TIPS-related topics.
- Get your questions answered with 10 minutes of Q&A.
Join us on the third Wednesday of the month at 1:00pm (GMT) for a 20-minute lightning talk from an industry or academic expert.
Upcoming Lunch and Learn talks:
Keep an eye on this page for further updates on our 2026 Lunch and Learn talks.
Watch our previous Lunch and Learn talks:
Prof. Rahat Masood: 'From Sophistication to Coordination: Decoding the Next Generation of Social Bot Behaviour and Detection'
Social bots have advanced into highly sophisticated actors within online ecosystems, shaping public discussion, spreading misinformation, and carrying out coordinated deceptive campaigns on an unprecedented scale.
Driven by the rising threat of covert and adaptable bot operations, Prof. Rahat Masood's talk brings together three complementary investigations into the behaviour and detection of these entities. Together, these investigations offer a comprehensive view of how advanced bots operate and how they can be countered, emphasising the urgent need for adaptive, resilient, and trustworthy detection strategies in today's dynamic social media landscapes.
Prof. Emma Barrett: 'What are the TIPSS risks associated with immersive technologies?'
In this introductory talk, SPRITE+ Co-Director Emma Barrett outlines the potential risks and harms associated with immersive technologies such as virtual reality, mixed reality, and augmented reality.
The talk is aimed at people who have limited familiarity with these new technologies and their applications. Prof Barrett draws on the results of a recent scoping review of relevant research to highlight the range and nature of TIPSS risks and harms, and potential ways of mitigating harm.
Dr. Jennifer Cearns: 'Through the Looking Glass? The Collapse of Self/Other in AI Mental Health Apps'
In this talk, digital anthropologist Dr. Jennifer Cearns discusses how Generative AI is challenging some of the analytical frameworks through which we think about self and other, specifically in light of the rapid rise of AI in mental health care.
This talk was co-hosted by Digital Futures, a highly interdisciplinary network that operates across the whole range of the University’s digital research.
Dr. Emily C Collins: 'Where's the Trust? The Implications of Human Interactions for Trustworthy Robotics and AI'
Who are the users? Who are their employers? Who deploys the technology? And what do these mediating relationships have to do with knowing the best way to deploy RAI in the real-world?
In this talk, Human-Robot Interaction (HRI) researcher Dr. Emily C Collins discusses the importance of understanding human interactions in RAI (Robotics and AI) use.
Daniel Dresner: 'Don't dump that risk on me'
'Free us from security tyranny! Why do you expect me to secure multi-million pounds worth of assets on my salary? Just let me do my job!'
Daniel Dresner FCIIS is the first Professor of Cyber Security at the University of Manchester following 22 years with The National Computing Centre. In this talk, Danny discusses how setting unrealistic and unreasonable expectations that users will compensate for systemic flaws in information technology exacerbates security issues, and how we might redress the balance to improve cyber security.
Kieron O'Hara: '5 AIMS (Artificial Intelligence Management Strategies)'
Our first Lunch + Learn talk was with computer scientist and philosopher Kieron O'Hara. Following on from their recent book Four Internets, on ideology and Internet governance, Kieron O’Hara and Wendy Hall have applied its framework to Artificial Intelligence, delineating Five Artificial Intelligence Management Strategies (5 AIMS).
In this talk, Kieron draws on the theory of trust from his forthcoming book Blockchain Politics to consider how each of the 5 AIMS interprets the notion of trustworthy AI differently, and what this means for the global governance of AI.

