This article was edited by SPRITE+ Research Associate Dmitry Dereshev, with responses and edits from the Director of the Centre for Computing and Social Responsibility, Professor Bernd Stahl.
The spotlight today is on Bernd Stahl – Professor of Critical Research in Technology, Director of the Centre for Computing and Social Responsibility at De Montfort University, and a SPRITE+ Expert Fellow. Some of Bernd's latest publications include:
How would you describe your job to a 12-year-old?
I look at the ethics of technology. My job is to think about how new technologies affect the world, and what we can do about them. A key topic at the moment is the ethical issues of artificial intelligence (AI). When somebody builds a self-driving car, for example, they have to make sure that the car reacts to road situations appropriately.
I work with people who develop and deploy these new technologies, to get them to ask the right questions and make sure that advanced tech like self-driving cars does not run anyone over or destroy the environment.
Could you describe what you do during a typical workday?
Hours and hours of virtual meetings 😄. I spend my working days glued to a computer doing research. I work with people from across Europe and beyond.
Some meetings have to do with research content, like: what data do we collect? What do we collect it for? How do we analyse it? How do we write papers about it? What audiences do we cater to? A lot of it also has to do with administration, project plans, and budgets. I spend the majority of my time communicating within my university, and with other universities and non-university partners discussing and coordinating various projects.
Could you describe a challenging project that you’ve recently worked on?
I currently coordinate the SHERPA project. SHERPA is an EU project that looks at the ethical and human rights aspects of big data and AI. The project has been running for a bit over 2 years now, and has another year to go.
We have conducted case studies, scenarios, legal and ethical analyses, and looked at possible interventions. We have also looked at the technology and standardisation, and developed guidance for developers and users.
The challenge now is to bring all that together to create a coherent narrative, and to come up with recommendations that make sense and fit into the current landscape that we can then communicate to decision makers and policy makers.
What specific technologies have you worked with?
We are interested in questions of model poisoning, how that would happen, and how you can defend against that. More broadly, we are interested in working with organisations who either develop or deploy smart information systems. We try to cover a spectrum from the very techie people who build systems to the people who employ and use them.
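As a purely illustrative aside (not an example from the SHERPA project), the following Python sketch shows one simple form of model poisoning and a very crude defence; the toy data, the nearest-centroid "model", the injected points, and the filtering rule are all made-up assumptions.

```python
# Illustrative sketch only: an attacker injects mislabelled training points to
# poison a toy classifier, and a crude outlier filter partially repairs it.
import random
random.seed(0)

def make_data(n):
    """1-D toy data: class 0 clusters around -2, class 1 around +2."""
    out = []
    for _ in range(n):
        label = random.randint(0, 1)
        out.append((random.gauss(-2.0 if label == 0 else 2.0, 1.0), label))
    return out

def train(data):
    """The 'model' is just the mean (centroid) of each class."""
    return {c: sum(x for x, y in data if y == c) / sum(1 for _, y in data if y == c)
            for c in (0, 1)}

def predict(model, x):
    return min(model, key=lambda c: abs(x - model[c]))

def accuracy(model, data):
    return sum(predict(model, x) == y for x, y in data) / len(data)

def filter_outliers(data, cutoff=4.0):
    """Crude defence: drop training points far from their own class's centroid."""
    model = train(data)
    return [(x, y) for x, y in data if abs(x - model[y]) < cutoff]

train_set, test_set = make_data(500), make_data(500)

# Attack: inject mislabelled extreme points so class 1's centroid is dragged left.
poisoned = train_set + [(-10.0, 1)] * 100

print("clean model:     ", accuracy(train(train_set), test_set))
print("poisoned model:  ", accuracy(train(poisoned), test_set))
print("after filtering: ", accuracy(train(filter_outliers(poisoned)), test_set))
```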
Smartphones and tablets make heavy use of AI, so they are a good example. You have companies like Google who really know what's under the hood, but the end users typically do not. Even the intermediaries who are closer to the technology, like app developers and distributors, don't really know what is going on inside it. Even if you are a computer scientist and have a perfect theoretical understanding of how computers work, that does not mean that you have any idea of what happens in your phone at any given time.
How well do people who make laws about these technologies understand them?
The people who work on legislation tend to be very clued up. They often have a technical background themselves. They may not understand the bits and bytes of any single neural network, but they certainly have a clear understanding of the capabilities of tech.
As part of the SHERPA project, we ran a training event for European Commission policy officers, and we had about 80 of them in the room. To me, that demonstrated a high level of interest, and they also had good questions and good insights. They have a lot of technical expertise, but also other kinds. I suspect it is similar in the UK.
I think the more fundamental question is: what exactly is the ethics of what they want to regulate? What are the ethical issues? How do we define them? What bits of technology can and do we want to regulate? Where does it make sense? Where does it not make sense?
What do you think “ethical” actually means in ethical AI and smart information systems?
I think it is becoming very publicly visible that some technologies have non-technical consequences. Cambridge Analytica was one public highlight, and the Snowden revelations very clearly showed the potential of comprehensive state surveillance across probably the entire world. I think these scandals have really driven awareness of these questions, and if you look at the newspapers today, they have stories that have an ethical technology aspect to them.
You also have philosophers who look at ethical theory. You have millennia of consequence-based ethical theories, duty-based ones, and virtue-based ones. At the same time, there is little agreement among philosophers about how best to understand practical ethical issues and what to do about them. There is a pluralist theoretical position that says: ethical issues are what people perceive them to be. That is what we've done in the SHERPA project; we asked people: “what do you think is an ethical problem?”, and if they said: “this is an ethical problem”, we just accepted that it was, without trying to reinterpret it from a particular ethical perspective.
It is a broad field and many different players have different perspectives on what we should focus on. At the moment, there is a lot of legislation happening through the European Commission that will probably be around risk assessment for AI liability, the question of: “who is responsible for AI if something goes wrong with a particular system?”
Laws can be too vague to reliably implement them in code. Have you encountered this as a problem?
Absolutely. There are different interpretations as to what those laws mean. A related example is the question of explainability of AI in the General Data Protection Regulation (GDPR): you should be able to explain what the technology does. But what does “to explain” mean in this case? There are many different interpretations of “explainable AI”. A techie may want to implement something where they can trace what happens in a neural network, but that may be completely irrelevant to a customer of a bank whose mortgage application was just declined by the system; they have no interest in what happens inside the algorithm, but they want an explanation as to why they have been denied that mortgage.
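To make that contrast concrete, here is a minimal, purely hypothetical Python sketch (not from the interview or the SHERPA project): the toy credit model, its weights, and the wording of the reason are invented, but it shows that an internal trace and a customer-facing reason are two very different kinds of "explanation".

```python
import math

# Toy linear credit-scoring model: the weights, bias, and applicant values below
# are invented for illustration only.
WEIGHTS = {"income": 0.8, "existing_debt": -1.2, "years_employed": 0.5}
BIAS = -0.3

applicant = {"income": 0.4, "existing_debt": 0.9, "years_employed": 0.2}

def score(features):
    """Developer-level 'trace': the raw computation inside the model."""
    z = BIAS + sum(WEIGHTS[f] * v for f, v in features.items())
    return 1 / (1 + math.exp(-z))  # probability of approval

def applicant_facing_reason(features):
    """Customer-level 'explanation': which factor pushed the decision down most."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    worst = min(contributions, key=contributions.get)
    return f"Your application was declined mainly because of your {worst}."

p = score(applicant)
print(f"internal trace: approval probability = {p:.2f}")  # what the techie can inspect
print(applicant_facing_reason(applicant))                  # what the customer actually asks for
```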
I think the interpretations of some of the key terms differ vastly between different disciplines, and between people with different backgrounds. When I talk to technologists they often say: “what you are saying is too woolly, I need to be able to program it”. The concepts we use in law and ethics simply aren't of that sort – I can't give you a definition that is completely watertight, which you can then turn into a bit of code.
What training/experience did you have at the start of your career?
I started my career as a soldier. After school I joined the West German army, and stayed there for 12 years specialising in artillery and air reconnaissance. As part of that, I went to study industrial engineering at the University of the German Armed Forces in Hamburg. One of the conditions there was that technologists had to do humanities and social sciences, so I had to do a module on ethics and technology, which I found very interesting.
I served as a forward observer, and spent a lot of time trying to figure out where exactly I stood and where the enemy was, which involved a lot of mathematics and computing. But then all of a sudden, there was the big bang where the artillery hit. That brought science into reality in a completely different way from anything I had experienced before.
The application of technology in the army shows that the consequences of the use of technology may be much more radical than you would imagine sitting in a classroom doing a computer science degree. That was one of the motivators for me to look at why we do this and how we justify it. The military experience was really important in my personal journey because it gave me this completely different view of what technology is and what it does.
How did you get into your current role?
When I left the Army, I started a PhD. Halfway through that I was offered a lecturing position in Dublin that was funded by the German Academic Exchange Service. I brought together business ethics and information technology work, and ended up writing a PhD on responsibility and information systems.
I finished my PhD in the early 2000s, and my contract in Dublin ran out. My topic was not very prominent at the time, and there weren't many places where you could do research on ethics and technology. At De Montfort University they had established the Centre for Computing and Social Responsibility, which is still a leading centre in this area, and they had a vacancy. That's how I got there. I worked my way up the career ladder, and about 10 years ago I became the director. That's where I have been ever since.
What do you wish you'd known when you started your career?
My career was not planned. I certainly never thought I would end up a professor in Leicester, which I didn't even know existed at the time when I started to pursue a research-related career. I think it was a collection of opportunities and a bit of luck. I don't think I have major regrets – it has been going okay so far.
What would you recommend to people who want to follow in your footsteps?
I am not sure anyone would want to follow in my footsteps. I have taken a very idiosyncratic route through life, which worked for me but probably would not work for many other people.
If I were to give advice to people who are at an earlier stage of their career, it would be something generic: follow your instinct, do what you find interesting, hope for the best, and sometimes it works out.
What troubles did you have progressing through your career?
The most difficult bit in my career was the break between my years in the Army and going into academia. That was a big cultural change. I wanted this change, but it was still difficult. I was already over 30 when I started my PhD, which made things harder, and for a while I wasn't sure it was going to work out. There were a few years when I was not sure academia was a good fit for me, and I didn't know what the options were. What saved me was that job in Ireland. That international move was very conducive to my career, and ever since it's been fairly plain sailing.
What one stereotype would you like to dispel about your job or industry?
One stereotype is that people who do ethics are completely removed from real life. I think that's not true. There is a lot of technical awareness and understanding of how technology is used in organisations, and people try to bring some ethical reflection to those questions.
How would you describe your research or business interest in relation to SPRITE+?
Privacy is definitely the most discussed individual topic under the heading of ethics and technology. Issues around privacy include the questions of why we protect privacy, what we mean by it, and what the consequences are. There are also much deeper philosophical questions around what it means to be a person, and how technology changes the way I see myself or how other people see me. I think ethics is a constituent part of SPRITE+, and that's what I have been doing.
How do you hope to benefit from working with the SPRITE+ network?
It is always important to be a part of the scientific discourse that covers both the technology and the applications. SPRITE+ is a place where experts from different areas with different backgrounds can come together and exchange ideas. While in the SHERPA project I do stuff around the ethics of AI, I do not necessarily know what the cutting-edge things in AI are. By being part of the conversation, I benefit from a better understanding of what the technologies are, what the applications are, and what other approaches, reflections, safeguards and policies are being discussed. I see being up to date with these debates as very beneficial.
Which of the SPRITE+ Challenge Themes can you relate to from the job that you do? How does it impact your role?
I think the one with the most obvious resonance in the title is Accountability and Ethics in the Digital Ecosystem. That's where questions of regulation, legislation, and self-policing are discussed.
The theme of power and control is also extremely important in ethics. With regard to AI, the big tech companies use it to cement both economic and political power. That has an important ethical angle to it.