This project explores how human dignity may be impacted by an AI-based decision support system for post-Covid health certification. Drawing from law and moral philosophy, we relate the definition and substantive content of human dignity to two aspects: (1) recognition of the status of human beings as agents with autonomy and rational capacity to exercise judgement, reasoning, and choice; and (2) respectful treatment of human agents so that their capacity is not diminished or lost through interaction with or use of the technology. We identify components, sub-components, and related concepts of human dignity to translate into algorithms. These algorithms are then used to design an agent-based behavioural simulation model of the health certification process and AI-based decision support system. In a closed computer-based environment the simulation model utilises scenarios that indicate undermining or loss of human dignity (e.g. coercion, manipulation, deception, loss of autonomy). Part of the challenge is to see whether it is possible to represent human dignity as algorithms to determine behavioural changes, which can then be used as a proxy for understanding the impact of an AI-based decision support system.
The project has identified several legal-philosophical components that constitute human dignity, primarily the status of human beings as autonomous agents with rational capacity. On this account, a system treats human dignity fairly when it treats human agents respectfully, does not diminish these capacities, and makes decisions in a way that puts the person's interests first.
The team has developed an algorithmic design for a human dignity-aware decision support system (DSS), together with an agent-based simulation of the pre-conditions for such a system.
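To make the agent-based approach concrete, the following is a minimal sketch of how dignity components could be tracked as state on an individual agent and decremented when an interaction exhibits harmful behaviour. All names, components, decrement values, and the threshold are illustrative assumptions, not the project's actual model.

```python
from dataclasses import dataclass, field

# Hypothetical dignity components drawn from the legal-philosophical
# analysis: autonomy, rational capacity, respectful treatment.
DIGNITY_COMPONENTS = ("autonomy", "rational_capacity", "respectful_treatment")

@dataclass
class Individual:
    """An agent whose dignity state is tracked across interactions."""
    name: str
    dignity: dict = field(
        default_factory=lambda: {c: 1.0 for c in DIGNITY_COMPONENTS}
    )

@dataclass
class Interaction:
    """One DSS decision step, flagged with behaviours that may impact dignity."""
    coercion: bool = False
    manipulation: bool = False
    deception: bool = False

def apply_interaction(person: Individual, step: Interaction) -> None:
    """Reduce dignity components when an interaction is harmful.
    The decrements are illustrative placeholders, not calibrated values."""
    if step.coercion:
        person.dignity["autonomy"] -= 0.2
    if step.manipulation:
        person.dignity["rational_capacity"] -= 0.2
    if step.deception:
        person.dignity["respectful_treatment"] -= 0.2

def dignity_preserved(person: Individual, threshold: float = 0.5) -> bool:
    """Pre-condition check: every component must stay above the threshold."""
    return all(v >= threshold for v in person.dignity.values())

# Example run: two interactions, one coercive.
alice = Individual("alice")
apply_interaction(alice, Interaction())
apply_interaction(alice, Interaction(coercion=True))
print(dignity_preserved(alice))  # True: autonomy at 0.8, above threshold
```

A check like `dignity_preserved` is one way to express, inside a closed simulation, the scenario conditions (coercion, manipulation, deception) that the project uses as proxies for undermining or loss of human dignity.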
Three use case scenarios were developed to represent real-life contexts in which an individual interacts with a human dignity-aware DSS to access a vaccine and obtain a vaccine credential. These scenarios represented:
Different options which may be available to an individual, and how an individual may act in relation to their interactions with the DSS and other agents.
The role of the DSS, the health certification authority, and the service provider.
Human dignity components which may be impacted when an individual interacts with the DSS.
Factors which may affect an individual’s interaction with the DSS (e.g., time, technical ability, human factors, AI).
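The four representation axes above can be captured in a simple scenario record. This is a hypothetical sketch of such a structure; the field names and example values are assumptions for illustration, not the project's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    """One use case scenario along the four representation axes."""
    name: str
    options: List[str]             # choices available to the individual
    agents: List[str]              # DSS, certification authority, provider, ...
    dignity_components: List[str]  # components potentially impacted
    factors: List[str]             # time, technical ability, human factors, AI

# Illustrative instance for the vaccine-credential use case.
vaccine_access = Scenario(
    name="access vaccine and obtain credential",
    options=["consent to DSS decision", "request human review", "decline"],
    agents=["individual", "DSS", "health certification authority",
            "service provider"],
    dignity_components=["autonomy", "rational capacity",
                        "respectful treatment"],
    factors=["time pressure", "technical ability", "human factors", "AI"],
)

# A simple query over scenarios: which ones put autonomy at risk?
at_risk = [s.name for s in [vaccine_access]
           if "autonomy" in s.dignity_components]
print(at_risk)  # ['access vaccine and obtain credential']
```

Structuring scenarios this way makes the mapping from legal-philosophical components to simulation inputs explicit and queryable.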
This project is the first attempt at designing and developing a human dignity-aware AI system, combining source expertise on human dignity grounded in law and moral philosophy with AI expertise in algorithm mapping and design, and simulation modelling. It creates a “human dignity-aware AI design” method that connects source expert knowledge on human dignity with guidelines to inform the development of future human dignity-aware AI systems. By representing the AI system as part of a process in which it may operate autonomously or semi-autonomously, alongside other agents, and where there may be intervening acts by other agents, an evaluation can be made as to: (i) whether the system directly or indirectly impacts an individual’s human dignity; (ii) the causation of that impact; and (iii) legal responsibility and liability for harm, damage, or loss.
This project is innovative because human dignity has not previously been explored as an indicator of whether a technology is ethically designed, developed, and deployed, nor has it been used to understand behavioural change induced by deployed technology. If an AI-based decision support system is used for health status certification to determine the extent to which a person can enter public spaces and premises and access resources or services, then there is potential for that person’s human dignity to be undermined or lost.
The team will collate, evaluate, and write up the research findings in a high impact peer-reviewed journal. The team also aims to seek further funding to develop the existing basis of the research, and expand it to other domains.