Ganna Pogrebna

Behavioural Data Science and Responsible AI

Artificial intelligence (AI) is transforming the way we live and work, from personalised product recommendations to self-driving cars. However, as AI becomes more sophisticated, concerns are growing about the biases these systems can encode and amplify. Under these circumstances, ethical considerations need to be embedded in AI systems from the outset. This is where behavioural data science plays a critical role: it helps ensure that AI is developed and used in a responsible manner.


Behavioural data science contributes to the development of responsible AI systems in the following ways:


1. AI Risk Identification: Behavioural data science has made an important contribution to identifying AI-related risks. Specifically, behavioural data science methods were used to analyse the potential risks of emerging technologies for the Council of Europe report "A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework", prepared by the Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT). Advanced digital technologies and services, including task-specific AI, have generated substantial benefits, such as increased efficiency and convenience across various digital services. However, the emergence of these technologies has also raised public concerns about their potentially damaging effects on individuals, vulnerable groups, and society as a whole. To ensure that these technologies promote individual and societal flourishing, it is crucial to understand their impact on human rights and fundamental freedoms, and to determine where responsibility lies for any adverse consequences.


The study examines the implications of advanced digital technologies for the concept of responsibility, particularly regarding potential impediments to the enjoyment of human rights and fundamental freedoms protected under the European Convention on Human Rights (ECHR). The interdisciplinary methodological approach incorporates concepts and academic scholarship from law, the humanities, the social sciences, and computer science. The study emphasizes that states bear the primary duty to protect human rights and must ensure that those who design, develop, and deploy these technologies are held responsible for their adverse impacts.


Chapter 1 introduces AI and task-specific AI technologies, discussing their reliance on machine learning, their dynamic nature, and their ability to adapt and change over time. It also highlights the interdisciplinary "human rights perspective" adopted in the study.


Chapter 2 examines the range of adverse consequences potentially associated with the use of advanced digital technologies, such as threats to the right to a fair trial, freedom of expression and information, privacy and data protection, and protection against discrimination. The chapter also explores the implications of data-driven profiling techniques at scale, which may undermine human dignity and autonomy. Other adverse social implications considered include risks of large-scale harm from malicious attacks, unethical system design, unintended system failure, loss of authentic human contact, chilling effects of data repurposing, exercise of digital power without responsibility, hidden privatization of decisions about public values, and exploitation of human labor to train algorithms. The discussion highlights the power asymmetry between those who develop and employ AI technologies and those who interact with and are subject to them, potentially threatening collective values and interests.


Chapter 3 of the Council of Europe study focuses on the allocation of responsibility for adverse consequences caused by advanced digital technologies. It distinguishes between historic (or retrospective) responsibility, which looks backward at harm already done, and prospective responsibility, which establishes obligations for the future, and argues that both types must be considered if harm from AI technologies is to be prevented and repaired effectively. The chapter investigates how advanced digital technologies implicate existing conceptions of responsibility, emphasizing the difference between moral and legal responsibility, and focuses primarily on responsibility for human rights violations rather than tangible harm.

The chapter identifies several responsibility models, including intention/culpability, risk/negligence, strict responsibility, and mandatory insurance schemes, and assesses their suitability for allocating and distributing responsibility for adverse impacts arising from the operation of AI systems. It draws attention to several challenges in allocating responsibility for the risks and adverse impacts of complex socio-technical systems, including the "many hands" problem, human-computer interaction, and interacting algorithmic systems.

The study emphasizes that states bear the primary obligation to ensure that human rights are effectively protected, and stresses the importance of national legislation, properly resourced enforcement authorities, and accessible collective complaints mechanisms. The chapter also identifies potential non-judicial mechanisms to help secure both prospective and historic responsibility for the adverse impacts of AI systems, including impact assessment, auditing techniques, and technical protection mechanisms. It concludes by considering whether existing conceptions of human rights are fit for purpose in a global and connected digital age, suggesting the need to reinvigorate human rights discourse and develop new institutional mechanisms to safeguard against the adverse effects of new digital technologies.


Chapter 4 concludes by summarizing the findings of the study. Firstly, effective mechanisms must be in place to prevent and forestall human rights violations by advanced digital systems, which can pose substantial threats to human rights without necessarily generating tangible harm. Secondly, identifying the appropriate responsibility model for the threats and risks associated with digital technologies is a social policy choice, and states have a critical responsibility to ensure transparent and democratic decision-making that safeguards human rights. Thirdly, interdisciplinary technical research must be developed to facilitate effective technical protection mechanisms and algorithmic auditing that ensure due respect for human rights values. Fourthly, effective and legitimate governance mechanisms, instruments, and institutions must be in place to monitor, constrain, and oversee the responsible design, development, implementation, and operation of complex socio-technical systems. Finally, those who deploy and benefit from digital technologies must be held responsible for their adverse consequences, and states must ensure that both prospective and historic responsibility for risks, harms, and wrongs arising from advanced digital technologies are duly allocated. Overall, the study highlights the importance of protecting and promoting human rights in a global and connected digital age and underscores the critical role of states in ensuring that human rights are effectively safeguarded in the use of advanced digital technologies.


2. AI Governance by Human Rights-Centred Design, Deliberation and Oversight (An End to Ethics Washing): Behavioural data science methods have contributed to the in-depth analysis of the human rights-centred design, deliberation, and oversight approach to AI governance proposed in Chapter 4 of The Oxford Handbook of Ethics of AI (Oxford University Press). This chapter highlights the deficiencies of the current voluntary self-regulation model for ethical AI, which has led to conceptual incoherence and a lack of clear ethical standards within the tech industry. Ethical codes often lack a comprehensive vision and do not address the tensions and conflicts that can arise between norms. Moreover, there is no effective governance framework to independently assess and enforce ethical standards, so "ethical AI" often amounts to a marketing exercise rather than a genuine commitment.


The chapter proposes an alternative approach called "human-rights centered design, deliberation, and oversight" that is systematic, coherent, and comprehensive. This approach requires human rights norms to be considered at every stage of AI system design, development, and implementation, adapting technical methods and social and organizational approaches to ensure compliance. The regime must be mandated by law, with external oversight by independent, competent, and properly resourced regulatory authorities. However, the approach will not guarantee protection of all ethical values, as human rights norms do not cover all societal concerns. More work needs to be done to develop robust, reliable, and practical techniques and methodologies. There are also significant challenges in establishing a legal and institutional governance framework that ensures meaningful scrutiny and input from stakeholders in the design, development, and implementation of AI systems.


The proposed approach requires interdisciplinary research and cooperation between computational, engineering, and technical specialists and legal experts with competence in human rights discourse and jurisprudence. Universities must create, nurture, and deliver interdisciplinary training and education to equip professionals with the necessary skills and commitment to embed human-rights-centered principles into AI systems. The approach faces cultural challenges, including obstacles to its systematic implementation in product development lifecycles for AI, and software engineering has yet to mature into a professional discipline committed to robust technical methods and standards. Objections may also arise that legally mandated regulatory regimes will stifle innovation and hamper tech start-ups; however, evidence suggests that legal regulation can foster socially beneficial innovation. The chapter argues for a human-rights-centered approach to AI governance that requires interdisciplinary research, cooperation, and a shift in cultural attitudes within the tech industry. It is crucial to foster a language and culture of human rights consciousness to ensure the ethical development and implementation of AI systems.


3. Behavioural Data Science and Explainable AI: Behavioural data science works at the forefront of the development of explainable emerging technology systems. For example, anthropomorphic learning combines decision theory and machine learning to develop new algorithms that can be better understood by humans. It uses data to better understand human behaviour, including biases and other factors that may affect decision-making.

One example of the use of behavioural data science in responsible AI is the use of data on user behaviour to identify and address biases in AI systems. If an AI system is designed to make hiring recommendations based on resumes, it may inadvertently perpetuate biases in the hiring process if the data it is trained on reflects existing biases in the workforce. By analysing data on user behaviour and identifying patterns of bias, researchers can develop more inclusive and equitable AI systems that better reflect the needs of all users.

Another area where behavioural data science can be useful in responsible AI is the analysis of user feedback and preferences. By collecting and analysing data on how users interact with AI systems and what they find valuable, researchers can better understand how to design AI systems that meet the needs of diverse user groups. This can help ensure that AI systems are not inadvertently excluding certain users or perpetuating biases.

In addition to improving the design of AI systems, behavioural data science can be used to monitor and evaluate AI systems for bias and ethical concerns. By analysing data on the outcomes of AI systems, researchers can identify areas where biases may be present or where ethical concerns may arise. This helps ensure that AI systems are used in a responsible and ethical manner and that their impact on society is carefully monitored and evaluated.
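To make this kind of outcome monitoring concrete, here is a minimal Python sketch of a bias audit over a hiring model's recommendations. It compares selection rates across demographic groups and flags a possible disparity using the "four-fifths rule" heuristic. The data, group labels, and threshold are illustrative assumptions, not outputs of any system or method discussed in this post.

```python
# A minimal sketch of an outcome audit for a hiring-recommendation model.
# The audit log, group names, and 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(records):
    """Share of positive recommendations per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: (demographic group, recommended-for-hire flag).
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(audit_log)
ratio = disparate_impact(rates)
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule", a common screening heuristic
    print("Potential bias flagged: review the model and its training data.")
```

In this toy example, group_b is recommended at one third the rate of group_a, so the audit flags the model for review. In practice, such checks would run over real audit logs and be paired with qualitative investigation of the underlying causes rather than relying on a single threshold.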


Behavioural Data Science is a powerful tool for developing and using AI in a responsible manner. By analysing data on human behaviour and decision-making, researchers can better understand the biases and ethical concerns that may be present in AI systems and develop strategies to address these issues. It is also important to ensure that data is collected and analysed in an ethical and responsible manner and that AI systems are developed and used in ways that benefit all users and promote transparency and inclusivity. As AI continues to transform the way we live and work, behavioural data science will become an increasingly important tool for ensuring that AI is developed and used in a responsible and ethical manner.


Selected References:


Yeung, K. (2018). A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. Council of Europe, MSI-AUT(2018)05. (Karen Yeung, rapporteur; Ganna Pogrebna and Andrew Howes, contributors).


Yeung, K., Howes, A., & Pogrebna, G. (2020). AI Governance by Human Rights–Centered Design, Deliberation, and Oversight. In The Oxford Handbook of Ethics of AI (pp. 77-106). Oxford University Press.


Pogrebna, G., & Renaud, K. (2023). Big Bad Bias Book. Forthcoming.
