Trustworthy AI Lab

Graduate School of Data Science, Seoul National University

Trustworthy AI

Applications based on Machine Learning and/or Deep Learning carry specific (mostly unintentional) risks that are considered within AI ethics. As a consequence, the quest for trustworthy AI has become a central issue for governance and technology impact assessment efforts, and attention to it has grown over the last four years, with a focus on identifying both ethical and legal principles.


The mission of the Trustworthy AI Lab is to advance AI research, education, and policy, and to offer best practices that support ethical, responsible, mindful, sustainable, and trustworthy AI and AI-based applications.


The laboratory is interdisciplinary, bringing together international researchers from a variety of disciplines, and provides a peer forum for academic and industrial research and applications.

In addition, the Lab offers workshops and continuing-education opportunities for citizens, local and regional private entities, professionals, engineers, developers, and students who want to learn more about the ethical and trustworthy use of AI.


The Lab collaborates closely with international initiatives such as the Z-Inspection® Initiative.

Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general guidelines for trustworthy AI of the European Union’s High-Level Expert Group on AI (EU HLEG).

The process is published in IEEE Transactions on Technology and Society.


Z-Inspection® is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.


Z-Inspection® is listed in the OECD Catalogue of AI Tools & Metrics:

https://oecd.ai/en/catalogue/tools/z-inspection

Best Practices

Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment

“Responsible use of AI” Pilot Project with the Province of Fryslân, Rijks ICT Gilde & the Z-Inspection® Initiative.

This report shares the experiences, results, and lessons learned from conducting the pilot project “Responsible use of AI” in cooperation with the Province of Fryslân and Rijks ICT Gilde, part of the Ministry of the Interior and Kingdom Relations (BZK) (both in the Netherlands), together with a group of members of the Z-Inspection® Initiative.


The pilot project ran from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the Province of Fryslân was assessed. The AI maps heathland grassland from satellite images to support the monitoring of nature reserves. Environmental monitoring is one of the crucial activities society carries out for purposes ranging from maintaining drinking-water standards to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support such decisions is becoming an important part of environmental monitoring.


The main focus of this report is to share the experiences, results, and lessons learned from performing a Trustworthy AI assessment, using the Z-Inspection® process and the EU framework for Trustworthy AI, combined with a Fundamental Rights assessment using the Fundamental Rights and Algorithms Impact Assessment (FRAIA), which the Dutch government recommends for the use of AI algorithms by Dutch public authorities.


Comments: On behalf of the Z-Inspection® Initiative

Subjects: Computers and Society (cs.CY)


Cite: arXiv:2404.14366 [cs.CY] (or arXiv:2404.14366v1 [cs.CY] for this version)


Link: https://arxiv.org/abs/2404.14366

(By following the link you can download the PDF of the report.)

Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Although a multitude of guidelines for the design and development of such trustworthy AI systems exist, they focus on high-level and abstract requirements, and it is often very difficult to assess whether a specific system fulfills them.

The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI. 

This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system. 

The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.

The ethical and societal implications of artificial intelligence systems raise concerns. In this article, we outline a novel process based on applied ethics, namely Z-Inspection®, to assess whether an AI system is trustworthy. We use the definition of trustworthy AI given by the European Commission’s High-Level Expert Group on AI. Z-Inspection® is a general inspection process that can be applied to a variety of domains where AI systems are used, such as business, healthcare, and the public sector, among many others. To the best of our knowledge, Z-Inspection® is the first process for assessing trustworthy AI in practice.

Emergency medical dispatchers fail to identify approximately 25% of cases of out-of-hospital cardiac arrest, and thus lose the opportunity to give the caller instructions in cardiopulmonary resuscitation. A team led by Stig Nikolaj Blomberg (Emergency Medical Services Copenhagen, and Department of Clinical Medicine, University of Copenhagen, Denmark) examined whether a machine learning framework could recognize out-of-hospital cardiac arrest from audio files of calls to the emergency medical dispatch center.

We worked with Stig Nikolaj Blomberg and his team and applied the Z-Inspection® process to assess the ethical, technical, and legal implications of using machine learning in this context.

The main contribution of our work is to show the use of an ethically aligned co-design methodology to ensure the trustworthy early design of an artificial intelligence (AI) system component for healthcare. The system aims to explain the decisions made by deep learning networks when they are used to analyze images of skin lesions. For that, we use a holistic process, called Z-Inspection®, which requires a multidisciplinary team of experts working together with the AI designers and their managers to explore and investigate possible ethical, legal, and technical issues that could arise from the future use of the AI system. Our work addresses the need for the co-design of trustworthy AI using a holistic approach, rather than static ethical checklists. Our results can also serve as guidance for the early-phase development of similar AI tools.

The paper’s main contributions are twofold: to demonstrate how to apply the general European Union High-Level Expert Group (EU HLEG) guidelines for trustworthy AI in practice in the healthcare domain, and to investigate the research question of what “trustworthy AI” means at the time of the COVID-19 pandemic.

To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multi-regional score that conveys the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic.

The AI system aims to help radiologists estimate and communicate the severity of damage in a patient’s lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020, during the pandemic. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses socio-technical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.