Research
New Pilot Project: Assessing the Trustworthiness of the Use of Generative AI in Higher Education
The Trustworthy AI Lab at GSDS, SNU is participating in this pilot project of the Z-Inspection® Initiative, which aims to assess the use of Generative AI in higher education through specific use cases.
For this pilot project, we will assess the ethical, technical, domain-specific (i.e., education) and legal implications of the use of Generative AI products and services within the university context.
We follow the UNESCO guidance for policymakers on AI and education, in particular Policy Recommendation 6: Pilot testing, monitoring and evaluation, and building an evidence base.
Approach
An interdisciplinary team of experts will assess the trustworthiness of Generative AI for selected use cases in higher education using the Z-Inspection® process.
For more info: https://z-inspection.org/pilot-project-assessing-trustworthiness-of-the-use-of-generative-ai-for-higher-education/
Best Practices
“Responsible use of AI” Pilot Project with the Province of Fryslân, Rijks ICT Gilde & the Z-Inspection® Initiative.
This report shares the experiences, results and lessons learned from conducting the pilot project “Responsible use of AI” in cooperation with the Province of Fryslân, Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations, BZK), both in the Netherlands, and a group of members of the Z-Inspection® Initiative.
The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the Province of Fryslân was assessed. The AI maps heathland grassland from satellite images for monitoring nature reserves. Environmental monitoring is one of the crucial activities carried out by society for purposes ranging from maintaining drinking-water standards to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support such decisions is becoming an important part of environmental monitoring.
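The report describes a deep learning algorithm; the following is only a minimal sketch of the underlying pixel-classification task, using a simpler random forest. The band layout, the NDVI feature, and all names are illustrative assumptions, not the province's actual pipeline:

```python
# Illustrative sketch only: pixel-wise vegetation classification from
# multispectral satellite bands. Not the province's actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for labelled training pixels: (n_pixels, n_bands) reflectances,
# bands assumed ordered [red, nir, ...]; labels 1 = heathland, 0 = other.
X_train = rng.random((500, 4))
y_train = rng.integers(0, 2, 500)

def add_ndvi(bands: np.ndarray) -> np.ndarray:
    """Append NDVI = (NIR - red) / (NIR + red), a standard vegetation index."""
    red, nir = bands[:, 0], bands[:, 1]
    ndvi = (nir - red) / (nir + red + 1e-8)
    return np.column_stack([bands, ndvi])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(add_ndvi(X_train), y_train)

# Classify every pixel of a new scene, then reshape back to the image grid.
scene = rng.random((120 * 80, 4))            # flattened 120x80 tile
heath_mask = clf.predict(add_ndvi(scene)).reshape(120, 80)
```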
The main focus of this report is to share the experiences, results and lessons learned from performing a Trustworthy AI assessment using the Z-Inspection® process and the EU Framework for Trustworthy AI, combined with a fundamental rights assessment using the Fundamental Rights and Algorithms Impact Assessment (FRAIA), as recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.
Comments: On behalf of the Z-Inspection® Initiative
Subjects: Computers and Society (cs.CY)
Cite: arXiv:2404.14366 [cs.CY] (or arXiv:2404.14366v1 [cs.CY] for this version)
Link: https://arxiv.org/abs/2404.14366
(Follow the link to download the PDF of the report.)
Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Although a multitude of guidelines for the design and development of trustworthy AI systems exist, they focus on high-level, abstract requirements, and it is often very difficult to assess whether a specific system fulfills them.
The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI.
This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system.
The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.
The ethical and societal implications of artificial intelligence systems raise concerns. In this article, we outline a novel process based on applied ethics, namely Z-Inspection®, to assess whether an AI system is trustworthy. We use the definition of trustworthy AI given by the European Commission’s High-Level Expert Group on AI. Z-Inspection® is a general inspection process that can be applied to a variety of domains where AI systems are used, such as business, healthcare, and the public sector, among many others. To the best of our knowledge, Z-Inspection® is the first process for assessing trustworthy AI in practice.
Emergency medical dispatchers fail to identify approximately 25% of cases of out-of-hospital cardiac arrest, and thus lose the opportunity to give the caller instructions in cardiopulmonary resuscitation. A team led by Stig Nikolaj Blomberg (Emergency Medical Services Copenhagen, and Department of Clinical Medicine, University of Copenhagen, Denmark) examined whether a machine learning framework could recognize out-of-hospital cardiac arrest from audio files of calls to the emergency medical dispatch center.
We worked with Stig Nikolaj Blomberg and his team and applied the Z-Inspection® process to assess the ethical, technical and legal implications of using machine learning in this context.
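Neither the study nor our assessment reproduces the dispatch system's code; purely as a minimal sketch, a call-audio classifier of this kind could be built over log-mel spectrograms. The architecture, layer sizes, and input format below are illustrative assumptions, not the Copenhagen system:

```python
# Minimal sketch of a binary classifier over call-audio spectrograms.
# All architectural choices are assumptions for illustration.
import torch
import torch.nn as nn

class CallAudioClassifier(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # pool over time and frequency
        )
        self.head = nn.Linear(32, 1)   # logit for "suspected cardiac arrest"

    def forward(self, mel_spec: torch.Tensor) -> torch.Tensor:
        # mel_spec shape: (batch, 1, n_mels, time_frames)
        return self.head(self.features(mel_spec).flatten(1))

model = CallAudioClassifier()
clips = torch.randn(2, 1, 64, 400)     # two dummy log-mel spectrograms
probs = torch.sigmoid(model(clips))    # per-clip arrest probability
print(probs.shape)                     # torch.Size([2, 1])
```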
The main contribution of our work is to show the use of an ethically aligned co-design methodology to ensure trustworthiness in the early design of an artificial intelligence (AI) system component for healthcare. The component is intended to explain the decisions made by deep learning networks when they are used to analyze images of skin lesions. For that, we use a holistic process, called Z-Inspection®, which requires a multidisciplinary team of experts working together with the AI designers and their managers to explore and investigate possible ethical, legal and technical issues that could arise from the future use of the AI system. Our work addresses the need for the co-design of trustworthy AI using a holistic approach, rather than static ethical checklists. Our results can also serve as guidance for the early-phase development of similar AI tools.
The paper's main contributions are twofold: to demonstrate how to apply the European Union High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI in practice in the healthcare domain, and to investigate the research question of what “trustworthy AI” means at the time of the COVID-19 pandemic.
To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic.
The AI system aims to help radiologists estimate and communicate the severity of damage to a patient's lungs from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020, during the pandemic. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses socio-technical scenarios to identify ethical, technical and domain-specific issues in the use of the AI system in the context of the pandemic.
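For a concrete sense of what a multi-regional score conveys, here is a worked toy example assuming six lung regions, each graded 0-3 and summed into a global score; the six-region, 0-3 convention is an illustrative assumption, and the paper's exact scheme may differ:

```python
# Toy example of aggregating per-region severity grades into a global score.
# The six-region, 0-3 grading scheme here is an assumption for illustration.
regional_grades = {
    "right-upper": 2, "right-middle": 3, "right-lower": 1,
    "left-upper": 0,  "left-middle": 2,  "left-lower": 1,
}
global_score = sum(regional_grades.values())   # range 0 (healthy) .. 18
print(f"global severity: {global_score}/18")   # -> global severity: 9/18
```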