The polygenic risk score (PRS) is an important method for assessing genetic susceptibility to traits and diseases. Significant progress has been made in applying PRS to conditions such as obesity, cancer, and type 2 diabetes. Studies have demonstrated that PRS can effectively identify individuals at high risk, enabling early screening, personalized treatment, and targeted interventions.
One of the current limitations of PRS, however, is the lack of interpretability tools. To address this problem, a team of researchers at the Graduate School of Data Science at Seoul National University introduced eXplainable PRS (XPRS), an interpretation and visualization tool that decomposes PRSs into gene/region and single nucleotide polymorphism (SNP) contribution scores via Shapley additive explanations (SHAP), providing insight into the specific genes and SNPs that contribute most to an individual's PRS. [1]
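To make the idea concrete, the sketch below (a hypothetical illustration with assumed data, not the XPRS implementation) shows how a linear PRS can be decomposed into per-SNP and per-gene contribution scores; for a purely additive model, the SHAP value of each SNP reduces to its effect size times the deviation of the individual's allele dosage from the population mean.

```python
import numpy as np

# Minimal sketch (assumed data, not the XPRS code): a PRS is a weighted sum
# of risk-allele dosages, and for a linear/additive model the exact SHAP
# value of SNP i is beta_i * (dosage_i - mean population dosage_i).
betas = np.array([0.12, -0.05, 0.30, 0.08])             # per-SNP effect sizes (assumed)
gene_of_snp = ["GENE_A", "GENE_A", "GENE_B", "GENE_B"]  # SNP-to-gene mapping (assumed)
population = np.array([[0, 1, 2, 1],
                       [1, 1, 0, 2],
                       [2, 0, 1, 0]])                   # reference allele dosages (0/1/2)
individual = np.array([2, 1, 1, 0])                     # one individual's dosages

prs = float(individual @ betas)                         # the individual's raw PRS

# Per-SNP contribution scores (exact SHAP values for an additive model).
snp_contrib = betas * (individual - population.mean(axis=0))

# Aggregate SNP contributions into gene/region contribution scores.
gene_contrib = {}
for gene, phi in zip(gene_of_snp, snp_contrib):
    gene_contrib[gene] = gene_contrib.get(gene, 0.0) + float(phi)

print(f"PRS = {prs:.3f}")
print("per-SNP contributions:", np.round(snp_contrib, 3))
print("per-gene contributions:", {g: round(v, 3) for g, v in gene_contrib.items()})
```

XPRS itself computes the decomposition from the fitted PRS model; the snippet only shows the additive special case, where the SHAP values have a simple closed form.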
A software system has been implemented featuring a multilevel visualization approach, including Manhattan plots, LocusZoom-like plots, and tables at the population and individual levels, to highlight important genes and SNPs. The system includes a user-friendly web interface that allows for straightforward data input and interpretation. By bridging the gap between complex genetic data and actionable clinical insights, XPRS promises to improve communication between clinicians and patients.
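As a rough illustration of the population-level view (again an assumption-laden sketch with simulated values, not the XPRS plotting code), a Manhattan-style plot simply lays the per-SNP contribution scores out along the genome, grouped by chromosome:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical Manhattan-style plot of per-SNP contribution scores
# (simulated data; XPRS produces its own population- and individual-level plots).
rng = np.random.default_rng(0)
chrom = np.repeat(np.arange(1, 23), 50)            # 22 chromosomes, 50 SNPs each (assumed)
pos = np.tile(np.arange(50), 22)                   # within-chromosome SNP index (assumed)
contrib = rng.normal(scale=0.02, size=chrom.size)  # simulated per-SNP contribution scores

x = pos + (chrom - 1) * 50                         # lay chromosomes side by side
colors = np.where(chrom % 2 == 0, "tab:blue", "tab:gray")
plt.scatter(x, np.abs(contrib), s=6, c=colors)
plt.xlabel("genomic position (SNPs grouped by chromosome)")
plt.ylabel("|contribution score|")
plt.title("Manhattan-style view of SNP contributions (illustrative)")
plt.show()
```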
The authors of [2] report that "the ethical implications of such polygenic risk score applications in medicine and their confluence with the use of artificial intelligence have not yet been sufficiently considered". They point to "emerging complexity regarding fairness, challenges in building trust, explaining and understanding artificial intelligence and polygenic risk scores as well as regulatory uncertainties and further challenges", and advocate taking "a proactive approach to embedding ethics in research and implementation processes for polygenic risk scores driven by artificial intelligence".
In this best practice, we use a co-design approach to help the various stakeholders embed ethics across the whole span of the design and implementation of the XPRS system. For that, we use Z-Inspection®, an ethically aligned Trustworthy AI co-design methodology.
In this project, we focus on the part of the co-design process that helps identify possible exposures when designing a system. In the framework for Trustworthy AI, exposures are defined as the ethical, technical, domain-specific, and legal issues related to the use of the AI system.
Team
Heejin Kim, Na Yeon Kim, Seunggeun Lee, Yuna Park, Roberto V. Zicari
Graduate School of Data Science, Seoul National University (SNU)
Elisabeth Hildt
Center for the Study of Ethics in the Professions, Illinois Institute of Technology, Chicago, USA
Magnus Westerlund
Arcada University of Applied Sciences, Helsinki, Finland
Ralf Beuthan
Department of Philosophy, Myongji University, Seoul
Emilie Wiinblad Mathez
Z-inspection® Initiative, Geneva, Switzerland.
Megan Coffee
Department of Medicine and Division of Infectious Diseases and Immunology, NYU Grossman School of Medicine, New York, USA
Stephan Sonnenberg
Seoul National University School of Law
HaeKyung Lee
Yonsei University
References
[1] XPRS: A Tool for Interpretable and Explainable Polygenic Risk Score. Na Yeon Kim, Seunggeun Lee. doi: https://doi.org/10.1101/2024.10.24.24316050
[2] Ethical layering in AI-driven polygenic risk scores—New complexities, new challenges. Marie-Christine Fritzsche et al. Front. Genet. 14 (2023), Sec. ELSI in Science and Genetics, 26 January 2023.
The Trustworthy AI Lab at GSDS, SNU is participating in this pilot project of the Z-Inspection® initiative, which aims to assess the use of Generative AI in higher education based on specific use cases.
For this pilot project, we will assess the ethical, technical, domain-specific (i.e., education), and legal implications of using a Generative AI product/service within the university context.
We follow the UNESCO guidance for policymakers on AI and education, in particular Policy Recommendation 6: pilot testing, monitoring and evaluation, and building an evidence base.
Approach
An interdisciplinary team of experts will assess the trustworthiness of Generative AI for selected use cases in higher education using the Z-Inspection® process.
For more info: https://z-inspection.org/pilot-project-assessing-trustworthiness-of-the-use-of-generative-ai-for-higher-education/
“Responsible use of AI” Pilot Project with the Province of Fryslân, Rijks ICT Gilde & the Z-Inspection® Initiative.
This report shares the experiences, results and lessons learned from conducting a pilot project, 'Responsible use of AI', in cooperation with the Province of Fryslân and Rijks ICT Gilde, part of the Ministry of the Interior and Kingdom Relations (BZK) (both in the Netherlands), together with a group of members of the Z-Inspection® Initiative.
The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the Province of Fryslân was assessed. The AI maps heathland grassland from satellite images to support the monitoring of nature reserves. Environmental monitoring is one of the crucial activities carried out by society for purposes ranging from maintaining standards for drinkable water to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support decisions is becoming an important part of environmental monitoring.
The main focus of this report is to share the experiences, results and lessons learned from performing a Trustworthy AI assessment using the Z-Inspection® process and the EU framework for Trustworthy AI, and combining it with a Fundamental Rights assessment using the Fundamental Rights and Algorithms Impact Assessment (FRAIA), as recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.
Comments: On behalf of the Z-Inspection® Initiative
Subjects: Computers and Society (cs.CY)
Cite: arXiv:2404.14366 [cs.CY] (or arXiv:2404.14366v1 [cs.CY] for this version)
Link: https://arxiv.org/abs/2404.14366
(Follow the link to download the PDF of the report.)
Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements.
The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness of specific AI systems at different stages of the AI lifecycle, including intended use, design, and development. It focuses, in particular, on the discussion and identification of ethical issues and tensions through the analysis of socio-technical scenarios and a requirement-based framework for ethical and trustworthy AI.
This article is a methodological reflection on the Z-Inspection® process. We illustrate how high-level guidelines for ethical and trustworthy AI can be applied in practice and provide insights for both AI researchers and AI practitioners. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of real-world AI systems, as well as key recommendations and practical suggestions on how to ensure a rigorous trustworthiness assessment throughout the lifecycle of an AI system.
The results presented in this article are based on our assessments of AI systems in the healthcare sector and environmental monitoring, where we used the framework for trustworthy AI proposed in the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on AI. However, the assessment process and the lessons learned can be adapted to other domains and include additional frameworks.
The ethical and societal implications of artificial intelligence systems raise concerns. In this article, we outline a novel process based on applied ethics, namely, Z-Inspection®, to assess whether an AI system is trustworthy. We use the definition of trustworthy AI given by the European Commission's High-Level Expert Group on AI. Z-Inspection® is a general inspection process that can be applied to a variety of domains where AI systems are used, such as business, healthcare, and the public sector, among many others. To the best of our knowledge, Z-Inspection® is the first process to assess trustworthy AI in practice.
Emergency medical dispatchers fail to identify approximately 25% of cases of out-of-hospital cardiac arrest and thus lose the opportunity to provide the caller with instructions in cardiopulmonary resuscitation. A team led by Stig Nikolaj Blomberg (Emergency Medical Services Copenhagen, and Department of Clinical Medicine, University of Copenhagen, Denmark) examined whether a machine learning framework could recognize out-of-hospital cardiac arrest from audio files of calls to the emergency medical dispatch center.
We worked with Stig Nikolaj Blomberg and his team and applied the Z-inspection® process to assess the ethical, technical and legal implications of using machine learning in this context.
The main contribution of our work is to show the use of an ethically aligned co-design methodology to ensure trustworthiness in the early design of an artificial intelligence (AI) system component for healthcare. The system aims to explain the decisions made by deep learning networks when used to analyze images of skin lesions. For that, we use a holistic process, called Z-Inspection®, which requires a multidisciplinary team of experts working together with the AI designers and their managers to explore and investigate possible ethical, legal and technical issues that could arise from the future use of the AI system. Our work addresses the need for the co-design of trustworthy AI using a holistic approach, rather than static ethical checklists. Our results can also serve as guidance for other early-phase developments of similar AI tools.
The paper's main contributions are twofold: to demonstrate how to apply the European Union's High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI in practice in the domain of healthcare; and to investigate the research question of what “trustworthy AI” means at the time of the COVID-19 pandemic.
To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic.
The AI system aims to help radiologists estimate and communicate the severity of damage in a patient's lung from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia (Italy) since December 2020, during the pandemic. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses socio-technical scenarios to identify ethical, technical and domain-specific issues in the use of the AI system in the context of the pandemic.