List of publications

The publications listed here result from our global work with Z-Inspection®. They include publications in which Z-Inspection® has been applied in different domains, as well as publications co-authored by members of the Trustworthy AI Lab Norway.


Z-Inspection®: A Process to Assess Trustworthy AI


Roberto V. Zicari, John Brodersen, James Brusseau, Boris Düdder, Timo Eichhorn, Todor Ivanov, Georgios Kararigas, Pedro Kringen, Melissa McCullough, Florian Möslein, Karsten Tolle, Jesmin Jahan Tithi, Naveed Mushtaq, Gemma Roig, Norman Stürtz, Irmhild van Halem, Magnus Westerlund.


Abstract

The ethical and societal implications of artificial intelligence systems raise concerns. In this article, we outline a novel process based on applied ethics, namely, Z-Inspection®, to assess if an AI system is trustworthy. We use the definition of trustworthy AI given by the European Commission's High-Level Expert Group on AI. Z-Inspection® is a general inspection process that can be applied to a variety of domains where AI systems are used, such as business, healthcare, and the public sector, among many others. To the best of our knowledge, Z-Inspection® is the first process to assess trustworthy AI in practice.


IEEE Transactions on Technology and Society,
VOL. 2, NO. 2, JUNE 2021


DOWNLOAD THE PAPER


Co-design of a Trustworthy AI System in Healthcare: Deep Learning based Skin Lesion Classifier


Roberto V. Zicari, Sheraz Ahmed, Julia Amann, Stephan Alexander Braun, John Brodersen, Frédérick Bruneault, James Brusseau, Erik Campano, Megan Coffee, Andreas Dengel, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Philippe Gottfrois, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Pedro Kringen, Ulrich Kühne, Adriano Lucieri, Vince I. Madai, Pedro A. Moreno-Sánchez, Oriana Medlicott, Matiss Ozols, Eberhard Schnebel, Andy Spezzatti, Jesmin Jahan Tithi, Steven Umbrello, Dennis Vetter, Holger Volland, Magnus Westerlund, Renee Wurth.


Abstract

This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a...


Published on 13 July 2021 in Front. Hum. Dyn.
doi: https://doi.org/10.3389/fhumd.2021.688152

On Assessing Trustworthy AI in Healthcare. Machine Learning as a Supportive Tool to Recognize Cardiac Arrest in Emergency Calls


Roberto V. Zicari, James Brusseau, Stig Nikolaj Blomberg, Helle Collatz Christensen, Megan Coffee, Marianna B. Ganapini, Sara Gerke, Thomas Krendl Gilbert, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Ulrich Kühne, Vince I. Madai, Walter Osika, Andy Spezzatti, Eberhard Schnebel, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth, Julia Amann, Vegard Antun, Valentina Beretta, Frédérick Bruneault, Erik Campano, Boris Düdder, Alessio Gallucci, Emmanuel Goffi, Christoffer Bjerre Haase, Thilo Hagendorff, Pedro Kringen, Florian Möslein, Davi Ottenheimer, Matiss Ozols, Laura Palazzani, Martin Petrin, Karin Tafur, Jim Tørresen, Holger Volland, Georgios Kararigas.


Abstract

Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC), recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are...


Front. Hum. Dyn., 08 July 2021

https://lnkd.in/dkFmauY


Lessons Learned from Assessing Trustworthy AI in Practice


Dennis Vetter, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, Roberto V. Zicari & Z-Inspection® initiative (2022)


Abstract

Building artificial intelligence (AI) systems that adhere to ethical standards is a complex problem. Even though a multitude of guidelines for the design and development of such trustworthy AI systems exist, these guidelines focus on high-level and abstract requirements for AI systems, and it is often very difficult to assess if a specific system fulfills these requirements. The Z-Inspection® process provides a holistic and dynamic framework to evaluate the trustworthiness...



Digital Society (DSO),  2, 35 (2023). Springer

Link:

https://link.springer.com/article/10.1007/s44206-023-00063-1

How to Assess Trustworthy AI in Practice


Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth


On behalf of the Z-Inspection® initiative (2022)

Abstract

This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general European Union's High-Level Expert Group (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the lifecycle of an AI system.


Cite as: arXiv:2206.09887 [cs.CY] [v1] Mon, 20 Jun 2022 16:46:21 UTC (463 KB)

The full report is available on arXiv.

Download the full report as .PDF

For the full list of publications, go to Publications – Z-Inspection

