Paving the way towards Explainable Artificial Intelligence through Fuzzy Sets and Systems

 Prof. Jose Maria Alonso Moral – USC – Spain

June 24 2022 – 10:00-12:00

 Virtual room: https://uniurb-it.zoom.us/j/85843743428?pwd=RXZrWGZZdWtGbk9GQTVQbWhXRldNQT09

 Abstract

In the era of the Internet of Things and Big Data, data scientists aim to extract valuable knowledge from data. They first analyze, clean and pre-process the data. Then, they apply Artificial Intelligence (AI) techniques to automatically extract knowledge from it. Explainable AI (XAI) is an endeavor to evolve AI methodologies and technology by focusing on the development of agents capable of generating decisions that a human can understand in a given context, and of explicitly explaining such decisions. This makes it possible to scrutinize the underlying intelligent models and to verify whether automated decisions are made on the basis of accepted rules and principles, so that decisions can be trusted and their impact justified. Note that the target audience usually consists of human beings who expect machines to provide verbal and non-verbal explanations. Such explanations include linguistic pieces of information that embrace vague concepts, which naturally represent the imprecision and uncertainty inherent in most everyday activities.

This webinar is organized into two parts. The first part will take about 30 minutes (followed by an open discussion of about 15 minutes) and will give a general, non-technical introduction to the field of XAI: revisiting definitions and fundamentals, reviewing the state of the art, and enumerating open challenges. The second part will take about one hour and will focus on fundamentals and current research trends in the field of fuzzy sets and systems, with special emphasis on fuzzy-grounded knowledge representation and reasoning, as well as on how to use interpretable fuzzy sets and systems to deal with imprecision and uncertainty in the context of XAI. We will then see how explainable fuzzy systems can provide users with effective factual and counterfactual explanations to be integrated into interactive dialogue games, thus enhancing human-machine interaction via persuasive multimodal communication. In addition, we will look at some practical software tools as well as critical issues related to psycholinguistic human evaluation.
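To give a flavor of how fuzzy sets capture the vague linguistic concepts mentioned above, here is a minimal sketch of a membership function for the term "tall". The breakpoints (170 cm and 185 cm) are purely illustrative assumptions, not values taken from the webinar material:

```python
# Minimal sketch of a fuzzy linguistic term: instead of a crisp yes/no,
# membership in "tall" is a degree in [0, 1].
# The interval (170, 185) is a hypothetical transition region.

def tall(height_cm: float) -> float:
    """Membership degree of 'tall' for a height in cm (ramp-shaped)."""
    a, b = 170.0, 185.0  # assumed breakpoints, for illustration only
    if height_cm <= a:
        return 0.0  # definitely not tall
    if height_cm >= b:
        return 1.0  # definitely tall
    return (height_cm - a) / (b - a)  # gradual transition in between

print(tall(160.0), tall(177.5), tall(190.0))  # 0.0 0.5 1.0
```

Interpretable fuzzy systems build on such terms: rules like "IF height is tall THEN ..." operate on these degrees, which is what makes the resulting decisions explainable in natural language.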

Attendees are kindly invited to fill in this anonymous survey on factual and counterfactual explanation quality.

 Lecturer

Jose M. Alonso received his M.Sc. and Ph.D. degrees in Telecommunication Engineering, both from the Technical University of Madrid (UPM), Spain, in 2003 and 2007, respectively. He is currently a “Ramón y Cajal” researcher funded by the Spanish Government under project RYC-2016-19802, affiliated with CiTIUS-USC. He is President of the Executive Board and Deputy Coordinator of the H2020-MSCA-ITN-2019 project (Grant Agreement No 860621) entitled “Interactive Natural Language Technology for Explainable Artificial Intelligence” (NL4XAI), Chair of the IEEE-CIS Task Force on Explainable Fuzzy Systems, member of the IEEE-CIS Task Force on Explainable Machine Learning, member of the IEEE-CIS Working Group on eXplainable AI (P2976), member of the IEEE-CIS Task Force on Fuzzy Systems Software, and board member of the ACL Special Interest Group on Natural Language Generation (SIGGEN). He has published more than 150 papers in international journals, book chapters and conferences, and is co-author of the book “Explainable Fuzzy Systems – Paving the Way from Interpretable Fuzzy Systems to Explainable AI Systems”, Studies in Computational Intelligence, Springer International Publishing, 2021. His research interests include explainable artificial intelligence, interpretable fuzzy systems, natural language generation, and the development of free software tools. You can find further details at https://citius.gal/team/jose-maria-alonso-moral