International workshop coordinated by Aikaterini Tzompanaki, ETIS laboratory.
In recent years, methods of Explainable Artificial Intelligence (xAI) have been developed, especially with the goal of making opaque machine-learned models (e.g., Deep Learning (DL)) transparent, interpretable, and comprehensible. However, in many real use cases, e.g., for debugging DL models, cleaning training data, or transferring models across domains, merely establishing transparency, interpretability, and comprehensibility is not enough to act on and learn from ML models or the data used to train them.
Actionable xAI (aXAI) focuses on xAI methods that support safer and more effective human/AI decision making in various disciplines (e.g., healthcare, precision agriculture, security). In particular, aXAI focuses on more expressive forms of explanations that can answer not only why questions (why do we obtain a specific prediction for a given input data?) but also why-not (why don't we obtain an alternative prediction for particular input data?), how-to (what are the necessary actions to change the prediction for specific input data?), and what-if (what are the necessary and minimal sets of actions on input data required to obtain an alternative prediction?). Answers to these questions are crucial in order to act on the models and data used in various classification, regression, or recommendation tasks.
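To make these question types concrete, the following is a minimal, purely illustrative sketch (not any specific aXAI method from the workshop): a brute-force search for a minimal set of feature changes that flips the prediction of a toy linear classifier, which is one simple way to answer a how-to/what-if question. All names, the model, and the search strategy here are assumptions for illustration only.

```python
# Illustrative sketch only: a brute-force how-to/what-if explanation
# for a toy linear classifier. The model, weights, and search strategy
# are hypothetical, not part of any specific aXAI method.
import itertools
import numpy as np

# Toy binary classifier: predicts 1 if w . x + b > 0
w = np.array([1.5, -2.0, 0.5])
b = -0.2

def predict(x):
    return int(np.dot(w, x) + b > 0)

def counterfactual(x, target, deltas=(-1.0, -0.5, 0.5, 1.0)):
    """Search for a minimal set of feature changes that flips the
    prediction of x to `target` (a simple how-to/what-if answer)."""
    n = len(x)
    # Try changing 1 feature, then 2, ... and return the first success,
    # so the returned set of changes is minimal in the number of features.
    for k in range(1, n + 1):
        for features in itertools.combinations(range(n), k):
            for changes in itertools.product(deltas, repeat=k):
                x_new = x.copy()
                for f, d in zip(features, changes):
                    x_new[f] += d
                if predict(x_new) == target:
                    return {f: d for f, d in zip(features, changes)}
    return None  # no counterfactual found within the search space

x = np.array([0.0, 0.5, 1.0])
print("why (current prediction):", predict(x))
print("how-to / what-if (changes that flip it):",
      counterfactual(x, target=1 - predict(x)))
```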