<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=US-ASCII"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;
mso-fareast-language:EN-US;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:#954F72;
text-decoration:underline;}
span.StileMessaggioDiPostaElettronica17
{mso-style-type:personal-compose;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;
mso-fareast-language:EN-US;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:70.85pt 2.0cm 2.0cm 2.0cm;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=IT link="#0563C1" vlink="#954F72"><div class=WordSection1><p class=MsoNormal><span lang=EN-US>Special Issue "Advances in Explainable Artificial Intelligence"<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>[Apologies if you receive multiple copies of this CFP]<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>****************************************************************<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>Special Issue "Advances in Explainable Artificial Intelligence" <o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>MDPI Information, open access<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>Website: https://www.mdpi.com/journal/information/special_issues/advance_explain_AI<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>****************************************************************<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>The following special issue will be published in Information (ISSN 2078-2489, https://www.mdpi.com/journal/information), and is now open for submissions of full research articles and comprehensive review papers for peer review and possible publication.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>Papers will be published, after a standard peer-review procedure, in the open access journal Information.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>The official deadline for submission is 31 May 2021. However, you may submit your manuscript at any time before the deadline. 
<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>If accepted, your paper will be published promptly.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>** SPECIAL ISSUE INFORMATION<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>Machine Learning (ML)-based Artificial Intelligence (AI) algorithms can learn, from known examples, various abstract representations and models that, once applied to unknown examples, can perform classification, regression, or forecasting tasks, among others.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>Very often, these highly effective ML representations are difficult to understand; this is particularly true for Deep Learning models, which can involve millions of parameters. However, for many applications, it is of utmost importance for stakeholders to understand the decisions made by the system in order to make better use of them. Furthermore, for decisions that affect individuals, future legislation may even mandate a “right to an explanation”. Overall, improving the algorithms’ explainability may foster trust and social acceptance of AI.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>The need to make ML algorithms more transparent and more explainable has given rise to several lines of research that form an area known as explainable Artificial Intelligence (XAI). 
<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>Among the goals of XAI are: adding transparency to ML models by providing detailed information about why the system has reached a particular decision; designing more explainable and transparent ML models while maintaining high performance; and finding ways to evaluate the overall explainability and transparency of models and to quantify their effectiveness for different stakeholders.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>The objective of this Special Issue is to explore recent advances and techniques in the XAI area.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>Research topics of interest include (but are not limited to):<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Devising machine learning models that are transparent by design;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Planning for transparency, from data collection through training, testing, and production;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Developing algorithms and user interfaces for explainability;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Identifying and mitigating biases in data collection;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Performing black-box model auditing and explanation;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Detecting data bias and algorithmic bias;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Learning causal relationships;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Integrating social and ethical aspects of explainability;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Integrating explainability into existing AI systems;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Designing new explanation modalities;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Exploring theoretical aspects of explanation and 
interpretability;<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>- Investigating the use of XAI in application sectors such as healthcare, bioinformatics, multimedia, linguistics, human–computer interaction, machine translation, autonomous vehicles, risk assessment, justice, etc.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p></div></body></html>