Highly complex software drives business and markets. For example, software is the foundation of high-frequency trading on stock markets: algorithms react to market changes within fractions of a second and buy or sell the corresponding securities. Events like the 2010 flash crash are the result of such software, yet the reasons for and causes of that crash are still not entirely clear. Similarly, software-based credit scoring is widely used to automatically assess the creditworthiness of a person based on multiple data points about that person as well as historical data on the creditworthiness of similar persons. Software is also the enabling factor for autonomous driving, where it supports and/or realizes functions such as speed and distance control, lane centering, and object detection. The challenge of understanding such software is exacerbated by the fact that these functions are increasingly realized by machine-learned components, e.g., various kinds of neural networks.
All these software systems have a major impact on society, and their failure would have strongly negative consequences. However, the software's decision-making is often highly adaptive and increasingly dependent on data, so that its decisions are no longer easily understandable by people. Consequently, we need to improve our ability to explain software.
Explanations have different aspects. For example, an explanation has a certain explanation goal, e.g., why a certain user's loan request was denied, or what the deciding factors in the rejection were. An explanation is also targeted at a certain role, e.g., a developer or an end user. An explanation might also have a (configurable) degree of abstraction; it will have a rationale and may provide different types of visualizations and interactions. In a way, explainability has to become a new quality attribute of software systems.
The EXPLAIN workshop will bring together researchers from various sub-disciplines of computer science, e.g., formal methods, algorithm engineering, software architecture, artificial intelligence, human-computer interaction, and visual analytics, as providing explanations requires cross-cutting research spanning several of these sub-disciplines.
Specifically, the workshop seeks contributions related to, but not limited to, the following topics:
This will be the first edition of the EXPLAIN workshop. The workshop will focus on (1) the identification of problems, e.g., how to ensure understandable explanations and which aspects of a software's decisions should be explained, (2) the discussion of ideas for how to provide explanations, and (3) building a community for explainable software.
We welcome 4-page research papers, experience reports, and position papers. Research papers are expected to describe new research results and make contributions to the body of knowledge in the area. Experience reports are expected to describe experiences with (amongst other things) providing, creating, and using explanations in the development, deployment, and maintenance of software. Position papers are expected to discuss controversial issues or describe interesting or thought-provoking ideas that are not yet fully developed.
All papers need to follow the general formatting guidelines and policies. Submissions not conforming to these will be desk-rejected.
Papers are submitted via EasyChair.