Workshop on
Explainable AI and Security
July 6th, 2026, in Lisbon, Portugal
co-located with the 11th IEEE European Symposium on Security and Privacy

Keynotes

Barbara Hammer, University of Bielefeld

Barbara researches theory and algorithms in machine learning and their application to technical systems and the life sciences, including explainability and learning with drift. She is firmly embedded in the ML community, serving on the review board for machine learning of the German Research Foundation (DFG), the selection committee for fellowships of the Alexander von Humboldt Foundation, and the Scientific Directorate of Schloss Dagstuhl. She will share her expertise on explainability and on when to trust an explanation.

Konrad Rieck, TU Berlin & BIFOLD

Konrad's research revolves around computer security and machine learning, developing novel methods for detecting computer attacks, analyzing malicious software, and discovering security vulnerabilities. For the latter in particular, he has recently used Explainable AI to improve task-solving performance. His work is regularly published at top computer security venues, including IEEE S&P, USENIX Security, ACM CCS, and ISOC NDSS.

Junqi Jiang, J.P. Morgan Trustworthy AI Center of Excellence

Junqi works on Trustworthy AI, focusing in particular on robust explainable AI and robust counterfactual explanations. He studies these topics both in traditional machine learning for tabular data and in large language models (LLMs) for textual data. He publishes his work at top venues such as NeurIPS, ICLR, AAAI, and IJCAI, and is well embedded in the machine learning community.

Programme

The following times are in UTC+0.

08:30–09:00 Opening and Welcome
09:00–10:00 Keynote 1
Barbara Hammer, University of Bielefeld
10:00–11:30 Paper/Poster Session 1
11:30–12:30 Keynote 2
Junqi Jiang, J.P. Morgan Trustworthy AI Center of Excellence
12:30–14:00 Lunch
14:00–15:00 Keynote 3
Konrad Rieck, TU Berlin & BIFOLD
15:00–16:00 Paper/Poster Session 2
16:00–16:30 Break
16:30–17:30 Hands-on/Tutorial Session
17:30– Final Discussion & Closing Remarks

Call for Papers

Important Dates

  • Paper submission deadline: January 29th, 2026 (AoE, UTC-12)
  • Acceptance notification: March 18th, 2026 (AoE, UTC-12)
  • Camera-ready due: April 10th, 2026 (tentative)
  • Workshop day: July 6th, 2026

Overview

The XAISEC Workshop aims to bridge the computer security and the machine-learning communities at the intersection of Explainable AI (XAI) and security. Naturally, the security community utilizes XAI to address computer security tasks, such as malware detection, vulnerability discovery, and even the detection of attacks against AI. However, both communities also work on the security and robustness of XAI—unfortunately, largely independently of each other. In light of the close collaboration (and success stories) in the field of "adversarial machine learning" during the past decade, this observation is not only a curiosity but a missed opportunity.

Scope of Papers

We invite the ML and security communities to submit papers on either the use of Explainable AI for computer security tasks or the security of Explainable AI itself. Submissions are expected to be at most 6 pages, excluding references and well-marked appendices.

Topics of Interest

Topics of interest include but are not limited to:

  • Innovative applications of XAI for computer security and the analysis of the security of AI models
  • Robustness analysis of XAI
  • Vulnerabilities of XAI
  • Novel explanation techniques that are more robust (to attacks)
  • New datasets, benchmarks, and challenges to assess the security and robustness of AI and XAI

Submission Guidelines

Papers must be submitted as a single PDF document, must be anonymized for double-blind review, and must be written in English. Submissions shall not exceed 6 pages of body text, with unlimited additional pages for references and appendices; reviewers are not expected to read the appendices when deciding whether to accept or reject a paper. Moreover, submissions must be typeset in LaTeX in A4 format (not "US Letter") using the IEEE conference proceedings template we supply. Please do not use other IEEE templates.

Submissions must not substantially overlap with papers that have been published or that are simultaneously submitted to a journal or conference with proceedings. Also, authors should refer to their previous work in the third person. Accepted papers will be published in IEEE Xplore. One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.

Proactive Prevention of Harm

We expect authors to carefully consider and address the potential harms associated with carrying out their research, as well as the potential negative consequences that could stem from publishing their work. Failure to adequately discuss such potential harms within the body of the submission may result in rejection, regardless of the submission's quality and scientific value.

Open Science Expectations

In line with the main conference, we expect researchers to maximize the scientific and community value of their work by making it as open as possible. This means that, by default, all of the code, data, and other materials (such as survey instruments) needed to reproduce the work described in an accepted paper will be released publicly under an open-source license. Sometimes it is not possible to share work this openly, such as when it involves malware samples, data from human subjects that must be protected, or proprietary data obtained under an agreement that precludes publishing the data itself. All submissions are therefore encouraged to include a clear Data Availability statement that explains how the artifacts needed to reproduce the work will be shared, or why they will not be.

AI Guidelines

The use of AI-generated content (including but not limited to text, figures, images, and code) shall be disclosed in the acknowledgments section. At the time of submission, the acknowledgments do not count towards the page limit. The AI system used shall be identified, and the specific sections of the article that use AI-generated content shall be identified and accompanied by a brief explanation of the extent to which the AI system was used to generate the content. The use of AI systems for editing and grammar enhancement is common practice and, as such, generally falls outside the intent of this policy. In that case, disclosure as noted above is not required but is recommended.

Submission Site

All accepted submissions must be presented at the workshop as posters.

Submission link: https://submission.intellisec.de/xaisec-2026.

For any questions, please contact one of the workshop organizers at

Committee

Workshop Chairs

Program Committee

  • tba