Barbara researches theory and algorithms in machine learning and their application to technical systems and the life sciences, including explainability and learning with drift. She is deeply embedded in the ML community, serving on the review board for machine learning of the German Research Foundation (DFG), the selection committee for fellowships of the Alexander von Humboldt Foundation, and the Scientific Directorate of Schloss Dagstuhl. She will share her expertise on explainability and on when to trust an explanation.
Konrad's research revolves around computer security and machine learning, developing novel methods for detecting computer attacks, analyzing malicious software, and discovering security vulnerabilities. For the latter in particular, he has recently used Explainable AI to improve task-solving performance. His work is regularly published at top computer security venues, including IEEE S&P, USENIX Security, ACM CCS, and ISOC NDSS.
Junqi works on Trustworthy AI, focusing on robust explainable AI and, in particular, robust counterfactual explanations. He works on these topics both in traditional machine learning for tabular data and in large language models (LLMs) for textual data. He publishes his work at top venues such as NeurIPS, ICLR, AAAI, and IJCAI, and is well embedded in the machine learning community.
| Time | Session |
|---|---|
| 08:30–09:00 | Opening and Welcome |
| 09:00–10:00 | Keynote 1: Barbara Hammer, University of Bielefeld |
| 10:00–11:30 | Paper/Poster Session 1 |
| 11:30–12:30 | Keynote 2: Junqi Jiang, J.P. Morgan Trustworthy AI Center of Excellence |
| 12:30–14:00 | Lunch |
| 14:00–15:00 | Keynote 3: Konrad Rieck, TU Berlin & BIFOLD |
| 15:00–16:00 | Paper/Poster Session 2 |
| 16:00–16:30 | Break |
| 16:30–17:30 | Hands-on/Tutorial Session |
| 17:30– | Final Discussion & Closing Remarks |
The XAISEC Workshop aims to bridge the computer security and the machine-learning communities at the intersection of Explainable AI (XAI) and security. Naturally, the security community utilizes XAI to address computer security tasks, such as malware detection, vulnerability discovery, and even the detection of attacks against AI. However, both communities also work on the security and robustness of XAI—unfortunately, largely independently of each other. In light of the close collaboration (and success stories) in the field of "adversarial machine learning" during the past decade, this observation is not only a curiosity but a missed opportunity.
We invite the ML and security communities to submit papers on either using Explainable AI for computer security tasks or the security of Explainable AI. Submissions are expected to have at most 6 pages, excluding references and well-marked appendices.
Topics of interest include but are not limited to:
Papers must be submitted as a single PDF document, must be anonymous (double-blind review) and written in English, and shall not exceed 6 pages of body text, with unlimited additional pages for references and appendices.
Reviewers are not expected to read the appendices while deciding whether to accept or reject the paper.
Moreover, submissions must be typeset in LaTeX in A4 format (not "US Letter") using the IEEE conference proceedings template we supply.
Please do
Submissions must not substantially overlap with papers that have been published or that are simultaneously submitted to a journal or conference with proceedings. Authors should also refer to their own previous work in the third person. Accepted papers will be published in IEEE Xplore.
We expect authors to carefully consider and address the potential harms associated with carrying out their research, as well as the potential negative consequences that could stem from publishing their work. Failure to adequately discuss such potential harms within the body of the submission may result in rejection of a submission, regardless of its quality and scientific value.
In line with the main conference, our expectation is that researchers will maximize the scientific and community value of their work by making it as open as possible. This means that, by default, all code, data, and other materials (such as survey instruments) needed to reproduce the work described in an accepted paper will be released publicly under an open-source license. Sometimes it is not possible to share work this openly, for example when it involves malware samples, data from human subjects that must be protected, or proprietary data obtained under an agreement that precludes publishing the data itself. All submissions are encouraged to include a clear Data Availability statement that explains how the artifacts needed to reproduce the work will be shared, or why they will not be shared.
The use of AI-generated content (including but not limited to text, figures, images, and code) shall be disclosed in the acknowledgments section.
All accepted submissions must be presented at the workshop as posters. One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
Submission link: https://submission.intellisec.de/xaisec-2026.
For any questions, please contact one of the workshop organizers at