With Professor Georg Loho, I work on the mathematical foundations of Deep Learning. We investigate the geometry of the Newton Polytopes of Neural Networks and develop new Explainable AI (XAI) methods based on the Difference-of-Convex decomposition of Neural Networks.
Current
Fraunhofer HHI – Berlin
With Professor Wojciech Samek, I am working on new backpropagation-based XAI methods for the Transformer architecture.
Previously
Fraunhofer IOSB – Karlsruhe
I worked on runtime monitoring of image recognition models and on representation learning. I developed a knowledge-guided monitoring approach that operates on the internal representations of image recognition models and on embeddings from foundation models such as DINO.
Goals
I want machine learning models to be:
Safe and Reliable – especially in critical applications and under distribution shifts
Interpretable – transparent decision-making that humans can understand and verify
Aligned – reasoning that mirrors human intuition and values
I investigate how geometric and mathematical structures in Neural Networks can help achieve these properties, and how to explain the decision-making of Neural Networks in a way that humans can understand.
We propose an algorithm to decompose any ReLU Neural Network (CNN, MLP, ResNet) into a difference of two monotone, convex Maxout networks, and to stabilize the forward and backward passes through this decomposition. This provides a principled approach for Explainable AI, enabling separate analysis of positive and negative contributions to the network output. Our proposed saliency methods – SplitCAM and SplitLRP – improve on state-of-the-art results for both VGG16 and ResNet18 on ImageNet-S across all Quantus saliency-metric categories.
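For intuition (a minimal sketch of the general idea, not the paper's algorithm): a single ReLU unit ReLU(w·x + b) can already be written as a difference of two convex, monotone piecewise-linear (maxout-style) functions by splitting the weights into their positive and negative parts, using the identity max(a − c, 0) = max(a, c) − c.

```python
import numpy as np

def dc_relu_unit(w, b, x):
    """Evaluate ReLU(w @ x + b) as g(x) - h(x), where
    g(x) = max(P @ x + b, N @ x) and h(x) = N @ x are both convex and
    nondecreasing in x (P and N are the positive/negative parts of w)."""
    P = np.maximum(w, 0.0)            # positive part of the weights
    N = np.maximum(-w, 0.0)           # negative part of the weights
    g = np.maximum(P @ x + b, N @ x)  # maxout of two monotone affine maps
    h = N @ x                         # monotone linear map
    return g, h

rng = np.random.default_rng(0)
w, x = rng.normal(size=5), rng.normal(size=5)
b = 0.3
g, h = dc_relu_unit(w, b, x)
# g - h reproduces the ReLU output exactly
assert np.isclose(g - h, max(w @ x + b, 0.0))
```

Since g and h are maxima of affine maps with nonnegative coefficients, each is convex and monotone on its own; only their difference recovers the nonconvex ReLU response.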
Jakob Paul Zimmermann, Gerrit Holzbach, and David Lerch
arXiv:2603.25499, March 2026
Accepted at SAIAD Workshop at CVPR 2026
We introduce KGFP, a representation-based monitoring framework for detecting when an object detector is likely to miss safety-critical objects. KGFP measures the semantic misalignment between the internal features of a YOLOv8 detector and DINO foundation-model embeddings, using a dual-encoder architecture with an angular distance metric. On COCO person detection, KGFP improves person recall from 64.3% to 84.5% at a 5% false-positive rate.
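As an illustration of the distance used to compare embedding spaces (a hedged sketch of a standard angular distance, not KGFP's exact implementation):

```python
import numpy as np

def angular_distance(u, v):
    """Angular distance in [0, 1]: arccos of cosine similarity, scaled by pi.
    Unlike raw cosine similarity, this is a proper metric on unit vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
print(angular_distance(u, u))  # 0.0 (identical direction)
print(angular_distance(u, v))  # 0.5 (orthogonal)
```

A larger angular distance between a detector feature and its foundation-model embedding signals semantic misalignment, which is the kind of cue a monitor can threshold.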
K2,3-free extremal cograph on 18 vertices — drag to explore
We investigate the bipartite Turán problem on cographs and fully classify the edge-maximal cographs that avoid certain bicliques. These extremal cographs turn out to be star-shaped, highly symmetric, and beautiful. Explore more extremal cographs →
Talks & Conferences
Past
22 Jan 2026
Talk “Bipartite Turán problem on cographs” at Szabó’s Research Seminar, FU Berlin
17 Mar 2026
Poster “Hidden Monotonicity” at Workshop on Polyhedral Geometry for Neural Networks, Nürnberg
Upcoming
Jun 2026
Presentation “Hidden Monotonicity” at CVPR 2026, Denver
Jun 2026
Presentation “Knowledge-Guided Failure Prediction” at the SAIAD Workshop at CVPR 2026, Denver
I completed my bachelor's degrees in Mathematics and Computer Science at the Karlsruhe Institute of Technology (KIT).
My bachelor's thesis, “Induced Turán problems”, was supervised by Maria Axenovich and Thorsten Ueckerdt. It studies extremal problems for induced and biinduced subgraphs and their connections to the Vapnik–Chervonenkis (VC) dimension, and includes a simplified proof of the Erdős–Hajnal conjecture for graphs of bounded VC dimension.