Publications

A Measure Theoretical Approach to the Mean-field Maximum Principle for Training NeurODEs
B. Bonnet, C. Cipriani, M. Fornasier and H. Huang. Nonlinear Analysis, 2023.

Algebraic Optimization of Sequential Decision Problems
M. Dressler, M. Garrote-López, G. Montúfar, J. Müller and K. Rose. Journal of Symbolic Computation, 2023.

Analysis of Convolutions, Non-linearity and Depth in Graph Neural Networks using Neural Tangent Kernel (Preprint)
M. Sabanayagam, P. Esser and D. Ghoshdastidar.

Analyzing the Sample Complexity of Self-Supervised Image Reconstruction Methods (Preprint)
T. Klug, D. Atik and R. Heckel.

Computability of Optimizers (Preprint)
Y. Lee, H. Boche and G. Kutyniok.

Conditional Generative Models Are Provably Robust: Pointwise Guarantees for Bayesian Inverse Problems
F. Altekrüger, P. Hagemann and G. Steidl. TMLR, 2023.

Continuous Limits of Residual Neural Networks in Case of Large Input Data
M. Herty, A. Thünen, T. Trimborn and G. Visconti. Communications in Applied and Industrial Mathematics, 2023.

Convergent Data-driven Regularizations for CT Reconstruction (Preprint)
S. Kabri, A. Auras, D. Riccio, H. Bauermeister, M. Benning, M. Moeller and M. Burger.

Critical Points and Convergence Analysis of Generative Deep Linear Networks Trained with Bures-Wasserstein Loss
P. Bréchet, K. Papagiannouli, J. An and G. Montúfar. Proceedings of the 40th International Conference on Machine Learning, 2023.

Deep ReLU neural networks overcome the curse of dimensionality for partial integrodifferential equations
L. Gonon and C. Schwab. Analysis and Applications, 2023.

Embedding Capabilities of Neural ODEs (Preprint)
C. Kuehn and S. Kuntz.

Enumeration of Max-pooling Responses with Generalized Permutohedra (Preprint)
L. Escobar, P. Gallardo, J. González-Anaya, J. L. González, G. Montúfar and A. H. Morales.

Examples for Separable Control Lyapunov Functions and Their Neural Network Approximation
L. Grüne and M. Sperl. IFAC-PapersOnLine, 2023.

Expected Gradients of Maxout Networks and Consequences to Parameter Initializations
H. Tseran and G. Montúfar. Proceedings of the 40th International Conference on Machine Learning, 2023.

Finite Sample Identification of Wide Shallow Neural Networks with Biases (Preprint)
M. Fornasier, T. Klock, M. Mondelli and M. Rauchensteiner.

From NeurODEs to AutoencODEs: a Mean-field Control Framework for Width-varying Neural Networks (Preprint)
C. Cipriani, M. Fornasier and A. Scagliotti.

Function Space and Critical Points of Linear Convolutional Networks (Preprint)
K. Kohn, G. Montúfar, V. Shahverdi and M. Trager.

Generalization Analysis of Message Passing Neural Networks on Large Random Graphs
S. Maskey, Y. Lee, R. Levie and G. Kutyniok. Advances in Neural Information Processing Systems, 2022.

Generalized normalizing flows via Markov Chains
P. Hagemann, J. Hertrich and G. Steidl. Cambridge University Press, 2023.

Generative Sliced MMD Flows with Riesz Kernels (Preprint)
J. Hertrich, C. Wald, F. Altekrüger and P. Hagemann.

Geometry and Convergence of Natural Policy Gradient Methods
J. Müller and G. Montúfar. Information Geometry, 2023.

Guidelines for Data-driven Approaches to Study Transitions in Multiscale Systems: the Case of Lyapunov Vectors
A. Viennet, N. Vercauteren, M. Engel and D. Faranda. Chaos, 2022.

Implicit Bias of Gradient Descent for Mean Squared Error Regression with Wide Neural Networks
H. Jin and G. Montúfar. Journal of Machine Learning Research, 2023.

Improved Representation Learning Through Tensorized Autoencoders
P. Esser, S. Mukherjee, M. Sabanayagam and D. Ghoshdastidar. AISTATS, 2023.

Invariance-Aware Randomized Smoothing Certificates
J. Schuchardt and S. Günnemann. NeurIPS, 2022.

Learning Provably Robust Estimators for Inverse Problems via Jittering (Preprint)
A. Krainovic, M. Soltanolkotabi and R. Heckel.

Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks
P. Esser, L. C. Vankadara and D. Ghoshdastidar. NeurIPS, 2021.

Localized Randomized Smoothing for Collective Robustness Certification
J. Schuchardt, T. Wollschläger, A. Bojchevski and S. Günnemann. ICLR, 2023.

Mildly Overparametrized ReLU Networks Have a Favorable Loss Landscape (Preprint)
K. Karhadkar, M. Murray, H. Tseran and G. Montúfar.

Multilevel Diffusion: Infinite Dimensional Score-Based Diffusion Models for Image Generation (Preprint)
P. Hagemann, L. Mildenberger, M. Ruthotto, G. Steidl and N.T. Yang.

Non-Parametric Representation Learning with Kernels (Preprint)
P. Esser, M. Fleissner and D. Ghoshdastidar.

On the Effectiveness of Persistent Homology
R. Turkeš, G. Montúfar and N. Otter. NeurIPS, 2022.

On the Generalization Analysis of Adversarial Learning
W. Mustafa, Y. Lei and M. Kloft. PMLR, 2023.

PatchNR: Learning From Very Few Images by Patch Normalizing Flow Regularization
F. Altekrüger, A. Denker, P. Hagemann, J. Hertrich, P. Maass and G. Steidl. Inverse Problems, 2023.

Positive Lyapunov Exponent in the Hopf Normal Form with Additive Noise (Preprint)
D. Chemnitz and M. Engel.

Random Feature Neural Networks Learn Black-Scholes Type PDEs Without Curse of Dimensionality
L. Gonon. Journal of Machine Learning Research, 2023.

Randomized Message-Interception Smoothing: Gray-Box Certificates for Graph Neural Networks
Y. Scholten, J. Schuchardt, S. Geisler, A. Bojchevski and S. Günnemann. NeurIPS, 2022.

Recent Trends on Nonlinear Filtering for Inverse Problems
M. Herty, E. Iacomini and G. Visconti. Communications in Applied and Industrial Mathematics, 2022.

Representation Learning Dynamics of Self-Supervised Models (Preprint)
P. Esser, S. Mukherjee and D. Ghoshdastidar.

Reproducing Kernel Hilbert Spaces in the Mean Field Limit
C. Fiedler, M. Herty, M. Rom, C. Segala and S. Trimpe. Kinetic and Related Models, 2023.

Resolution-Invariant Image Classification Based on Fourier Neural Operators
S. Kabri, T. Roith, D. Tenbrinck and M. Burger. International Conference on Scale Space and Variational Methods in Computer Vision, 2023.

Scaling Laws for Deep Learning Based Image Reconstruction
T. Klug and R. Heckel. ICLR, 2023.

Separable approximations of optimal value functions under a decaying sensitivity assumption (Preprint)
M. Sperl, L. Saluzzi, L. Grüne and D. Kalise.

Stable Recovery of Entangled Weights: Towards Robust Identification of Deep Neural Networks from Minimal Samples
C. Fiedler, M. Fornasier, T. Klock and M. Rauchensteiner. Applied and Computational Harmonic Analysis, 2023.

Stochastic Normalizing Flows for Inverse Problems: a Markov Chains Viewpoint
P. Hagemann, J. Hertrich and G. Steidl. SIAM/ASA Journal on Uncertainty Quantification, 2022.

Structure Preserving Neural Networks: A Case Study in the Entropy Closure of the Boltzmann Equation
S. Schotthöfer, T. Xiao, M. Frank and C. Hauck. PMLR, 2022.

Supermodular Rank: Set Function Decomposition and Optimization (Preprint)
R. Sonthalia, A. Seigal and G. Montúfar.

The ELBO of Variational Autoencoders Converges to a Sum of Entropies
S. Damm, D. Forster, D. Velychko, Z. Dai, A. Fischer and J. Lücke. PMLR, 2022.

The Lyapunov Spectrum for Conditioned Random Dynamical Systems (Preprint)
M. M. Castro, D. Chemnitz, H. Chu, M. Engel, J. S. W. Lamb and M. Rasmussen.

Turnpike Properties of Optimal Boundary Control Problems with Random Linear Hyperbolic Systems
M. Gugat and M. Herty. ESAIM: Control, Optimisation and Calculus of Variations, 2023.

Using neural networks to accelerate the solution of the Boltzmann equation
T. Xiao and M. Frank. Journal of Computational Physics, 2021.

News & Blog

Online Info Event

October 18, 2023

This information session is for anyone interested in Phase 2 of the priority program. It will take place on October 18 from 2-4 pm via Zoom. Please click the link below to register.

Call for Proposals

June 23, 2023

If you are interested in joining Phase 2 of the Theoretical Foundations of Deep Learning Priority Program, please click below to find out more about how to apply.

Public Lecture

May 26, 2023

Michael Möller, a principal investigator of the SPP, was invited to give a public talk at the University of Siegen. To find out more, please click the link below.