Arnulf Jentzen
The Chinese University of Hong Kong (CUHK), Shenzhen, China
University of Münster, Germany
Vacant PhD and postdoctoral positions: Several PhD and postdoctoral positions are available
in my research group, both at the University of Münster in
Germany and at The Chinese University of Hong Kong, Shenzhen in China.
I am especially interested in candidates with a strong
background in differential geometry, dynamical systems,
algebraic geometry, functional analysis, analysis of partial
differential equations, or
stochastic analysis.
Interested candidates are welcome to contact me by email at ajentzen
(at) uni-muenster.de.
Address at The Chinese University of Hong Kong, Shenzhen:
Prof. Dr. Arnulf Jentzen
School of Data Science & Shenzhen Research Institute of Big Data
The Chinese University of Hong Kong, Shenzhen
Dao Yuan Building
2001 Longxiang Road
Longgang District, Shenzhen
China
Phone (Secretariat): +86 755 23517035
Office hour: by appointment
Email: ajentzen (at) cuhk.edu.cn
Address at the University of Münster:
Prof. Dr. Arnulf Jentzen
Institute for Analysis and Numerics
Applied Mathematics Münster
Faculty of Mathematics and Computer Science
University of Münster
Einsteinstraße 62
48149 Münster
Germany
Phone (Secretariat): +49 251 8333792
Office hour: by appointment
Email: ajentzen (at) uni-muenster.de
Links:
[Homepage at the CUHK Shenzhen]
[Homepage at the University of Münster]
[Personal homepage]
[Mathematics Münster: Research Areas]
[University of Münster Webmail]
Scientific profiles:
[Profile on Google Scholar]
[Profile on ResearchGate]
[Profile on MathSciNet]
[Profile on Scopus]
[ORCID]
[ResearcherID]
Last update of this homepage: December 31st, 2022
Research areas
 Dynamical systems and gradient flows (geometric properties of gradient flows, domains of attraction,
blow-up phenomena for gradient flows, critical points, center-stable manifold theorems, Kurdyka–Łojasiewicz functions)
 Analysis of partial differential equations (well-posedness and regularity analysis for partial differential equations)
 Stochastic analysis (stochastic calculus, well-posedness and regularity analysis for
stochastic ordinary and partial differential equations)
 Machine learning (mathematics for deep learning,
stochastic gradient descent methods, deep neural networks,
empirical risk minimization)
 Numerical analysis (computational stochastics/stochastic numerics, computational finance)
Short Curriculum Vitae
2004–2007  Diploma studies in Mathematics, 
 Faculty of Computer Science and Mathematics, Goethe University Frankfurt 
2007–2009  PhD studies in Mathematics, 
 Faculty of Computer Science and Mathematics, Goethe University Frankfurt 
2009–2010  Assistant Professor (Akademischer Rat a.Z.), 
 Faculty of Mathematics, Bielefeld University 
2011–2012  Research Fellowship (German Research Foundation), 
 Program in Applied and Computational Mathematics, Princeton University 
2012–2019  Assistant Professor for Applied Mathematics, 
 Department of Mathematics, ETH Zurich 
2019–  Full Professor, 
 Faculty of Mathematics and Computer Science, University of Münster 
2021–  Presidential Chair Professor, 
 School of Data Science, The Chinese University of Hong Kong, Shenzhen 
Selected awards
 Felix Klein Prize, European Mathematical Society (EMS), 2020
 Joseph F. Traub Prize for Achievement in Information-Based Complexity, 2022
Research group
Current members of the research group
 Robin Graeber (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
 Prof. Dr. Arnulf Jentzen (Head of the research group)
 Shokhrukh Ibragimov (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
 Timo Kröger (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
 Dr. Benno Kuckuck (Postdoc at the Faculty of Mathematics and Computer Science, University of Münster)
 Adrian Riekert (PhD Student at the Faculty of Mathematics and Computer Science, University of Münster)
 Florian Rossmannek (PhD Student at D-MATH, ETH Zurich, joint supervision with Prof. Dr. Patrick Cheridito)
 Philippe von Wurstemberger (PhD Student at D-MATH, ETH Zurich, joint supervision with Prof. Dr. Patrick Cheridito)
Former members of the research group
 Dr. Christian Beck (former PhD student, joint supervision with Prof. Dr. Norbert Hungerbühler)
 Dr. Sebastian Becker (former PhD student, joint supervision with Prof. Dr. Peter E. Kloeden, 2010–2017, now Postdoc at ETH Zurich)
 Prof. Dr. Sonja Cox (former Postdoc/Fellow, 2012–2014, now Associate Professor at the University of Amsterdam)
 Dr. Simon Eberle (former Postdoc, 2021–2022, now Postdoc at the Basque Center for Applied Mathematics, Bilbao)
 Dr. Fabian Hornung (former Postdoc/Fellow, 2018, now at SAP)
 Prof. Dr. Raphael Kruse
(former Postdoc, 2012–2014, now Associate Professor at the Martin Luther University Halle-Wittenberg)
 Dr. Ryan Kurniawan (former PhD student, 2014–2018, now VP of Quantitative Research at Crédit Agricole CIB)
 Prof. Dr. Ariel Neufeld (former Postdoc/Fellow, joint mentoring with Prof. Dr. Patrick Cheridito, 2018,
now Assistant Professor at NTU Singapore)
 Dr. Primož Pušnik (former PhD Student, 2014–2020, now Quantitative Developer at Vontobel)
 Dr. Diyora Salimova (former PhD student, 2015–2019, now Junior Professor at the University of Freiburg)
 Prof. Dr. Michaela Szoelgyenyi (former Postdoc/Fellow, 2017–2018, now Full Professor at the University of Klagenfurt)
 Dr. Frederic Weber (former Postdoc, 2022, now at Bosch)
 Dr. Timo Welti (former PhD Student, 2015–2020, now Data Analytics Consultant at D ONE Solutions AG)
 Dr. Larisa Yaroslavtseva
(former Postdoc, 2018, now interim professor at the University of Ulm)
Current editorial board affiliations
Preprints
 Ibragimov, S., Jentzen, A., and Riekert, A.,
Convergence to good non-optimal critical points in the training of neural networks: Gradient descent optimization with one random initialization overcomes all bad non-global local minima with high probability.
[arXiv] (2022), 98 pp.
 Gallon, D., Jentzen, A., and Lindner, F.,
Blow up phenomena for gradient descent optimization methods in the training of artificial neural networks.
[arXiv] (2022), 84 pp.
 Beck, C., Becker, S., Cheridito, P., Jentzen, A., and Neufeld, A.,
An efficient Monte Carlo scheme for Zakai equations.
[arXiv] (2022), 41 pp.
 Cheridito, P., Jentzen, A., and Rossmannek, F.,
Gradient descent provably escapes saddle points in the training of shallow ReLU networks.
[arXiv] (2022), 16 pp.
 Eberle, S., Jentzen, A., Riekert, A., and Weiss, G. S.,
Normalized gradient flow optimization in the training of ReLU artificial neural networks.
[arXiv] (2022), 26 pp.
 Jentzen, A. and Kröger, T.,
On bounds for norms of reparameterized ReLU artificial neural network parameters: sums of fractional powers of the Lipschitz norm control the network parameter vector.
[arXiv] (2022), 39 pp.
 Boussange, V., Becker, S., Jentzen, A., Kuckuck, B., and Pellissier, L.,
Deep learning approximations for nonlocal nonlinear PDEs with Neumann boundary conditions.
[arXiv] (2022), 59 pp. Revision requested from Partial Differ. Equ. Appl.
 Ibragimov, S., Jentzen, A., Kröger, T., and Riekert, A.,
On the existence of infinitely many realization functions of non-global local minima in the training of artificial neural networks with ReLU activation.
[arXiv] (2022), 49 pp.
 Becker, S., Jentzen, A., Müller, M. S., and von Wurstemberger, P.,
Learning the random variables in Monte Carlo simulations with stochastic gradient descent: Machine learning for parametric PDEs and financial derivative pricing.
[arXiv] (2022), 70 pp.
 Beneventano, P., Cheridito, P., Graeber, R., Jentzen, A., and Kuckuck, B.,
Deep neural network approximation theory for high-dimensional functions.
[arXiv] (2021), 82 pp.
 Hutzenthaler, M., Jentzen, A., Pohl, K., Riekert, A., and Scarpa, L.,
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions.
[arXiv] (2021), 52 pp.
 Hutzenthaler, M., Jentzen, A., Kuckuck, B., and Padgett, J. L.,
Strong $L^p$-error analysis of nonlinear Monte Carlo approximations for high-dimensional semilinear partial differential equations.
[arXiv] (2021), 42 pp.
 Eberle, S., Jentzen, A., Riekert, A., and Weiss, G. S.,
Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation.
[arXiv] (2021), 30 pp.
 Grohs, P., Ibragimov, S., Jentzen, A., and Koppensteiner, S.,
Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality.
[arXiv] (2021), 53 pp. Revision requested from J. Complexity.
 Beck, C., Hutzenthaler, M., Jentzen, A., and Magnani, E.,
Full history recursive multilevel Picard approximations for ordinary differential equations with expectations.
[arXiv] (2021), 24 pp.
 Jentzen, A. and Kröger, T.,
Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases.
[arXiv] (2021), 38 pp.
 Beneventano, P., Cheridito, P., Jentzen, A., and von Wurstemberger, P.,
High-dimensional approximation spaces of artificial neural networks and applications to partial differential equations.
[arXiv] (2020), 32 pp.
 Beck, C., Becker, S., Cheridito, P., Jentzen, A., and Neufeld, A.,
Deep learning based numerical approximation algorithms for stochastic partial differential equations and high-dimensional nonlinear filtering problems.
[arXiv] (2020), 58 pp.
 Hutzenthaler, M., Jentzen, A., Kruse, T., and Nguyen, T. A.,
Multilevel Picard approximations for high-dimensional semilinear second-order PDEs with Lipschitz nonlinearities.
[arXiv] (2020), 37 pp.
 Beck, C., Jentzen, A., and Kruse, T.,
Nonlinear Monte Carlo methods with polynomial runtime for high-dimensional iterated nested expectations.
[arXiv] (2020), 47 pp.
 Bercher, A., Gonon, L., Jentzen, A., and Salimova, D.,
Weak error analysis for stochastic gradient descent optimization algorithms.
[arXiv] (2020), 123 pp. Revision requested from Lecture Notes in Mathematics.
 Hornung, F., Jentzen, A., and Salimova, D.,
Space-time deep neural network approximations for high-dimensional partial differential equations.
[arXiv] (2020), 52 pp.
 Jentzen, A. and Welti, T.,
Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation.
[arXiv] (2020), 51 pp.
 Beck, C., Gonon, L., and Jentzen, A.,
Overcoming the curse of dimensionality in the numerical approximation of high-dimensional semilinear elliptic partial differential equations.
[arXiv] (2020), 50 pp.
 Giles, M. B., Jentzen, A., and Welti, T.,
Generalised multilevel Picard approximations.
[arXiv] (2019), 61 pp. Revision requested from IMA J. Numer. Anal.
 Hutzenthaler, M., Jentzen, A., Lindner, F., and Pušnik, P.,
Strong convergence rates on the whole probability space for space-time discrete numerical approximation schemes for stochastic Burgers equations.
[arXiv] (2019), 60 pp.
 Beccari, M., Hutzenthaler, M., Jentzen, A., Kurniawan, R., Lindner, F., and Salimova, D.,
Strong and weak divergence of exponential and linear-implicit Euler approximations for stochastic partial differential equations with superlinearly growing nonlinearities.
[arXiv] (2019), 65 pp.
 Cox, S., Jentzen, A., and Lindner, F.,
Weak convergence rates for temporal numerical approximations of stochastic wave equations with multiplicative noise.
[arXiv] (2019), 51 pp.
 Hefter, M., Jentzen, A., and Kurniawan, R.,
Weak convergence rates for numerical approximations of stochastic partial differential equations with nonlinear diffusion coefficients in UMD Banach spaces.
[arXiv] (2016), 51 pp.
 Hutzenthaler, M., Jentzen, A., and Noll, M.,
Strong convergence rates and temporal regularity for Cox–Ingersoll–Ross processes and Bessel processes with accessible boundaries.
[arXiv] (2014), 32 pp.
Publications and accepted research articles
 Jentzen, A. and Riekert, A.,
Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation.
J. Math. Anal. Appl. 517 (2023), no. 2, 126601, 43 pp. [arXiv]
 Beck, C., Hutzenthaler, M., Jentzen, A., and Kuckuck, B.,
An overview on deep learning-based approximation methods for partial differential equations.
Early access version available online. Discrete Contin. Dyn. Syst. Ser. B (2022), 50 pp. [arXiv]
 Jentzen, A. and Riekert, A.,
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions.
Z. Angew. Math. Phys. 73 (2022), Paper no. 188, 30 pp. [arXiv]
 Jentzen, A. and Riekert, A.,
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions.
J. Mach. Learn. Res. 23 (2022), 260, 50 pp. [arXiv]
 Hutzenthaler, M., Jentzen, A., Kruse, T., and Nguyen, T. A.,
Overcoming the curse of dimensionality in the numerical approximation of backward stochastic differential equations.
Early access version available online. J. Numer. Math. (2022). [arXiv]
 Cheridito, P., Jentzen, A., and Rossmannek, F.,
Landscape analysis for shallow neural networks: complete classification of critical points for affine target functions.
J. Nonlinear Sci. 32 (2022), 64, 45 pp. [arXiv]
 Jentzen, A., Kuckuck, B., MüllerGronbach, T., and Yaroslavtseva, L.,
Counterexamples to local Lipschitz and local Hölder continuity with respect to the initial values for additive noise driven SDEs with smooth drift coefficient functions with at most polynomially growing derivatives.
Discrete Contin. Dyn. Syst. Ser. B 27 (2022), no. 7, 3707–3724. [arXiv]
 Cheridito, P., Jentzen, A., and Rossmannek, F.,
Efficient approximation of highdimensional functions with neural networks.
IEEE Trans. Neural Netw. Learn. Syst. 33 (2022), no. 7, 3079–3093. [arXiv]
 Jentzen, A. and Riekert, A.,
On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks.
J. Mach. Learn. 1 (2022), no. 2, 141–246. [arXiv]
 Grohs, P., Hornung, F., Jentzen, A., and Zimmermann, P.,
Space-time error estimates for deep neural network approximations for differential equations.
[arXiv] (2019), 86 pp. Accepted in Adv. Comput. Math.
 Cheridito, P., Jentzen, A., Riekert, A., and Rossmannek, F.,
A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions.
J. Complexity 72 (2022), Paper no. 101646, 26 pp. [arXiv]
 Grohs, P., Jentzen, A., and Salimova, D.,
Deep neural network approximations for solutions of PDEs based on Monte Carlo algorithms.
Partial Differ. Equ. Appl. 3 (2022), no. 4, 45, 41 pp. [arXiv]
 Becker, S., Gess, B., Jentzen, A., and Kloeden, P. E.,
Strong convergence rates for explicit space-time discrete numerical approximations of stochastic Allen–Cahn equations.
Early access version available online. Stoch. Partial Differ. Equ. Anal. Comput. (2022), 58 pp. [arXiv]
 Beck, C., Jentzen, A., and Kuckuck, B.,
Full error analysis for the training of deep neural networks.
Early access version available online. Infin. Dimens. Anal. Quantum Probab. Relat. Top. (2022), Paper no. 2150020, 77 pp. [arXiv]
 Elbrächter, D., Grohs, P., Jentzen, A., and Schwab, C.,
DNN expression rate analysis of high-dimensional PDEs: application to option pricing.
Constr. Approx. 55 (2022), 3–71. [arXiv]
 E, W., Han, J., and Jentzen, A.,
Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning.
Nonlinearity 35 (2022), no. 1, 278–310. [arXiv]
 Jacobe de Naurois, L., Jentzen, A., and Welti, T.,
Weak convergence rates for spatial spectral Galerkin approximations of semilinear stochastic wave equations with multiplicative noise.
Appl. Math. Optim. 84 (2021), suppl. 2, S1187–S1217. [arXiv]
 Jentzen, A., Salimova, D., and Welti, T.,
A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients.
Commun. Math. Sci. 19 (2021), no. 5, 1167–1205. [arXiv]
 E, W., Hutzenthaler, M., Jentzen, A., and Kruse, T.,
Multilevel Picard iterations for solving smooth semilinear parabolic heat equations.
Partial Differ. Equ. Appl. 2 (2021), no. 6, Paper no. 80, 31 pp. [arXiv]
 Jentzen, A., Kuckuck, B., MüllerGronbach, T., and Yaroslavtseva, L.,
On the strong regularity of degenerate additive noise driven stochastic differential equations with respect to their initial values.
J. Math. Anal. Appl. 502 (2021), no. 2, Paper no. 125240, 23 pp. [arXiv]
 Beck, C., Becker, S., Cheridito, P., Jentzen, A., and Neufeld, A.,
Deep splitting method for parabolic PDEs.
SIAM J. Sci. Comput. 43 (2021), no. 5, A3135–A3154. [arXiv]
 Beck, C., Gonon, L., Hutzenthaler, M., and Jentzen, A.,
On existence and uniqueness properties for solutions of stochastic fixed point equations.
Discrete Contin. Dyn. Syst. Ser. B 26 (2021), no. 9, 4927–4962. [arXiv]
 Jentzen, A., Lindner, F., and Pušnik, P.,
Spatial Sobolev regularity for stochastic Burgers equations with additive trace class noise.
Nonlinear Anal. 210 (2021), Paper no. 112310, 29 pp. [arXiv]
 Gonon, L., Grohs, P., Jentzen, A., Kofler, D., and Šiška, D.,
Uniform error estimates for artificial neural network approximations for heat equations.
Early access version available online. IMA J. Numer. Anal. (2021), 64 pp. [arXiv]
 Beck, C., Becker, S., Grohs, P., Jaafari, N., and Jentzen, A.,
Solving the Kolmogorov PDE by means of deep learning.
J. Sci. Comput. 88 (2021), no. 3, Paper no. 73, 28 pp. [arXiv]
 Hutzenthaler, M., Jentzen, A., and Kruse, T.,
Overcoming the curse of dimensionality in the numerical approximation of parabolic partial differential equations with gradient-dependent nonlinearities.
Early access version available online. Found. Comput. Math. (2021), 62 pp. [arXiv]
 Beck, C., Hutzenthaler, M., and Jentzen, A.,
On nonlinear Feynman–Kac formulas for viscosity solutions of semilinear parabolic partial differential equations.
Stoch. Dyn. 21 (2021), no. 8, Paper no. 2150048, 68 pp. [arXiv]
 Becker, S., Cheridito, P., Jentzen, A., and Welti, T.,
Solving highdimensional optimal stopping problems using deep learning.
European J. Appl. Math. 32 (2021), no. 3, 470–514. [arXiv]
 Cheridito, P., Jentzen, A., and Rossmannek, F.,
Nonconvergence of stochastic gradient descent in the training of deep neural networks.
J. Complexity 64 (2021), Paper no. 101540, 10 pp. [arXiv]
 Jentzen, A. and Kurniawan, R.,
Weak convergence rates for Eulertype approximations of semilinear stochastic evolution equations with nonlinear diffusion coefficients.
Found. Comput. Math. 21 (2021), no. 2, 445–536. [arXiv]
 Andersson, A., Jentzen, A., and Kurniawan, R.,
Existence, uniqueness, and regularity for stochastic evolution equations with irregular initial values.
J. Math. Anal. Appl. 495 (2021), no. 1, Paper no. 124558, 33 pp. [arXiv]
 Cox, S., Hutzenthaler, M., Jentzen, A., van Neerven, J., and Welti, T.,
Convergence in Hölder norms with applications to Monte Carlo methods in infinite dimensions.
IMA J. Numer. Anal. 41 (2021), no. 1, 493–548. [arXiv]
 Jentzen, A., Kuckuck, B., Neufeld, A., and von Wurstemberger, P.,
Strong error analysis for stochastic gradient descent optimization algorithms.
IMA J. Numer. Anal. 41 (2021), no. 1, 455–492. [arXiv]
 Hutzenthaler, M., Jentzen, A., Kruse, T., Nguyen, T. A., and von Wurstemberger, P.,
Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations.
Proc. A. 476 (2020), no. 2244, 20190630, 25 pp. [arXiv]
 Beck, C., Hornung, F., Hutzenthaler, M., Jentzen, A., and Kruse, T.,
Overcoming the curse of dimensionality in the numerical approximation of Allen–Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations.
J. Numer. Math. 28 (2020), no. 4, 197–222. [arXiv]
 Jentzen, A. and Riekert, A.,
Strong overall error analysis for the training of artificial neural networks via random initializations.
[arXiv] (2020), 40 pp. Accepted in Commun. Math. Stat.
 Jentzen, A., Lindner, F., and Pušnik, P.,
Exponential moment bounds and strong convergence rates for tamed-truncated numerical approximations of stochastic convolutions.
Numer. Algorithms 85 (2020), no. 4, 1447–1473. [arXiv]
 Becker, S., Gess, B., Jentzen, A., and Kloeden, P. E.,
Lower and upper bounds for strong approximation errors for numerical approximations of stochastic heat equations.
BIT 60 (2020), no. 4, 1057–1073. [arXiv]
 Becker, S., Braunwarth, R., Hutzenthaler, M., Jentzen, A., and von Wurstemberger, P.,
Numerical simulations for full history recursive multilevel Picard approximations for systems of highdimensional partial differential equations.
Commun. Comput. Phys. 28 (2020), no. 5, 2109–2138. [arXiv]
 Hutzenthaler, M., Jentzen, A., and von Wurstemberger, P.,
Overcoming the curse of dimensionality in the approximative pricing of financial derivatives with default risks.
Electron. J. Probab. 25 (2020), Paper no. 101, 73 pp. [arXiv]
 Berner, J., Grohs, P., and Jentzen, A.,
Analysis of the generalization error: empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black–Scholes partial differential equations.
SIAM J. Math. Data Sci. 2 (2020), no. 3, 631–657. [arXiv]
 Becker, S., Cheridito, P., and Jentzen, A.,
Pricing and hedging Americanstyle options with deep learning.
J. Risk Financial Manag. 13 (2020), no. 7, Paper no. 158, 12 pp. [arXiv]
 Fehrman, B., Gess, B., and Jentzen, A.,
Convergence rates for the stochastic gradient descent method for non-convex objective functions.
J. Mach. Learn. Res. 21 (2020), Paper no. 136, 48 pp. [arXiv]
 Hutzenthaler, M., Jentzen, A., Kruse, T., and Nguyen, T. A.,
A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations.
Partial Differ. Equ. Appl. 1 (2020), no. 2, Paper no. 10, 34 pp. [arXiv]
 Jentzen, A. and Pušnik, P.,
Strong convergence rates for an explicit numerical approximation method for stochastic evolution equations with non-globally Lipschitz continuous nonlinearities.
IMA J. Numer. Anal. 40 (2020), no. 2, 1005–1050. [arXiv]
 Jentzen, A. and von Wurstemberger, P.,
Lower error bounds for the stochastic gradient descent optimization algorithm: sharp convergence rates for slowly and fast decaying learning rates.
J. Complexity 57 (2020), 101438, 16 pp. [arXiv]
 Hutzenthaler, M. and Jentzen, A.,
On a perturbation theory and on strong convergence rates for stochastic ordinary and partial differential equations with non-globally monotone coefficients.
Ann. Probab. 48 (2020), no. 1, 53–93. [arXiv]
 Beck, C., E, W., and Jentzen, A.,
Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations.
J. Nonlinear Sci. 29 (2019), no. 4, 1563–1619. [arXiv]
 Jentzen, A., Lindner, F., and Pušnik, P.,
On the Alekseev–Gröbner formula in Banach spaces.
Discrete Contin. Dyn. Syst. Ser. B 24 (2019), no. 8, 4475–4511. [arXiv]
 E, W., Hutzenthaler, M., Jentzen, A., and Kruse, T.,
On multilevel Picard numerical approximations for high-dimensional nonlinear parabolic partial differential equations and high-dimensional nonlinear backward stochastic differential equations.
J. Sci. Comput. 79 (2019), no. 3, 1534–1571. [arXiv]
 Da Prato, G., Jentzen, A., and Röckner, M.,
A mild Itô formula for SPDEs.
Trans. Amer. Math. Soc. 372 (2019), no. 6, 3755–3807. [arXiv]
 Berner, J., Elbrächter, D., Grohs, P., and Jentzen, A.,
Towards a regularity theory for ReLU networks – chain rule and global error estimates.
In 13th International Conference on Sampling Theory and Applications (SampTA), 2019, 5 pp. [arXiv]
 Andersson, A., Hefter, M., Jentzen, A., and Kurniawan, R.,
Regularity properties for solutions of infinite dimensional Kolmogorov equations in Hilbert spaces.
Potential Anal. 50 (2019), no. 3, 347–379. [arXiv]
 Becker, S., Cheridito, P., and Jentzen, A.,
Deep optimal stopping.
J. Mach. Learn. Res. 20 (2019), Paper no. 74, 25 pp. [arXiv]
 Conus, D., Jentzen, A., and Kurniawan, R.,
Weak convergence rates of spectral Galerkin approximations for SPDEs with nonlinear diffusion coefficients.
Ann. Appl. Probab. 29 (2019), no. 2, 653–716. [arXiv]
 Hutzenthaler, M., Jentzen, A., and Salimova, D.,
Strong convergence of full-discrete nonlinearity-truncated accelerated exponential Euler-type approximations for stochastic Kuramoto–Sivashinsky equations.
Commun. Math. Sci. 16 (2018), no. 6, 1489–1529. [arXiv]
 Hefter, M. and Jentzen, A.,
On arbitrarily slow convergence rates for strong numerical approximations of Cox–Ingersoll–Ross processes and squared Bessel processes.
Finance Stoch. 23 (2019), no. 1, 139–172. [arXiv]
 Jentzen, A., Salimova, D., and Welti, T.,
Strong convergence for explicit space-time discrete numerical approximation methods for stochastic Burgers equations.
J. Math. Anal. Appl. 469 (2019), no. 2, 661–704. [arXiv]
 Becker, S. and Jentzen, A.,
Strong convergence rates for nonlinearity-truncated Euler-type approximations of stochastic Ginzburg–Landau equations.
Stochastic Process. Appl. 129 (2019), no. 1, 28–69. [arXiv]
 Hudde, A., Hutzenthaler, M., Jentzen, A., and Mazzonetto, S.,
On the Itô–Alekseev–Gröbner formula for stochastic differential equations.
[arXiv] (2018), 29 pp. Accepted in Ann. Inst. Henri Poincaré Probab. Stat.
 Jentzen, A. and Pušnik, P.,
Exponential moments for numerical approximations of stochastic partial differential equations.
Stoch. Partial Differ. Equ. Anal. Comput. 6 (2018), no. 4, 565–617. [arXiv]
 Grohs, P., Hornung, F., Jentzen, A., and von Wurstemberger, P.,
A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black–Scholes partial differential equations.
[arXiv] (2018), 124 pp. Accepted in Mem. Amer. Math. Soc.
 Han, J., Jentzen, A., and E, W.,
Solving highdimensional partial differential equations using deep learning.
Proc. Natl. Acad. Sci. USA 115 (2018), no. 34, 8505–8510. [arXiv]
 Cox, S., Jentzen, A., Kurniawan, R., and Pušnik, P.,
On the mild Itô formula in Banach spaces.
Discrete Contin. Dyn. Syst. Ser. B 23 (2018), no. 6, 2217–2243. [arXiv]
 Jacobe de Naurois, L., Jentzen, A., and Welti, T.,
Lower bounds for weak approximation errors for spatial spectral Galerkin approximations of stochastic wave equations.
In Eberle, A., Grothaus, M., Hoh, W., Kassmann, M., Stannat, W., and Trutnau, G. (eds.) Stochastic partial differential equations and related fields, 237–248, Springer Proc. Math. Stat., 229, Springer, Cham, 2018. [arXiv]
 Hutzenthaler, M., Jentzen, A., and Wang, X.,
Exponential integrability properties of numerical approximation processes for nonlinear stochastic differential equations.
Math. Comp. 87 (2018), no. 311, 1353–1413. [arXiv]
 E, W., Han, J., and Jentzen, A.,
Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations.
Commun. Math. Stat. 5 (2017), no. 4, 349–380. [arXiv]
 Gerencsér, M., Jentzen, A., and Salimova, D.,
On stochastic differential equations with arbitrarily slow convergence rates for strong approximation in two space dimensions.
Proc. A. 473 (2017), no. 2207, 20170104, 16 pp. [arXiv]
 Andersson, A., Jentzen, A., Kurniawan, R., and Welti, T.,
On the differentiability of solutions of stochastic evolution equations with respect to their initial values.
Nonlinear Anal. 162 (2017), 128–161. [arXiv]
 Hefter, M., Jentzen, A., and Kurniawan, R.,
Counterexamples to regularities for the derivative processes associated to stochastic evolution equations.
[arXiv] (2017), 26 pp. Accepted in Stoch. Partial Differ. Equ. Anal. Comput.
 E, W., Jentzen, A., and Shen, H.,
Renormalized powers of Ornstein–Uhlenbeck processes and well-posedness of stochastic Ginzburg–Landau equations.
Nonlinear Anal. 142 (2016), 152–193. [arXiv]
 Jentzen, A., MüllerGronbach, T., and Yaroslavtseva, L.,
On stochastic differential equations with arbitrary slow convergence rates for strong approximation.
Commun. Math. Sci. 14 (2016), no. 6, 1477–1500. [arXiv]
 Becker, S., Jentzen, A., and Kloeden, P. E.,
An exponential Wagner–Platen type scheme for SPDEs.
SIAM J. Numer. Anal. 54 (2016), no. 4, 2389–2426. [arXiv]
 Hairer, M., Hutzenthaler, M., and Jentzen, A.,
Loss of regularity for Kolmogorov equations.
Ann. Probab. 43 (2015), no. 2, 468–527. [arXiv]
 Hutzenthaler, M. and Jentzen, A.,
Numerical approximations of stochastic differential equations with non-globally Lipschitz continuous coefficients.
Mem. Amer. Math. Soc. 236 (2015), no. 1112, v+99 pp. [arXiv]
 Hutzenthaler, M., Jentzen, A., and Kloeden, P. E.,
Divergence of the multilevel Monte Carlo Euler method for nonlinear stochastic differential equations.
Ann. Appl. Probab. 23 (2013), no. 5, 1913–1966. [arXiv]
 Cox, S., Hutzenthaler, M., and Jentzen, A.,
Local Lipschitz continuity in the initial value and strong completeness for nonlinear stochastic differential equations.
[arXiv] (2014), 90 pp. Accepted in Mem. Amer. Math. Soc.
 Blömker, D. and Jentzen, A.,
Galerkin approximations for the stochastic Burgers equation.
SIAM J. Numer. Anal. 51 (2013), no. 1, 694–715. [arXiv]
 Hutzenthaler, M., Jentzen, A., and Kloeden, P. E.,
Strong convergence of an explicit numerical method for SDEs with non-globally Lipschitz continuous coefficients.
Ann. Appl. Probab. 22 (2012), no. 4, 1611–1641. (Awarded a second prize of the 15th Leslie Fox Prize in Numerical Analysis (Manchester, UK, June 2011).) [arXiv]
 Jentzen, A. and Röckner, M.,
A Milstein scheme for SPDEs.
Found. Comput. Math. 15 (2015), no. 2, 313–362. [arXiv]
 Jentzen, A. and Röckner, M.,
Regularity analysis for stochastic partial differential equations with nonlinear multiplicative trace class noise.
J. Differential Equations 252 (2012), no. 1, 114–136. [arXiv]
 Hutzenthaler, M. and Jentzen, A.,
Convergence of the stochastic Euler scheme for locally Lipschitz coefficients.
Found. Comput. Math. 11 (2011), no. 6, 657–706. [arXiv]
 Hutzenthaler, M., Jentzen, A., and Kloeden, P. E.,
Strong and weak divergence in finite time of Euler's method for stochastic differential equations with non-globally Lipschitz continuous coefficients.
Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 467 (2011), no. 2130, 1563–1576. [arXiv]
 Jentzen, A., Kloeden, P. E., and Winkel, G.,
Efficient simulation of nonlinear parabolic SPDEs with additive noise.
Ann. Appl. Probab. 21 (2011), no. 3, 908–950. [arXiv]
 Jentzen, A.,
Higher order pathwise numerical approximations of SPDEs with additive noise.
SIAM J. Numer. Anal. 49 (2011), no. 2, 642–667.
 Jentzen, A. and Kloeden, P. E.,
Taylor approximations for stochastic partial differential equations.
CBMS-NSF Regional Conference Series in Applied Mathematics, 83, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011, xiv+211 pp.
 Jentzen, A.,
Taylor expansions of solutions of stochastic partial differential equations.
Discrete Contin. Dyn. Syst. Ser. B 14 (2010), no. 2, 515–557. [arXiv]
 Jentzen, A. and Kloeden, P. E.,
Taylor expansions of solutions of stochastic partial differential equations with additive noise.
Ann. Probab. 38 (2010), no. 2, 532–569. [arXiv]
 Jentzen, A. and Kloeden, P. E.,
A unified existence and uniqueness theorem for stochastic evolution equations.
Bull. Aust. Math. Soc. 81 (2010), no. 1, 33–46.
 Jentzen, A., Leber, F., Schneisgen, D., Berger, A., and Siegmund, S.,
An improved maximum allowable transfer interval for $L^p$-stability of networked control systems.
IEEE Trans. Automat. Control 55 (2010), no. 1, 179–184.
 Jentzen, A. and Kloeden, P. E.,
The numerical approximation of stochastic partial differential equations.
Milan J. Math. 77 (2009), 205–244.
 Jentzen, A.,
Pathwise numerical approximation of SPDEs with additive noise under non-global Lipschitz coefficients.
Potential Anal. 31 (2009), no. 4, 375–404.
 Jentzen, A. and Kloeden, P. E.,
Pathwise Taylor schemes for random ordinary differential equations.
BIT 49 (2009), no. 1, 113–140.
 Jentzen, A., Kloeden, P. E., and Neuenkirch, A.,
Pathwise approximation of stochastic differential equations on domains: higher order convergence rates without global Lipschitz coefficients.
Numer. Math. 112 (2009), no. 1, 41–64.
 Jentzen, A. and Kloeden, P. E.,
Overcoming the order barrier in the numerical approximation of stochastic partial differential equations with additive space-time noise.
Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 465 (2009), no. 2102, 649–667.
 Jentzen, A. and Neuenkirch, A.,
A random Euler scheme for Carathéodory differential equations.
J. Comput. Appl. Math. 224 (2009), no. 1, 346–359.
 Jentzen, A., Kloeden, P. E., and Neuenkirch, A.,
Pathwise convergence of numerical schemes for random and stochastic differential equations.
In Cucker, F., Pinkus, A., and Todd, M. J. (eds.) Foundations of Computational Mathematics, Hong Kong 2008, 140–161, London Math. Soc. Lecture Note Ser., 363, Cambridge Univ. Press, Cambridge, 2009.
 Kloeden, P. E. and Jentzen, A.,
Pathwise convergent higher order numerical schemes for random ordinary differential equations.
Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 463 (2007), no. 2087, 2929–2944.
Theses
 Jentzen, A., Taylor Expansions for Stochastic Partial
Differential Equations.
PhD thesis (2009), Frankfurt University, Germany.
 Jentzen, A., Numerische Verfahren hoher Ordnung
für zufällige Differentialgleichungen (Higher-order numerical methods for random differential equations).
Diploma thesis (2007), Frankfurt University, Germany.
