<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2025-10-21T09:53:58+00:00</updated><id>/feed.xml</id><title type="html">Tim Roith</title><subtitle>Tim Roith</subtitle><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><entry><title type="html">Bridging AI and Theory: Workshop on the Mathematics of Transformers at DESY</title><link href="/2025/10/15/Transformer.html" rel="alternate" type="text/html" title="Bridging AI and Theory: Workshop on the Mathematics of Transformers at DESY" /><published>2025-10-15T00:00:00+00:00</published><updated>2025-10-15T00:00:00+00:00</updated><id>/2025/10/15/Transformer</id><content type="html" xml:base="/2025/10/15/Transformer.html"><![CDATA[<p>At a recent workshop hosted by DESY, around 30 experts from across Europe came together to explore the theoretical foundations of transformers, the core architecture behind modern large language models. Organized by researchers from DESY, Helmholtz Imaging, and the University of Oxford, the event featured inspiring talks and a lively world café discussion.
The positive response shows growing momentum in this young field. Stay tuned for a potential second edition!</p>

<p>🔗 <a href="https://bit.ly/mathematics-of-transformers">https://bit.ly/mathematics-of-transformers</a></p>

<p>The workshop was jointly organized by Martin Burger, Samira Kabri, Tim Roith, Lukas Weigand (Computational Imaging at DESY and Helmholtz Imaging) and Konstantin Riedl (University of Oxford). It was a satellite event to the Conference on Mathematics of Machine Learning, also co-organized by Martin Burger.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><summary type="html"><![CDATA[At a recent workshop hosted by DESY, around 30 experts from across Europe came together to explore the theoretical foundations of transformers, the core architecture behind modern large language models. Organized by researchers from DESY, Helmholtz Imaging, and the University of Oxford, the event featured inspiring talks and a lively world café discussion. The positive response shows growing momentum in this young field. Stay tuned for a potential second edition!]]></summary></entry><entry><title type="html">Position-Blind Ptychography: Viability of image reconstruction via data-driven variational inference</title><link href="/paper/2025/09/28/Ptycho.html" rel="alternate" type="text/html" title="Position-Blind Ptychography: Viability of image reconstruction via data-driven variational inference" /><published>2025-09-28T00:00:00+00:00</published><updated>2025-09-28T00:00:00+00:00</updated><id>/paper/2025/09/28/Ptycho</id><content type="html" xml:base="/paper/2025/09/28/Ptycho.html"><![CDATA[<p>We put a preprint on the arXiv dealing with position-blind ptychography. Compared to the standard setup, we ask how well the reconstruction problem can be solved when the scan positions are completely unknown. Furthermore, we explore how to leverage a powerful data-driven prior with a variational inference approach. This is joint work with Simon Welker, Lorenz Kuger, Berthy Feng, Martin Burger, Timo Gerkmann, and Henry Chapman. Part of this project was done during my DAAD-funded research stay at Caltech in October 2024.</p>

<p>The preprint is available on the arXiv: <a href="https://arxiv.org/abs/2509.25269">Position-Blind Ptychography: Viability of image reconstruction via data-driven variational inference</a>.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><category term="paper" /><summary type="html"><![CDATA[We put a preprint on the arXiv dealing with position-blind ptychography. Compared to the standard setup, we ask how well the reconstruction problem can be solved when the scan positions are completely unknown. Furthermore, we explore how to leverage a powerful data-driven prior with a variational inference approach. This is joint work with Simon Welker, Lorenz Kuger, Berthy Feng, Martin Burger, Timo Gerkmann, and Henry Chapman. Part of this project was done during my DAAD-funded research stay at Caltech in October 2024.]]></summary></entry><entry><title type="html">Introduction to Regularization and Learning Methods for Inverse Problems</title><link href="/paper/2025/08/27/LIP.html" rel="alternate" type="text/html" title="Introduction to Regularization and Learning Methods for Inverse Problems" /><published>2025-08-27T00:00:00+00:00</published><updated>2025-08-27T00:00:00+00:00</updated><id>/paper/2025/08/27/LIP</id><content type="html" xml:base="/paper/2025/08/27/LIP.html"><![CDATA[<p>Together with Danielle Bednarski we uploaded our lecture notes <a href="https://arxiv.org/abs/2508.18178">Introduction to Regularization and Learning Methods for Inverse Problems</a> to arXiv. The notes are based on a course we taught at the University of Hamburg in the winter term 2024/2025.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><category term="paper" /><summary type="html"><![CDATA[Together with Danielle Bednarski we uploaded our lecture notes Introduction to Regularization and Learning Methods for Inverse Problems to arXiv. 
The notes are based on a course we taught at the University of Hamburg in the winter term 2024/2025.]]></summary></entry><entry><title type="html">Consensus-based optimization for closed-box adversarial attacks and a connection to evolution strategies</title><link href="/paper/2025/06/30/AdvCBO.html" rel="alternate" type="text/html" title="Consensus-based optimization for closed-box adversarial attacks and a connection to evolution strategies" /><published>2025-06-30T00:00:00+00:00</published><updated>2025-06-30T00:00:00+00:00</updated><id>/paper/2025/06/30/AdvCBO</id><content type="html" xml:base="/paper/2025/06/30/AdvCBO.html"><![CDATA[<p>Together with Leon Bungert and Philipp Wacker we uploaded a paper to arXiv, which shows how CBO can be used to create closed-box adversarial attacks. Beyond that we establish an interesting connection between CBO and evolution strategies. The paper can be found <a href="https://arxiv.org/pdf/2506.24048">here</a>.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><category term="paper" /><summary type="html"><![CDATA[Together with Leon Bungert and Philipp Wacker we uploaded a paper to arXiv, which shows how CBO can be used to create closed-box adversarial attacks. Beyond that we establish an interesting connection between CBO and evolution strategies. 
The paper can be found here.]]></summary></entry><entry><title type="html">Analysis of mean-field models arising from self-attention dynamics in transformer architectures with layer normalization</title><link href="/paper/2025/06/05/Transformers.html" rel="alternate" type="text/html" title="Analysis of mean-field models arising from self-attention dynamics in transformer architectures with layer normalization" /><published>2025-06-05T00:00:00+00:00</published><updated>2025-06-05T00:00:00+00:00</updated><id>/paper/2025/06/05/Transformers</id><content type="html" xml:base="/paper/2025/06/05/Transformers.html"><![CDATA[<p>I’m happy to share that our paper <a href="https://royalsocietypublishing.org/toc/rsta/2025/383/2298">Analysis of mean-field models arising from self-attention dynamics in transformer architectures with layer normalization</a> has been published in the Philosophical Transactions of the Royal Society A.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><category term="paper" /><summary type="html"><![CDATA[I’m happy to share that our paper Analysis of mean-field models arising from self-attention dynamics in transformer architectures with layer normalization has been published in the Philosophical Transactions of the Royal Society A.]]></summary></entry><entry><title type="html">Dr.-Klaus-Körper Prize</title><link href="/2025/04/07/KKP.html" rel="alternate" type="text/html" title="Dr.-Klaus-Körper Prize" /><published>2025-04-07T00:00:00+00:00</published><updated>2025-04-07T00:00:00+00:00</updated><id>/2025/04/07/KKP</id><content type="html" xml:base="/2025/04/07/KKP.html"><![CDATA[<p>I am very happy to share that I was awarded the Dr.-Klaus-Körper Prize for my PhD thesis today. 
The prize is awarded by the <a href="https://www.gamm.de/">GAMM</a> at the <a href="https://jahrestagung.gamm.org/annual-meeting-2025/95th-annual-meeting-2/">annual meeting in Poznań</a>.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><summary type="html"><![CDATA[I am very happy to share that I was awarded the Dr.-Klaus-Körper Prize for my PhD thesis today. The prize is awarded by the GAMM at the annual meeting in Poznań.]]></summary></entry><entry><title type="html">MirrorCBO: A consensus-based optimization method in the spirit of mirror descent</title><link href="/paper/2025/01/21/MirrorCBO.html" rel="alternate" type="text/html" title="MirrorCBO: A consensus-based optimization method in the spirit of mirror descent" /><published>2025-01-21T00:00:00+00:00</published><updated>2025-01-21T00:00:00+00:00</updated><id>/paper/2025/01/21/MirrorCBO</id><content type="html" xml:base="/paper/2025/01/21/MirrorCBO.html"><![CDATA[<p>Together with Leon Bungert, Franca Hoffmann and Doh Yeon Kim we posted a new preprint on <a href="https://arxiv.org/abs/2501.12189">MirrorCBO: A consensus-based optimization method in the spirit of mirror descent</a>.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><category term="paper" /><summary type="html"><![CDATA[Together with Leon Bungert, Franca Hoffmann and Doh Yeon Kim we posted a new preprint on MirrorCBO: A consensus-based optimization method in the spirit of mirror descent.]]></summary></entry><entry><title type="html">Analysis of mean-field models arising from self-attention dynamics in transformer architectures with layer normalization</title><link href="/paper/2025/01/06/Transformers.html" rel="alternate" type="text/html" title="Analysis of mean-field models arising from self-attention dynamics in transformer architectures with layer normalization" 
/><published>2025-01-06T00:00:00+00:00</published><updated>2025-01-06T00:00:00+00:00</updated><id>/paper/2025/01/06/Transformers</id><content type="html" xml:base="/paper/2025/01/06/Transformers.html"><![CDATA[<p>Together with Martin Burger, Samira Kabri, Yury Korolev and Lukas Weigand we posted a new preprint called <a href="https://arxiv.org/abs/2501.03096">Analysis of mean-field models arising from self-attention dynamics in transformer architectures with layer normalization</a>.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><category term="paper" /><summary type="html"><![CDATA[Together with Martin Burger, Samira Kabri, Yury Korolev and Lukas Weigand we posted a new preprint called Analysis of mean-field models arising from self-attention dynamics in transformer architectures with layer normalization.]]></summary></entry><entry><title type="html">COMFORT</title><link href="/2024/10/18/COMFORT.html" rel="alternate" type="text/html" title="COMFORT" /><published>2024-10-18T00:00:00+00:00</published><updated>2024-10-18T00:00:00+00:00</updated><id>/2024/10/18/COMFORT</id><content type="html" xml:base="/2024/10/18/COMFORT.html"><![CDATA[<p>Our project <strong>COMFORT: Compression Methods for Robustness and Transferability</strong> is being funded by the BMBF!</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><summary type="html"><![CDATA[Our project COMFORT: Compression Methods for Robustness and Transferability is being funded by the BMBF!]]></summary></entry><entry><title type="html">CMX Lunch Seminar</title><link href="/presentation/2024/10/15/CMX.html" rel="alternate" type="text/html" title="CMX Lunch Seminar" /><published>2024-10-15T00:00:00+00:00</published><updated>2024-10-15T00:00:00+00:00</updated><id>/presentation/2024/10/15/CMX</id><content type="html" xml:base="/presentation/2024/10/15/CMX.html"><![CDATA[<p>I’m very excited to give a talk in the CMX Lunch 
seminar at Caltech today. I will present my work on the mathematics of adversarial robustness; you can find the abstract <a href="https://www.caltech.edu/campus-life-events/calendar/cmx-lunch-seminar-44">here</a>.</p>]]></content><author><name>Tim Roith</name><email> tim{dot}roith{at}desy{dot}de</email></author><category term="presentation" /><summary type="html"><![CDATA[I’m very excited to give a talk in the CMX Lunch seminar at Caltech today. I will present my work on the mathematics of adversarial robustness; you can find the abstract here.]]></summary></entry></feed>