Welcome to the Optimization & Decision Intelligence Group. We are part of the Institute for Machine Learning in the Department of Computer Science at ETH Zurich.
We are looking for talented graduate students and postdocs with strong mathematical backgrounds and an interest in optimization and machine learning. Check our Current Openings.
We are organizing two exciting workshops that will take place soon. Stay tuned!
SwissMAP Workshop on Computational Optimization Meets Gradient Flows and Optimal Transport: May 24-29, Les Diablerets, Switzerland.
https://swissmaprs.ch/events/computational-optimization-meets-gradient-flows-and-optimal-transport/
Swiss Optimization Symposium: Aug 23-27, 2026, Ascona, Switzerland.
https://swiss-opt.github.io/speakers/
We will be presenting the following work at ICLR 2026 in Brazil.
Our paper on A Hessian-aware stochastic differential equation for modelling SGD has been accepted to Mathematical Programming, 2026.
Congratulations to Batu Yardim and Jiawei Huang for successfully defending their PhD theses. Best wishes for your next journeys in industry!
We presented the following papers at NeurIPS 2025.
We co-organized the INI program on Bridging Stochastic Control and Reinforcement Learning at the Alan Turing Institute, London, UK.
We co-organized the Symposium on Mathematical Foundations of Trustworthy Learning in Ascona, Switzerland.
We co-organized the inaugural Swiss CLOCK Summit, Engelberg, Switzerland. https://www.swissclocksummit.com/
Our paper on Exploiting Approximate Symmetry for Efficient Multi-Agent Reinforcement Learning was a Best Paper Finalist at the 7th Annual Learning for Dynamics & Control Conference (L4DC), 2025. Congratulations to Batu!
Our paper on Efficient Algorithms for A Class of Stochastic Hidden Convex Optimization and Its Applications in Network Revenue Management has been accepted to Operations Research, 2024.
Several papers accepted to ICML 2024. Congrats to all!
Our paper on Convergence of Entropy-Regularized Natural Policy Gradient with Linear Function Approximation has been accepted to the SIAM Journal on Optimization, 2024, and our paper on Momentum-Based Policy Gradient with Second-Order Information has been accepted to Transactions on Machine Learning Research (TMLR), 2024.
Our paper on Finite-Time Analysis of Natural Actor-Critic for POMDPs has been accepted to the SIAM Journal on Mathematics of Data Science (SIMODS), 2024, and our paper on Finite-Time Analysis of Entropy-Regularized Neural Natural Actor-Critic Algorithm has been accepted to Transactions on Machine Learning Research (TMLR), 2024.
Several papers accepted to AISTATS 2024. Congrats to all!
Congrats to Dr. Junchi Yang on his next postdoc position at Argonne National Laboratory and to Dr. Giorgia Ramponi on her new position as Assistant Professor at the University of Zurich. We wish you great success in your new journeys!
Our paper on Automated Design of Affine Maximizer Mechanisms in Dynamic Settings has been accepted to AAAI 2024, and two papers have been accepted to the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024). Congrats to Vinzenz, Pragnya, Giorgia, and Batu!
We have been awarded an SNSF Starting Grant 2023! Thanks to the SNSF for the generous support of our research!
Several papers have been accepted to the NeurIPS 2023 main conference and workshops. Stay tuned, and see you in New Orleans in December!
Six papers were accepted and presented at the 16th European Workshop on Reinforcement Learning (EWRL 2023) in Brussels, Belgium!
Congrats to Tanmay Goyal for receiving the ABB Research Prize for an outstanding Master's thesis completed in our group.
Three papers accepted to ICML 2023. Congrats to Batu, Anas, and Ilyas.
Congratulations to Xiang Li for receiving the ETH medal for his outstanding Master’s thesis. See the news item.
Two papers have been accepted to AISTATS 2023, and our paper on TiAda: A Time-scale Adaptive Algorithm for Nonconvex Minimax Optimization has been accepted to ICLR 2023.
Niao visited the University of Vienna in January and gave a lecture series on reinforcement learning at the Vienna Graduate School on Computational Optimization.
Two journal papers accepted: our paper on Sample Complexity and Overparameterization Bounds for Temporal Difference Learning with Neural Network Approximation has been accepted to the IEEE Transactions on Automatic Control, and our paper on A discrete-time switching system analysis of Q-learning has been accepted to the SIAM Journal on Control and Optimization.
Niao gave a talk on Adaptive Min-Max Optimization at the NeurIPS Workshop on Optimization for Machine Learning in New Orleans and at the NUS Workshop on Optimization in the Big Data Era at the National University of Singapore.
Congratulations to ODI members Junchi, Jiawei, Giorgia, and Siqi for being rated as top reviewers for NeurIPS 2022.
Niao gave a talk on Nonconvex min-max optimization: fundamental limits, acceleration, and adaptivity at The Mathematics of Machine Learning Workshop in Bilbao, Spain.
Several papers from group members have been accepted to NeurIPS 2022.
Niao gave a talk on Complexities of Actor-critic Methods for Regularized MDPs and POMDPs at the 15th European Workshop on Reinforcement Learning (EWRL 2022) in Milan, Italy, and the WiOpt workshop on Reinforcement Learning and Stochastic Control in Queues and Networks.
Niao gave a lecture on the Interplay between Optimization and Reinforcement Learning at the Sargent Centre Summer School on Data-Driven Optimisation at Imperial College London, UK.
Congratulations to Yifan, Siqi, and Semih on their new journeys. Yifan is starting a postdoc position at EPFL, Switzerland; Siqi will be a Rufus Isaacs Postdoctoral Fellow at Johns Hopkins University, USA; and Semih will start a faculty position in the Department of Mathematics at RWTH Aachen, Germany.
Group members Yifan, Anas, and Niao gave talks and organized sessions at the Seventh International Conference on Continuous Optimization (ICCOPT) at Lehigh University, USA.
Niao gave a talk on nonconvex minimax optimization and Junchi presented a poster at the ELLIS Theory Workshop in Arenzano, Italy.
Together with Florian Dörfler, Niao co-organized the NCCR Symposium on Systems Theory of Algorithms at ETH Zurich and also gave a talk on Q-learning through the Lens of Dynamical Systems: from asymptotics to non-asymptotics.
Niao visited the Simons Institute at UC Berkeley for six weeks and participated in the Learning and Games program. During the visit, Niao gave a talk on Universal Acceleration for Minimax Optimization in the visitor seminar series and another on single-loop algorithms for unbalanced minimax optimization at the workshop on Adversarial Approaches in Machine Learning.
Niao gave a seminar talk on Three common RL tricks: why and when do they work? at the Machine Learning Genoa Center (MaLGa) in Italy and a virtual talk in the Control Seminar series at the University of Oxford, UK.
Together with Yurii Nesterov, Niao gave week-long lectures at the Zinal Summer School: Data Science, Optimization and Operations Research organized by TRANSP-OR from EPFL. The lecture slides on Reinforcement Learning: Optimization and Dynamical Systems Perspectives are available here.
Together with Agarwal, Du, Szepesvári, and Yang, we organized the ICML Workshop on Reinforcement Learning Theory, July 24-25, a virtual event.
Niao, jointly with Bo Dai from Google Brain, gave lectures on Reconciling Reinforcement Learning: Optimization, Generalization, and Exploration at the EPFL and ETHZ Summer School on Foundations and Mathematical Guarantees of Data-driven Control. The 8-hour video recording is available here.
Our group has moved to ETH Zurich, Switzerland.
Our group got six papers accepted to NeurIPS 2020. Check the papers here.
We are excited to be a part of the USDA-NIFA AI Institute on Next Generation Food Systems (AIFS, a joint effort led by UC Davis, UC Berkeley, Cornell, and UIUC). Check the news here.
Yingxiang graduated and started his next position as a research scientist at ByteDance in Seattle.
Donghwan Lee started a faculty position at KAIST.
Niao was elected as a 2020-21 Beckman CAS Fellow by the Center for Advanced Studies at UIUC.