I am an ELLIS PhD student at Saarland University and the University of Edinburgh, working with Prof. Vera Demberg and Prof. Antonio Vergari.
My research focuses on evaluating and improving the reliability of Large Language Models (LLMs). I am specifically interested in understanding the fundamental limitations of LLM architectures through expressivity frameworks and mechanistic interpretability, and in addressing these limitations to improve LLMs' reliability in practice.
Previously, I completed my Master's in Computational Linguistics at the University of Tübingen, where I worked with Prof. Detmar Meurers and Dr. Çağrı Çöltekin. Before that, I worked as a Research Assistant at IIT Madras with Prof. Mitesh Khapra.
Outside of the lab, I help start-ups integrate GenAI into their products. In my free time, I enjoy hiking and reading poetry. I have also been learning to play the violin for about a year now.
P.S. If you find me with my violin in my hand, you might want to look for some earplugs ;P
News
Jan 2026: Our paper "Bridging Fairness and Explainability: Can Input-Based Explanations Promote Fairness in Hate Speech Detection?" accepted at ICLR 2026!
Nov 2025: Our paper on B-cos LM accepted at TMLR!
Oct 2025: Our ProofTeller paper, showcasing the recency bias of LLMs during reasoning, accepted at IJCNLP-AACL 2025!
Sep 2025: Our work explaining the effects of pre-training and scale on the architectural abilities of Transformers accepted at NeurIPS 2025!
Aug 2025: Our paper on improving factuality and attribution in multi-hop medical reasoning accepted at EMNLP 2025!
Sep 2023: Joined Saarland University as a PhD student in Prof. Vera Demberg’s group.