```
 █████╗ ███╗   ███╗██████╗ ██╗████████╗
██╔══██╗████╗ ████║██╔══██╗██║╚══██╔══╝
███████║██╔████╔██║██████╔╝██║   ██║
██╔══██║██║╚██╔╝██║██╔══██╗██║   ██║
██║  ██║██║ ╚═╝ ██║██║  ██║██║   ██║
╚═╝  ╚═╝╚═╝     ╚═╝╚═╝  ╚═╝╚═╝   ╚═╝
```
MS Computer Science @ Arizona State University · GPA 4.0
LLM Researcher · AI/ML Engineer · ex-Quantitative Analytics Intern (Gen AI) @ Wells Fargo · ex-Data Scientist @ Bajaj Finance
I work at the intersection of LLM research and production ML — evaluating language models at scale, building reasoning systems, and shipping data products that reach millions of users.
- 🔭 Researching multilingual LLM performance gaps and cross-lingual reasoning at ASU
- 🏦 ex-Wells Fargo · Built GenAI QA systems and LLM evaluation frameworks on the Consumer GenAI team as a Quantitative Analytics intern
- 🏢 ex-Bajaj Finance · Shipped churn prediction, notification recommendation engine, and forecasting pipelines to production
- 🎓 B.Tech CS @ VIT Vellore · Merit Scholarship, Top 10 rank
- 🏆 Runner-up @ Voxel51 Visual AI Hackathon · Finalist @ JPMorgan Code For Good
Identifying LLM Performance Gaps Across Languages
Benchmarked OLMo-2-7B across ~100 languages and 10 datasets. Measured the correlation between pre-training token exposure and downstream accuracy, and tested English Chain-of-Thought as a cross-lingual reasoning bridge.
OLMo-2 · vLLM · NLLB-200 · Hunyuan-MT · FastText · Self-Consistency
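The self-consistency technique listed above can be sketched in a few lines (a minimal illustration; the sampler here is a hypothetical stand-in for a temperature-sampled vLLM call, not the project's actual code):

```python
from collections import Counter
import itertools

def self_consistency_answer(sample_fn, question, n_samples=5):
    """Self-consistency: sample several chain-of-thought traces and
    majority-vote over their final answers.

    sample_fn(question) -> the final answer extracted from one sampled
    reasoning trace (a stand-in for a real model call at temperature > 0).
    """
    answers = [sample_fn(question) for _ in range(n_samples)]
    majority_answer, _ = Counter(answers).most_common(1)[0]
    return majority_answer

# Toy deterministic "sampler": individual traces can disagree, but the
# majority vote recovers the consistent answer.
_traces = itertools.cycle(["42", "17", "42", "42", "42"])
answer = self_consistency_answer(lambda q: next(_traces), "What is 2 * 21?")
print(answer)  # -> 42
```

Voting over sampled traces is what makes this usable as a cross-lingual probe: the same aggregation runs unchanged whether the reasoning trace is in English or the target language.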
Self-Taught Reasoner for Mathematical Problem Solving
Implemented the STaR methodology on LLaMA 3.2-3B — iterative rationale refinement with hint-guided correction, outperforming both vanilla SFT and zero-shot CoT on GSM8K.
LLaMA-3.2 · vLLM · TRL · SFT · Chain-of-Thought
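The STaR loop described above reduces to a simple bootstrap; a sketch under stated assumptions (`generate`, `generate_with_hint`, and `finetune` are hypothetical stand-ins for model sampling and an SFT step, e.g. via TRL):

```python
def star_round(problems, generate, generate_with_hint, finetune):
    """One STaR bootstrap round: keep rationales whose final answer is
    correct; for failures, retry with the gold answer as a hint
    (rationalization); fine-tune only on verified traces."""
    train_set = []
    for question, gold in problems:
        rationale, answer = generate(question)
        if answer != gold:                    # hint-guided correction pass
            rationale, answer = generate_with_hint(question, gold)
        if answer == gold:                    # only verified traces survive
            train_set.append((question, rationale, gold))
    finetune(train_set)
    return train_set

# Toy stand-ins: the first problem is solved directly, the second only
# once the hint is supplied, the third never.
problems = [("p1", "4"), ("p2", "9"), ("p3", "1")]
generate = lambda q: ("trace", "4") if q == "p1" else ("trace", "0")
generate_with_hint = lambda q, g: ("trace*", g) if q == "p2" else ("trace*", "0")
kept = star_round(problems, generate, generate_with_hint, lambda ds: None)
print(len(kept))  # -> 2 (p1 and p2 enter the next SFT set)
```

Iterating this round with the newly fine-tuned model is what lets the approach outperform single-pass SFT: each pass harvests rationales the previous model could not yet produce.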
Autonomous Grocery Shopping Agent
Conversational AI agent that turns natural-language requests into checkout-ready grocery carts via the Instacart API, knowledge graphs, and LLM orchestration, with automated checkout handled by BrowserUse and Playwright.
LLM Agents · Knowledge Graphs · BrowserUse · Playwright
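The cart-building core of such an agent can be sketched as follows (a toy resolution step; in the real system an LLM extracts the items and the Instacart API supplies the catalog, and the graph below is fabricated for illustration):

```python
# Toy knowledge graph: generic grocery item -> candidate product nodes.
TOY_KG = {
    "milk": [{"sku": "sku-001", "name": "Whole Milk, 1 gal"}],
    "eggs": [{"sku": "sku-002", "name": "Large Eggs, 12 ct"}],
}

def build_cart(items, kg=TOY_KG):
    """Resolve free-text grocery items into checkout-ready SKUs.

    A production agent would have an LLM extract `items` from the user's
    request and query a live catalog; here resolution is a plain lookup.
    """
    cart, unresolved = [], []
    for item in items:
        candidates = kg.get(item.strip().lower(), [])
        if candidates:
            cart.append(candidates[0]["sku"])  # take the top-ranked match
        else:
            unresolved.append(item)            # surface back to the user
    return cart, unresolved

cart, missing = build_cart(["Milk", "eggs", "saffron"])
print(cart, missing)  # -> ['sku-001', 'sku-002'] ['saffron']
```

Keeping an explicit `unresolved` list is the key design choice: the conversational layer can ask a follow-up question instead of silently checking out an incomplete cart.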