Computer Science student at Georgia Tech focused on machine learning systems, distributed infrastructure, and scalable backend engineering.
I’m a Computer Science student at Georgia Tech interested in building large-scale systems that power modern machine learning applications. My work spans applied ML research, backend infrastructure, and high-performance computing, with a focus on how intelligent systems operate reliably and efficiently at scale.
I especially enjoy problems at the intersection of machine learning, distributed systems, and performance engineering, whether that means designing a scalable architecture or squeezing more efficiency out of an existing system.
- Computer Science @ Georgia Tech (Rising Junior)
- Researching concept drift detection and dataset evolution in machine learning systems
- Interested in ML infrastructure, recommendation systems, and distributed compute
- Experience with CUDA, distributed GPU systems, and LLM-based applications
- Contact: [email protected]
Benchmarking distributed GPU communication primitives using NCCL and MPI on Slurm clusters.
Tech: CUDA, NCCL, MPI, Slurm
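As a flavor of what the benchmarking involves, here is a minimal sketch of the "bus bandwidth" metric commonly reported for allreduce benchmarks (following the convention used by nccl-tests: a ring allreduce moves 2*(n-1)/n times the buffer size per rank). The function name and the sample numbers are illustrative, not from the project itself.

```python
def allreduce_bus_bandwidth(bytes_per_rank: float, n_ranks: int, seconds: float) -> float:
    """Estimate effective bus bandwidth (GB/s) for a ring allreduce.

    A ring allreduce moves 2*(n-1)/n times the buffer size per rank, so
    the conventional "busbw" metric scales the achieved algorithm
    bandwidth by that factor (as in the nccl-tests benchmarks).
    """
    algo_bw = bytes_per_rank / seconds       # bytes/s actually achieved
    factor = 2 * (n_ranks - 1) / n_ranks     # ring data-movement multiplier
    return algo_bw * factor / 1e9            # convert to GB/s

# Hypothetical numbers: a 1 GiB buffer across 8 ranks in 25 ms
print(round(allreduce_bus_bandwidth(2**30, 8, 0.025), 1))  # → 75.2
```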
Optimized CUDA kernels for GEMM and parallel reduction using shared-memory tiling and memory coalescing.
Tech: CUDA, C++
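The shared-memory tiling idea can be sketched in plain Python as a blocked matrix multiply. This is a CPU analogue for illustration only (the actual kernels are CUDA); the tile size here is arbitrary.

```python
def tiled_matmul(A, B, tile=2):
    """Blocked matrix multiply: a CPU analogue of shared-memory tiling.

    Each (tile x tile) block of C is computed by walking tiles along the
    shared dimension. On a GPU this corresponds to staging tiles of A and
    B in shared memory so each global-memory element is reused `tile`
    times by a thread block instead of being re-fetched per product.
    """
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):             # tile row of C
        for j0 in range(0, m, tile):         # tile column of C
            for k0 in range(0, k, tile):     # tiles along the shared dim
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for kk in range(k0, min(k0 + tile, k)):
                            C[i][j] += A[i][kk] * B[kk][j]
    return C
```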
AI-powered learning companion for Georgia Tech students using GraphRAG and retrieval-based LLM systems.
Tech: React, FastAPI, Neo4j, MongoDB, AWS
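A rough sketch of the GraphRAG-style retrieval step, assuming a hypothetical in-memory graph rather than the project's actual Neo4j schema: seed entities from the query are expanded over the knowledge graph, and the traversed edges become context for the LLM.

```python
def graph_retrieve(graph, seed_entities, hops=1):
    """Expand seed entities over a knowledge graph to build LLM context.

    `graph` maps an entity to a list of (neighbor, relation) edges; the
    real project stores this in Neo4j, so this dict is a stand-in.
    Returns edge strings in traversal order, suitable for a prompt.
    """
    context, frontier, seen = [], list(seed_entities), set(seed_entities)
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for neighbor, relation in graph.get(node, []):
                context.append(f"{node} -[{relation}]-> {neighbor}")
                if neighbor not in seen:
                    seen.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return context
```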
Machine learning research toolkit for detecting distribution shifts in streaming datasets.
Tech: Python, scikit-learn
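As a minimal sketch of one common windowed approach (not the toolkit's actual detectors, which are scikit-learn based): compare a current window against a reference window and flag drift when the window mean departs from the reference by too many standard errors.

```python
import statistics

def mean_shift_drift(reference, window, threshold=3.0):
    """Flag drift when the window mean departs from the reference mean
    by more than `threshold` standard errors (a z-test style check; a
    simplification of real streaming drift detectors).
    """
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    se = sigma / len(window) ** 0.5          # standard error of the window mean
    z = abs(statistics.mean(window) - mu) / se
    return z > threshold
```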
I’m always open to collaborating on impactful projects and to internship opportunities where I can keep growing as an engineer.