AI/LLM Engineer building production AI systems, agent tooling, and evaluation workflows.
Most of my recent work lives in private and organization repositories, so this profile focuses on delivery themes, system scope, and working style.
I keep this README concise, delivery-focused, and actively maintained.
- Applied GenAI, RAG systems, and evaluation workflows
- Agent tooling and AI-assisted developer workflows
- AWS-based backend delivery, automation, and observability
- Reliability, latency, and operational hardening for LLM systems
Designed and shipped a Korean-language AI contact center MVP on Amazon Connect, Lex, Bedrock, Lambda, and DynamoDB, with observability and operational runbooks. The work covered implementation, deployment readiness, and test coverage rather than stopping at a demo-only prototype.
Built golden-dataset generation flows, evaluation guidebooks, latency measurement scripts, retry automation, and delivery packaging for enterprise retrieval systems. A recurring theme in this work is making GenAI quality measurable and handoff-friendly.
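As a flavor of that measurement work, here is a minimal sketch of the kind of latency-measurement helper such scripts might contain; the function name and percentile choices are illustrative, not the actual delivery code.

```python
import statistics
import time

def measure_latency_ms(fn, inputs):
    """Time fn over each input and return p50/p95 latency in milliseconds.

    A tiny illustrative harness: real evaluation runs would also record
    per-item metadata, warm-up calls, and failures.
    """
    samples = []
    for item in inputs:
        start = time.perf_counter()
        fn(item)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50": statistics.median(samples),
        # Nearest-rank p95; fine for small sample counts.
        "p95": samples[min(len(samples) - 1, int(0.95 * len(samples)))],
    }
```

Reporting percentiles rather than averages keeps tail latency visible, which is usually what matters for LLM-backed endpoints.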
Improved resilience and efficiency across serverless chatbot systems with exponential backoff, connection-pool tuning, response-stream metrics, readiness checks, and infrastructure hardening.
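For concreteness, the retry pattern referenced above looks roughly like this full-jitter exponential backoff sketch (parameter names and defaults are illustrative assumptions, not the production values):

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Call fn(), retrying on exception with exponential backoff and full jitter.

    Illustrative sketch: production code would catch only transient error
    types (throttling, timeouts) instead of bare Exception.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # Out of attempts; surface the last error.
            # Cap the exponential delay, then sleep a random fraction of it
            # (full jitter) so concurrent retries don't synchronize.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Full jitter spreads retries across the window, which avoids the thundering-herd spikes that fixed backoff schedules create under load.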
The section below is refreshed automatically from GitHub activity data.
- Last updated: 2026-04-01
- 2,411 contributions in the last 12 months
- 19 pull request contributions and 40 commit contributions across 7 repositories
- 14 owned public repositories and 32 owned private repositories (excluding forks)
- Private and internal work represent most of the recent contribution volume
- Minimal, verifiable changes over flashy complexity
- Strong bias toward automation, documentation, and repeatable workflows
- Comfortable moving from prototype exploration to operational delivery