Personal portfolio site showcasing AI systems, operational software, and data platform projects. Built with React + TypeScript, deployed on GitHub Pages.
Verified now (2026-04-07): local typecheck, content verification, tests, and production build were rerun from the repository root, and the current public deployments plus third-party assets were rechecked from the live URLs.
- Career:
  - 국군지휘통신사령부 / 제1정보통신단 (Armed Forces Command & Communications Command / 1st Information & Communications Group): strategic command network and security operations / team lead, 2023.11 ~ 2025.05
  - ATOM TECH SOLUTIONS LTD: Backend / Full Stack Engineer Intern, 2025.06 ~ 2025.09
  - Microsoft AI School, 8th cohort: Trainee, 2025.09 ~ 2026.02
- Languages: Korean (Native), English (Business / Working), Japanese (Business / Working)
- Certifications: Microsoft AI-900, Snowflake SnowPro Associate, Databricks Platform Architect (AWS / GCP), Databricks Fundamentals, Palantir Foundry Data Engineer Associate, Palantir Foundry Foundations, Datadog Observability, IBM AI / Cloud / Cyber Fundamentals, SAP Cloud Platform Integration
If you are screening for a specific lane, use this order first:
- Applied AI / LLM systems: stage-pilot -> AegisOps -> frontier-llm-review-brief
- Solutions / field engineering: AegisOps -> enterprise-llm-adoption-kit -> aws-genai-application-packet or palantir-application-packet
- Data + AI platform: Nexus-Hive -> lakehouse-contract-lab -> snowflake-review-brief or databricks-review-brief
- Network / security operations: nw-service-assurance-workbench -> security-threat-response-workbench -> portfolio
If you only have time for one artifact, stage-pilot is the cleanest proof of AI reliability, AegisOps is the clearest operator-facing applied system, and Nexus-Hive is the fastest data-platform proof.
- Big tech / applied AI systems: big-tech-systems-review-brief
- AWS / GenAI SA packet: aws-genai-application-packet
- Databricks Korea packet: databricks-korea-application-packet
- Snowflake Korea packet: snowflake-korea-application-packet
- OpenAI Seoul packet: openai-seoul-application-packet
- Anthropic Seoul packet: anthropic-seoul-application-packet
- Frontier LLM / runtime reliability: frontier-llm-review-brief
- Snowflake review brief: snowflake-review-brief
- Databricks review brief: databricks-review-brief
- Palantir / operational AI packet: palantir-application-packet
- Palantir review brief: palantir-review-brief
```sh
git clone https://github.com/KIM3310/doeon-kim-portfolio.git
cd doeon-kim-portfolio
npm install
npm run dev
```

Open http://localhost:5173 in your browser.
The portfolio is organized around a few focus areas:
- Runtime and reliability systems: StagePilot, AegisOps, ops-reliability-workbench
- Operational infrastructure systems: nw-service-assurance-workbench, security-threat-response-workbench
- Operational workflow systems: memory-test-master-change-gate, fab-ops-yield-control-tower, regulated-case-workbench, smallbiz-ops-copilot
- Data and analytics systems: enterprise-llm-adoption-kit, lakehouse-contract-lab, Nexus-Hive
- Applied vision systems: retina-scan-ai, weld-defect-vision
- Supporting experiments and archived context: Twincity UI, The Logistics Prophet, Signal Risk Lab, ogx, SteadyTap, ecotide, the-savior, kbbq-idle-unity
The public site intentionally leads with authored, reviewable public proof. Private workbenches remain part of the deeper interview story, but they are no longer treated as the first thing a recruiter should read.
- `components/`, `constants.ts`, and `content/` define the main portfolio experience.
- `public/briefs/` contains optional walkthrough pages and supporting review guides.
- `public/fabpilot-live-x.html` and `public/fabpilot-dossier.html` preserve the archived ops case study.
- `docs/` holds supporting runtime and resume pipeline notes.
- `server/` exposes the optional archived runtime bridge used by the older ops surface.
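As an illustration of how a content layer like this is often wired, the sketch below shows one plausible shape for a project entry and a grouping helper. This is a hypothetical example only: the actual field names and types in `constants.ts` and `content/` may differ.

```typescript
// Hypothetical project-entry shape; the real constants.ts may differ.
type ProjectEntry = {
  slug: string;       // repo name, e.g. "stage-pilot"
  focusArea: string;  // grouping shown on the site
  proofLabel: "live verified" | "review-only live" | "local-first / supporting";
  briefPath?: string; // optional walkthrough under public/briefs/
};

const flagships: ProjectEntry[] = [
  { slug: "stage-pilot", focusArea: "Runtime and reliability systems", proofLabel: "live verified" },
  { slug: "Nexus-Hive", focusArea: "Data and analytics systems", proofLabel: "live verified" },
];

// Group entries by focus area for rendering the section lists.
function groupByFocus(entries: ProjectEntry[]): Map<string, ProjectEntry[]> {
  const out = new Map<string, ProjectEntry[]>();
  for (const e of entries) {
    const bucket = out.get(e.focusArea) ?? [];
    bucket.push(e);
    out.set(e.focusArea, bucket);
  }
  return out;
}
```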
- stage-pilot
- AegisOps
- tool-call-finetune-lab
- Nexus-Hive
- enterprise-llm-adoption-kit
- lakehouse-contract-lab
These six repos are the clearest public proof for the current hiring story: applied AI reliability, governed analytics, enterprise AI delivery, and data-platform integration. Most of them include a built-in resource pack, review pack, or release-readiness surface so reviewers can inspect the strongest proof path without private data or API keys.
For tool-call-finetune-lab, the strongest public proof is the post-training pipeline, BFCL-aligned harness, Kaggle-ready notebook, and checked-in release-status artifacts. External Kaggle and Hugging Face publication should be treated as separately tracked proof, not silently assumed.
For targeted telecom, NOC, or cloud security loops, the live role-fit surfaces are nw-service-assurance-workbench and security-threat-response-workbench. They are intentionally separate from the six-flagship AI/data story so recruiters can inspect them only when the role actually benefits from that operator-facing context.
For the cloud security monitoring portfolio atlas itself, the current dual deployment is:
- Desktop: https://cloud-security-monitoring.pages.dev/
- Mobile: https://cloud-security-monitoring-mobile.pages.dev/
- memory-test-master-change-gate
- ops-reliability-workbench
- regulated-case-workbench
- retina-scan-ai
- Upstage-DocuAgent
These systems are part of the deeper role-fit story and are shared selectively in targeted interview loops. The public site keeps them behind the public-first flagship set so the portfolio stays legible to external reviewers.
Credential note: the public site keeps certification names and issuers visible, while issuer validation links or IDs are shared in application packets or on request.
Cross-repo verification and residual-risk ledger: KIM3310/PORTFOLIO_VERIFICATION_AND_RISK_LEDGER.md
Deployment and external resource audit: KIM3310/DEPLOYMENT_EXTERNAL_RESOURCE_AUDIT_2026-04-07.md
- stage-pilot — GCS + BigQuery benchmark publish proof
- AegisOps — GCS + BigQuery incident artifact / analytics proof
- Nexus-Hive — live Snowflake + live Databricks governed SQL proof, now with headless OAuth-ready Databricks auth
- lakehouse-contract-lab — Snowflake + Databricks gold KPI export proof, now service-principal-ready on Databricks
- enterprise-llm-adoption-kit — AWS Bedrock runtime + Snowflake/Databricks eval/audit persistence, plus Databricks MLflow/Delta on headless OAuth auth
- retina-scan-ai — AWS S3 review-safe artifact export
- fab-ops-yield-control-tower — AWS S3 + DynamoDB + SQS handoff/audit export path
- Upstage-DocuAgent — GCS review-safe document artifact export
| Label | Meaning | Current examples |
|---|---|---|
| live verified | real cloud/platform smoke or bounded live route verified | stage-pilot, AegisOps, enterprise-llm-adoption-kit, Nexus-Hive, lakehouse-contract-lab, memory-test-master-change-gate, fab-ops-yield-control-tower, retina-scan-ai, Upstage-DocuAgent |
| review-only live | public/runtime surface is live, but claims intentionally stay bounded and reviewer-safe | regulated-case-workbench, signal-risk-lab, nw-service-assurance-workbench, security-threat-response-workbench |
| local-first / supporting | strongest proof is local, staged, or supporting rather than public live | Aegis-Air, ops-reliability-workbench, ogx, dv-regression-lab, twincity-ui, the-logistics-prophet |
Public datasets used for richer local review are staged under a local-only cache directory and linked into individual repos as local-only files. Raw source files are not committed to GitHub. GitHub surfaces only keep:
- dataset provenance
- staged-data presence / row counts / sample counts
- no-key review routes
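The committed surface for each staged dataset can be sketched as a small manifest record. This is a hypothetical shape, not the actual format the repos use; field names and the validation rules are assumptions for illustration.

```typescript
// Hypothetical staged-dataset manifest: what a repo commits to GitHub
// instead of raw data files (provenance + counts + a no-key route).
type StagedDatasetManifest = {
  source: string;      // e.g. "Kaggle: andrewmvd/retinal-disease-classification"
  rowCount: number;    // staged-data presence signal
  sampleCount: number; // size of the no-key review sample
  reviewRoute: string; // local route that works without API keys
};

// Return a list of problems; an empty list means the manifest is consistent.
function validateManifest(m: StagedDatasetManifest): string[] {
  const problems: string[] = [];
  if (m.rowCount < 0) problems.push("rowCount must be non-negative");
  if (m.sampleCount > m.rowCount) problems.push("sample cannot exceed staged rows");
  if (!m.reviewRoute.startsWith("/")) problems.push("review route should be a local path");
  return problems;
}
```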
Main staged sources currently include:
- Kaggle: andrewmvd/retinal-disease-classification
- Kaggle: sukmaadhiwijaya/welding-defect-object-detection
- Kaggle: paresh2047/uci-semcom
- Kaggle: olistbr/brazilian-ecommerce
- Kaggle: javierspdatabase/global-online-orders
- Kaggle: suraj520/customer-support-ticket-dataset
- Kaggle: sanketgadekar/legal-indian-contract-clauses-dataset
- Kaggle: anshankul/ibm-amlsim-example-dataset
- Kaggle: vipulshinde/incident-response-log
Rebuild flow: use `KIM3310/scripts/sync_open_data.py` from the profile repo to refresh the local cache and relink staged files.
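Conceptually, the relink step places cache entries into each repo's gitignored data directory without copying or committing the raw bytes. The sketch below shows that idea; it is an illustrative assumption, not the actual logic of `sync_open_data.py`, and the directory layout is hypothetical.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Sketch: link one file from the local-only cache into a repo's data
// directory. The data directory is assumed to be gitignored, so the
// raw bytes never reach GitHub.
function relinkStagedFile(cacheDir: string, repoDataDir: string, file: string): string {
  const src = path.join(cacheDir, file);
  const dest = path.join(repoDataDir, file);
  fs.mkdirSync(repoDataDir, { recursive: true });
  if (fs.existsSync(dest)) fs.rmSync(dest); // replace a stale link
  fs.symlinkSync(src, dest);                // local-only link to the cache
  return dest;
}
```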
```sh
npm install
npm run dev     # local preview
npm run verify  # snapshot checks
```

`npm run verify` is the fastest way to verify the current portfolio snapshot before publishing.
The site is deployed at https://kim3310.github.io/doeon-kim-portfolio/ via GitHub Pages.
The archived fab ops case study includes an optional model-backed runtime for generating live operator briefs.
```sh
npm run fabtwin:runtime:mock
```

See docs/FABPILOT_GEMINI_RUNTIME.md for setup details.
The site prioritizes clarity and working demos over decorative complexity.