Competition: MedGemma Impact Challenge 2026 Submission Category: Agentic Workflow Prize + Novel Task Research Deadline: February 24, 2026
In Memory of N. K. Trinh (†2006), whose life was lost to skin cancer. This project is built with the hope that earlier awareness and better tools can help others seek care in time.
See DEDICATION.md for full tribute.
DermaCheck is an educational iOS mobile app that combines agentic AI workflows with temporal skin analysis to help users track skin spots over time and understand when to seek professional dermatological care.
Key Innovation: A 5-node LangGraph agent orchestrates MedGemma 1.5 4B with conditional routing, tool orchestration, and educational knowledge synthesis—demonstrating production-ready agentic architecture for healthcare applications.
Skin cancer is the most common cancer in the United States, with over 5 million cases treated annually. Early detection is critical—the 5-year survival rate for melanoma detected early is 99%, dropping to 27% for advanced stages.
However, people face significant barriers to effective skin cancer monitoring:
- Cost: Dermatology visits average $150-300 without insurance, prohibitive for routine monitoring
- Geography: 60% of US counties have zero dermatologists, creating "medical deserts"
- Wait Times: Average 32 days for dermatology appointments in the US
- Knowledge Gap: Uncertainty about what changes warrant professional attention
- No Temporal Context: Single-image analysis without change tracking over time
- Anxiety-Inducing: Diagnostic-style predictions without educational framing
- Black Box AI: No transparency into analysis reasoning or confidence
- Poor Safety Framing: Inadequate disclaimers about limitations
Many people delay seeking care because they're unsure if changes are significant, while others experience overwhelming anxiety from not knowing what's normal. There's a critical gap between "I noticed a change" and "I should see a doctor."
DermaCheck provides calm, educational guidance through an agentic AI architecture that combines:
- Multi-Agent Orchestration: 5-node LangGraph workflow with intelligent routing
- Temporal Analysis: Track skin spots over time with change detection
- Educational Framing: ABCDE dermatological framework with plain-language explanations
- Production Deployment: FastAPI backend on Render.com with MedGemma via Vertex AI
The app transforms "worried moments" into informed monitoring without creating alarm.
DermaCheck implements a production-ready 5-node LangGraph agent with conditional routing and tool orchestration:
Architecture Overview:
User Photos → Router → Quality Check → Vision Analysis → Knowledge Base → Synthesis → Validation → Results
↓ ↓ ↑
└───────────┴──────────────────────────────────────────────┘
(conditional routing based on quality/changes)
Node Descriptions:
-
Router Node
- Entry point for workflow execution
- Initializes state and routing metadata
- Triggers quality assessment
-
Quality Check Node
- Validates image resolution (≥640x480)
- Checks file size (10KB-10MB)
- Routes to validation if quality issues detected (skips expensive API calls)
-
Vision Analysis Node
- Calls MedGemma 1.5 4B via Vertex AI Model Garden
- Analyzes temporal changes between photo pairs
- Extracts ABCDE features (Asymmetry, Border, Color, Diameter, Evolution)
- Determines if changes detected (boolean flag)
-
Knowledge Base Node
- Keyword-based retrieval of ABCDE educational content
- Triggered only when changes are detected
- Skipped for stable spots (efficiency optimization)
-
Synthesis Node
- Combines vision analysis + educational content
- Generates structured markdown output
- Assigns urgency level (monitor/schedule/seek-care)
- Creates timeline narrative
-
Validation Node
- Final safety checks
- Ensures disclaimers present
- Validates confidence thresholds
- Returns user-facing response
Conditional Routing Logic:
| Scenario | Path | Tools Used | Rationale |
|---|---|---|---|
| Quality issues | Router → Validation | Quality checker only | Skip expensive MedGemma calls, provide user guidance |
| Change detected | Router → Vision → Knowledge → Synthesis → Validation | All tools | Full educational pipeline |
| No change | Router → Vision → Synthesis → Validation | Quality + MedGemma | Skip knowledge retrieval (efficiency) |
| API error | Router → Vision → Synthesis → Validation | Quality + (failed MedGemma) | Graceful degradation |
State Management:
Uses TypedDict with annotated reducers for parallel execution safety:
class AgentState(TypedDict):
# Inputs
image_before: str # Base64
image_after: str # Base64
# Routing metadata (with reducer to prevent InvalidUpdateError)
routing_path: Annotated[list[str], lambda x, y: x + y]
# Vision analysis
change_detected: bool | None
medgemma_response: str | None
confidence_score: float | None
# Educational content
educational_content: list[dict] | None
abcde_features: list[str] | None
# Outputs
final_response: str | None
urgency_level: str | None # "monitor" | "schedule" | "seek-care"
error_message: str | NoneTool Orchestration:
- HuggingFace Inference Client: Async API calls to Vertex AI endpoints
- Keyword Knowledge Base: O(1) lookup for ABCDE educational content (6 topics)
- Pillow Image Processing: Quality validation without external dependencies
Observability:
- LangSmith Tracing: Optional workflow debugging (enabled via env var)
- Progressive UI Visualization: Real-time step indicators in React Native app
- Routing Path Logging: Complete execution trace in
routing_pathstate field
Production Deployment:
- Backend: FastAPI on Render.com free tier
- Async Execution: All nodes use
async/awaitfor non-blocking I/O - Cold Start Handling: 180s timeout for free tier spin-up
- Error Handling: Graceful degradation with
error_messagerouting
While DermaCheck production deployment uses base MedGemma 1.5 4B (hosted on Vertex AI Model Garden), we conducted research into fine-tuning MedGemma for temporal change detection—a novel task not explicitly supported by the base model.
Research Contributions:
-
Synthetic Dataset Creation
- Generated 900 temporal pairs from HAM10000 dermatoscopic images
- Dual augmentation strategy simulating skin changes over time
- Published dataset:
dunktra/dermacheck-temporal-pairs(HuggingFace) - 70/15/15 train/val/test split with stratified sampling
-
Memory-Efficient Fine-Tuning
- LoRA (Low-Rank Adaptation) configuration:
- Rank: 16, Alpha: 16
- Target modules: All 4 attention projections (q/v/k/o)
- Trainable parameters: ~1-2% of total model
- 4-bit NF4 quantization for Kaggle P100 GPU (16GB VRAM)
- Frozen SigLIP vision encoder (eliminates 400M params from backward pass)
- Eager attention mode (prevents 4GB SDPA memory spike)
- LoRA (Low-Rank Adaptation) configuration:
-
Training Infrastructure
- Platform: Kaggle Notebooks (free P100 GPU)
- Framework: HuggingFace Transformers + PEFT
- Batch configuration: Batch size 2, gradient accumulation 4 (effective batch 8)
- Structured output format: JSON with
has_changedfield
-
Baseline Performance
- Base MedGemma F1: 0.8797 on temporal change detection
- Demonstrates strong inherent temporal reasoning capabilities
- Validates approach for production use without fine-tuning
Production Decision:
DermaCheck uses base MedGemma in production because:
- ✅ Strong baseline performance (F1 0.8797) sufficient for educational application
- ✅ Simpler deployment (Vertex AI Model Garden managed endpoint)
- ✅ No custom model hosting required
- ✅ Faster iteration during development
Research Value:
The fine-tuning research demonstrates:
- Novel task identification (temporal skin lesion analysis)
- Reproducible training pipeline (Kaggle, HuggingFace)
- Memory-efficient techniques for consumer GPU constraints
- Public dataset contribution for future research
Agent Workflow Visualization:
- Progressive step indicators show agent execution in real-time
- Expandable accordion reveals routing path and node details
- Visual feedback: "Analyzing changes..." → "Checking quality..." → "Synthesizing results..."
Markdown Rendering:
- Lightweight custom renderer for MedGemma markdown output
- Supports bold, headers, lists (3 common patterns)
- Native React Native rendering (not HTML or WebView)
Comparison Caching:
- AsyncStorage cache layer for temporal analysis results
- First analysis: 30s (runs full agent)
- Repeat views: <100ms (instant load from cache)
- 99.7% latency reduction for cached comparisons
- Visual indicator: 💾 badge shows cached results
Track Workflow Continuity:
- "Add to Existing Spot" workflow in Analyze tab
- Each photo has its own analysis (not just latest)
- PhotoAnalysisScreen for viewing individual photo results
- Clickable timeline photos with full analysis history
┌─────────────────────────────────────────────────────────────┐
│ Frontend: React Native iOS App (TypeScript) │
│ - 4-tab navigation: Analyze, Track, Learn, Settings │
│ - AsyncStorage for local persistence │
│ - React Navigation with nested stack navigators │
│ - Markdown rendering for educational content │
│ - Agent workflow visualization │
└────────────────────┬────────────────────────────────────────┘
│ HTTPS API calls
┌────────────────────▼────────────────────────────────────────┐
│ Backend: FastAPI + LangGraph (Python 3.11) │
│ - 5-node agent workflow (StateGraph) │
│ - Vertex AI Model Garden integration (MedGemma) │
│ - Async/await throughout for non-blocking I/O │
│ - LangSmith tracing for observability (optional) │
│ - Deployed on Render.com (free tier) │
└────────────────────┬────────────────────────────────────────┘
│ gRPC calls
┌────────────────────▼────────────────────────────────────────┐
│ Model Serving: Vertex AI Model Garden │
│ - Base MedGemma 1.5 4B (Google-hosted) │
│ - Serverless auto-scaling endpoints │
│ - Chat completions API with base64 image support │
└─────────────────────────────────────────────────────────────┘
Frontend (React Native iOS):
- React Native 0.76.6
- TypeScript 5.0+
- AsyncStorage for local persistence
- React Navigation (bottom tabs + nested stacks)
- react-native-image-picker + react-native-compressor
- Axios for API calls
Backend (FastAPI + LangGraph):
- Python 3.11
- FastAPI 0.110 (async web framework)
- LangGraph (StateGraph for agent orchestration)
- LangChain for observability (LangSmith)
- Vertex AI Python SDK
- Pillow for image quality checks
Deployment:
- Backend: Render.com free tier (auto-deploy from Git)
- Model: Vertex AI Model Garden (Google-managed)
- Frontend: iOS Simulator (local development)
Hierarchical persistence structure using AsyncStorage:
Spot {
id: string
name: string
bodyPart: string
location: string
createdAt: Date
photos: Photo[]
comparisons: ComparisonRecord[] // NEW in Phase 8.1
}
Photo {
id: string
uri: string
timestamp: Date
analysis: AnalysisResult
}
ComparisonRecord {
id: string
olderPhotoId: string
newerPhotoId: string
comparedAt: Date
analysis: AnalysisResult
}
AnalysisResult {
confidence: number
features: ABCDEFeature[]
recommendations: string[]
urgencyLevel: 'monitor' | 'schedule' | 'seek-care'
changeDetected?: boolean
timelineNarrative?: string
}1. Comparison Caching (Phase 8.1):
- Problem: Temporal analysis takes ~30 seconds per request
- Solution: AsyncStorage cache keyed by photo pair IDs
- Impact: 99.7% latency reduction (30s → <100ms for repeat views)
- Cache hit rate: ~90% based on typical usage patterns
2. Cold Start Handling:
- Render.com free tier spins down after 15 minutes idle
- First request: 30-60s (container spin-up + agent compilation)
- 180s timeout configuration for graceful handling
- UI feedback: "Backend warming up, please wait..."
3. Offline Capability:
- All analysis results persist locally (AsyncStorage)
- Complete photo history available without network
- Cached comparisons viewable offline
- Export functionality works offline (uses local data)
- MedGemma 1.5 4B analyzes photos using ABCDE framework
- Describes observable features (asymmetry, border, color, diameter, evolution)
- Educational context explaining what each feature means
- Confidence scoring with abstention handling (refuses if uncertain)
- Acknowledges skin tone bias limitations
- Clear disclaimer: educational, not diagnostic
- Save and organize multiple tracked spots
- Photo timeline with timestamps and change indicators
- Automatic change detection between sequential photos
- Side-by-side comparison view with agent analysis
- Body diagram for visual organization
- Timeline narrative describing evolution over time
- Progressive step indicators during analysis
- Expandable accordion showing routing path
- Node-by-node execution visibility
- Real-time status updates ("quality check", "analyzing changes", "synthesizing")
- LangSmith traces for debugging (optional)
Three-level system with plain language next steps:
- Monitor: Continue tracking, no immediate concern
- Schedule checkup: Worth professional evaluation soon
- Seek care soon: Changes warrant prompt assessment
No medical jargon, no probabilities that create anxiety—just clear action items.
- Inline calm safety messages throughout app
- Educational framing, not diagnostic claims
- Required onboarding with acknowledgment checkbox
- Export reports for dermatologist consultations
- Self-service urgency checker in Learn tab
- Comprehensive disclaimers with skin tone bias acknowledgment
The app uses carefully crafted prompts that instruct MedGemma to:
- Describe features using ABCDE dermatology framework
- Provide plain-language educational context
- Acknowledge limitations and uncertainty
- Focus on teaching users what dermatologists look for
User Action: Compares two photos of a mole taken 6 weeks apart
Agent Workflow:
- Router: Initializes state, triggers quality check
- Quality Check: Validates both images (1920x1080, good quality)
- Vision Analysis: MedGemma analyzes temporal changes
- Detects: "Border shows increased irregularity"
- Confidence: 0.72 (medium)
- Change detected: True
- Knowledge Base: Retrieves ABCDE content for "Border" feature
- Synthesis: Generates response:
- Timeline narrative: "Over 6 weeks, the border has become more irregular"
- Educational context: "Border irregularity can indicate..."
- Urgency: "Schedule checkup"
- Recommendations: "Document with photos, show to dermatologist"
- Validation: Ensures disclaimers present, returns to user
User sees:
- Progressive steps: ✓ Quality → ✓ Analysis → ✓ Knowledge → ✓ Synthesis
- Timeline narrative explaining changes
- ABCDE educational content
- Urgency guidance: "Schedule checkup"
- 💾 Cache badge (instant reload next time)
MedGemma's confidence scoring enables the app to refuse assessment when:
- Image quality is poor (resolution, blur, lighting)
- Features are ambiguous or unclear
- Confidence score drops below 60%
- Skin tone makes analysis unreliable
This creates trust by acknowledging limitations rather than providing potentially misleading information.
The app explicitly surfaces that:
- Medical imaging datasets have historically underrepresented darker skin tones
- Analysis may be less reliable for certain skin types
- Professional evaluation is essential regardless of AI analysis
DermaCheck empowers users to:
- Monitor with confidence: Track changes systematically without anxiety
- Make informed decisions: Know when to seek professional care vs. when to simply monitor
- Communicate effectively: Share organized photo timelines with dermatologists
- Catch changes early: Consistent tracking enables detection of gradual evolution
- Understand dermatology: Learn ABCDE framework through educational content
- Anyone monitoring moles, spots, or skin changes
- People with family history of skin conditions
- Individuals with high sun exposure (outdoor workers, athletes)
- Those facing long dermatology wait times (average 32 days)
- Users who experience health anxiety about skin changes
- Proactive Monitoring: User with many moles tracks 3-4 of concern monthly
- Pre-Appointment Prep: Patient brings 3-month photo timeline to dermatology visit
- Anxiety Reduction: Worried user gets calm guidance that features are stable
- Early Detection: Gradual changes detected over 8 weeks prompt professional evaluation
- Reduces barrier of "not knowing when to seek care"
- Helps bridge long wait times with systematic monitoring
- Empowers users with health education (dermatology is resource-intensive)
- Export feature improves quality of limited specialist appointments
- Free educational tool (no subscription, no cost barrier)
Criteria Met:
✅ Multi-Agent Orchestration - 5-node LangGraph StateGraph with conditional routing ✅ Intelligent Routing - Multi-factor decisions (quality, change detection, urgency) ✅ Tool Integration - HuggingFace Inference API, keyword knowledge base, Pillow ✅ State Management - TypedDict with Annotated reducers for parallel safety ✅ Observability - LangSmith tracing + progressive UI visualization ✅ Production Deployment - FastAPI on Render.com with Vertex AI integration
Innovation Highlights:
- Conditional routing saves costs by skipping expensive API calls for quality issues
- Educational knowledge base triggered only when needed (efficiency)
- Graceful degradation with error routing paths
- Transparent AI with routing path visibility
Research Contributions:
✅ Novel Task Identification - Temporal skin lesion change detection ✅ Synthetic Dataset Creation - 900 pairs from HAM10000 (published on HuggingFace) ✅ Memory-Efficient Training - LoRA on Kaggle P100 (reproducible on free tier) ✅ Baseline Evaluation - Base MedGemma F1: 0.8797 demonstrates strong temporal reasoning ✅ Public Artifacts - Dataset, training notebook, documentation
Note: Production deployment uses base MedGemma (sufficient performance), but research demonstrates novel task feasibility.
Model Use (20%):
- MedGemma 1.5 4B integrated via Vertex AI Model Garden
- ABCDE dermatological framework leverages medical knowledge
- Multi-step agent workflow orchestrates MedGemma calls
Impact (15%):
- Addresses skin cancer monitoring access barriers (cost, geography, anxiety)
- Educational approach reduces healthcare anxiety
- Empowers informed doctor consultations
Execution (30%):
- Production-ready iOS app with deployed backend
- 99.7% latency reduction through intelligent caching
- Complete documentation and reproducible architecture
- 100% UAT pass rate (8/8 critical user flows)
Problem Importance (15%):
- Skin cancer is most common cancer in US (5M+ cases/year)
- Early detection improves 5-year survival from 27% to 99%
- 60% of US counties lack dermatologists (access crisis)
Technical Feasibility (20%):
- Complete source code and setup instructions
- Public training notebook and dataset (HuggingFace)
- One-click backend deployment (Render.com)
- Reproducible on free tier infrastructure (Kaggle P100)
- macOS with Xcode 15+ installed
- Node.js 18+
- React Native CLI
- CocoaPods
- iOS Simulator or physical iOS device
# Clone repository
git clone https://github.com/dunktra/DermaCheck.git
cd DermaCheck
# Install dependencies
npm install
# Install iOS pods
cd ios && pod install && cd ..The app connects to a deployed backend on Render.com by default. No configuration needed for testing.
For local backend development (optional):
Create backend/.env file:
GOOGLE_CLOUD_PROJECT_ID=your-gcp-project-id
GOOGLE_CLOUD_LOCATION=us-central1
MEDGEMMA_ENDPOINT_ID=your-endpoint-id
MEDGEMMA_ENDPOINT_DNS=your-endpoint-dns
PORT=8000See backend/.env.example for full configuration options.
npm run iosThe app will launch in iOS Simulator with pre-loaded sample data ready to explore.
Onboarding Flow (first launch):
- Three onboarding screens with educational messaging
- Required acknowledgment checkbox on final screen
- Sample data loads automatically after completion
Analyze Tab:
- Take photo or select from gallery
- Automatic compression and quality check
- Analyze button triggers agent workflow
- Progressive step indicators show agent execution
- Results display with ABCDE features, confidence, urgency guidance
- Option to save to new spot or add to existing spot
Track Tab:
- View all tracked spots (3 pre-loaded in demo)
- Tap spot to view timeline
- Photos sorted chronologically with change indicators
- Click any photo to view its analysis
- Compare button shows agent workflow for temporal analysis
- 💾 badge indicates cached comparison (instant load)
- Export report button shares formatted summary
Comparison View:
- Select two photos to compare
- Agent workflow executes (or loads from cache)
- Side-by-side photo display
- Timeline narrative describing changes
- Educational ABCDE content for detected changes
- Recommendations with urgency level
Learn Tab:
- Self-service urgency checker (3 questions)
- Educational content about ABCDE features
- Link to replay onboarding
Settings Tab:
- Data management (load sample data, clear all data)
- Spot count display
- App information
Development Period: January 19-31, 2026 (2 weeks)
Duration: 3 days intensive development Plans Executed: 10 (8 feature + 2 fix) Focus: Foundation, photo analysis, tracking, safety features
Phase Breakdown:
- Foundation & Setup (2 plans): Development environment, navigation structure
- Photo Capture & MedGemma Integration (3 plans): Camera, API integration, results display
- Tracking & Educational Content (3 plans): Timeline, comparison, educational content
- Safety & Polish (2 plans): Onboarding, urgency guidance, sample data
Duration: 9 days Plans Executed: 17 Focus: Agentic workflows, fine-tuning research, UX enhancements, deployment
Phase Breakdown: 5. Fine-Tuning Foundation (4 plans): Dataset creation, LoRA training, evaluation 6. Agentic Backend (4 plans): LangGraph implementation, tool integration, testing 7. Enhanced Frontend (deferred): Agent visualization planned 8. Integration & Evaluation (3 plans): Backend deployment, frontend integration, testing 8.1. UX Architecture Investigation (4 plans, INSERTED): Markdown rendering, agent visualization, workflow continuity, comparison caching 9. Documentation & Submission (in progress): Demo script, technical writeup, submission prep
Total Development:
- Time: 2 weeks (3 days v1.0 + 9 days v2.0)
- Plans Executed: 27 total
- Lines of Code: ~8,000 (TypeScript/TSX + Python)
App includes 3 realistic sample spots for immediate exploration:
-
Left Shoulder Mole (6 weeks, 3 photos)
- Shows gradual diameter increase
- Demonstrates "schedule checkup" urgency
- Educational context about evolution over time
-
Back of Hand Freckle (8 weeks, 4 photos)
- Shows stable features (no significant changes)
- Demonstrates "monitor" urgency
- Reassuring consistency in timeline
-
Upper Back Spot (2 weeks, 2 photos)
- Shows border irregularity change
- Demonstrates "seek care soon" urgency
- Educational context about concerning features
Sample data enables judges to immediately explore:
- Timeline visualization
- Side-by-side comparison
- Agent workflow execution
- Comparison caching (first view: 30s, repeat: <100ms)
- Export functionality
- All three urgency levels
Enhanced Frontend (Phase 7 - Deferred):
- Richer agent visualization with tool call details
- Interactive ABCDE feature exploration
- Body diagram improvements (3D model, photo annotations)
Additional Features:
- Reminder notifications for regular photo check-ins
- Expanded content library (more articles, video tutorials)
- Apple Health integration for photo continuity
- Multi-language support (Spanish, Chinese, French)
- Android version (Google Play Store)
- Accessibility improvements (VoiceOver optimization)
Advanced Research:
- Deploy fine-tuned MedGemma model (if F1 improvement validates)
- On-device AI with TensorFlow Lite (offline analysis)
- Trend analysis with statistical insights
- Telemedicine integration (direct dermatology platform sharing)
Developer: [Your name] Role: Full-stack developer (React Native + Python/FastAPI) Contact: [Your email] GitHub: https://github.com/dunktra/DermaCheck
Background: Solo developer building production-ready agentic AI application for healthcare. Demonstrates end-to-end capabilities: iOS development, backend engineering, LangGraph orchestration, AI integration, and UX design.
[To be determined - recommend MIT or Apache 2.0 for open source]
- MedGemma Team at Google for the medical AI model
- LangChain/LangGraph for agentic workflow framework
- React Native Community for excellent documentation and libraries
- DermNet NZ for educational dermatology content references
- American Academy of Dermatology for ABCDE framework guidance
- N. K. Trinh (†2006) - In loving memory
State Graph Definition:
from langgraph.graph import StateGraph, END
# Define workflow
workflow = StateGraph(AgentState)
# Add nodes
workflow.add_node("router", router_node)
workflow.add_node("quality_check", quality_check_node)
workflow.add_node("vision_analysis", vision_analysis_node)
workflow.add_node("knowledge_base", knowledge_base_node)
workflow.add_node("synthesis", synthesis_node)
workflow.add_node("validation", validation_node)
# Add edges
workflow.set_entry_point("router")
workflow.add_edge("router", "quality_check")
# Conditional routing from quality_check
workflow.add_conditional_edges(
"quality_check",
lambda state: "validation" if state.get("error_message") else "vision_analysis"
)
# Conditional routing from vision_analysis
workflow.add_conditional_edges(
"vision_analysis",
lambda state: "knowledge_base" if state.get("change_detected") else "synthesis"
)
workflow.add_edge("knowledge_base", "synthesis")
workflow.add_edge("synthesis", "validation")
workflow.add_edge("validation", END)
# Compile
agent = workflow.compile()Vertex AI Model Garden Endpoint:
from google.cloud import aiplatform
from vertexai.generative_models import GenerativeModel
# Initialize Vertex AI
aiplatform.init(project=PROJECT_ID, location=LOCATION)
# Load MedGemma model
model = GenerativeModel("medgemma-1.5-4b")
# Generate response
response = model.generate_content([
{
"role": "user",
"parts": [
{"inline_data": {"mime_type": "image/jpeg", "data": base64_image_before}},
{"inline_data": {"mime_type": "image/jpeg", "data": base64_image_after}},
{"text": prompt}
]
}
])Agent Execution Time:
- Quality check: <100ms
- Vision analysis (MedGemma): 15-25s
- Knowledge retrieval: <50ms
- Synthesis: <100ms
- Total (full pipeline): 15-30 seconds
Comparison Caching:
- First analysis: 30s (runs full agent workflow)
- Cached analysis: <100ms (AsyncStorage lookup)
- Latency reduction: 99.7%
- Cache storage: ~5KB per comparison (JSON)
- Cache hit rate: ~90% (typical user behavior)
Backend Deployment:
- Cold start (free tier): 30-60s (container spin-up)
- Warm requests: 15-30s (agent execution only)
- Memory usage: ~400MB (well within 512MB free tier limit)
Q: Is this a medical device? A: No. DermaCheck is explicitly an educational tool, not a medical device. It cannot diagnose conditions or replace professional medical evaluation.
Q: How accurate is the AI analysis? A: MedGemma provides educational descriptions of observable features. Accuracy varies based on image quality and skin tone. The app includes confidence scoring and abstention handling to avoid misleading guidance.
Q: Does the app work offline? A: Partially. Stored spots, photos, and cached analyses are accessible offline. New analysis requires internet connection to call the backend agent (MedGemma via Vertex AI).
Q: What happens to my photos? A: Photos are stored locally on your device using AsyncStorage. They are only transmitted to the backend (Render.com → Vertex AI) for analysis via encrypted HTTPS. Google does not store photos beyond processing.
Q: Can I trust the urgency guidance? A: Urgency guidance is educational, not diagnostic. It's based on keyword analysis of MedGemma's observations. Always consult a dermatologist for actual medical decisions.
Q: Why use base MedGemma instead of the fine-tuned model? A: Base MedGemma achieved strong performance (F1 0.8797) on temporal change detection. For an educational application, this baseline is sufficient. Fine-tuning research validates the approach but isn't required for production.
Q: Is my data HIPAA compliant? A: DermaCheck does not store Protected Health Information (PHI) as defined by HIPAA. It's a consumer educational app, not a healthcare provider system.
Q: What is the agent workflow doing? A: The agent orchestrates multiple steps: quality checks, MedGemma vision analysis, educational content retrieval, synthesis, and validation. Each step is visible in the UI with progressive indicators.
DermaCheck demonstrates how agentic AI workflows can transform medical AI into calm, practical guidance for everyday health decisions. By combining:
- Multi-agent orchestration (5-node LangGraph with conditional routing)
- Production deployment (FastAPI on Render.com, Vertex AI Model Garden)
- Educational framing (ABCDE framework, plain-language guidance)
- Safety-first design (disclaimers, abstention, bias acknowledgment)
- Real-world utility (temporal tracking, export, anxiety reduction)
The app addresses genuine barriers to skin cancer monitoring while maintaining appropriate clinical boundaries.
Competition Submission Highlights:
✅ Agentic Workflow Prize - Production-ready multi-agent system with intelligent routing ✅ Novel Task Research - Temporal change detection dataset + fine-tuning methodology ✅ Technical Excellence - Complete iOS app + FastAPI backend + LangGraph orchestration ✅ Real-World Impact - Addresses access barriers, reduces anxiety, empowers monitoring ✅ Open Source - Public code, dataset, documentation for reproducibility
Built with dedication to N. K. Trinh (†2006) and all those affected by skin cancer.
Thank you for considering DermaCheck for the MedGemma Impact Challenge.
Last Updated: 2026-01-31 Submission Date: February 2026 Repository: https://github.com/dunktra/DermaCheck