DermaNet: Fully Local AI-Powered Skin Cancer Screening
Inspiration
Our inspiration for DermaNet began when we found a research article on PubMed:
https://pubmed.ncbi.nlm.nih.gov/39958023/
The article highlighted a critical and growing issue: skin cancer rates are rising, particularly in rural areas of the United States. These communities face:
- Long travel distances to dermatologists
- Provider shortages
- Delayed appointments
- Increased reliance on primary care providers for screening
- Later-stage detection and higher disease burden
The problem can be conceptualized as an access function:
$$ \text{Detection Risk} \propto f(\text{Travel Distance}, \text{Appointment Delay}, \text{Specialist Availability}) $$
As travel distance and wait time increase, the likelihood of early detection falls. Since survival probability depends heavily on stage at detection:
$$ \text{Survival Probability} \approx f(\text{Stage at Detection}) $$
Delayed screening directly impacts patient outcomes.
This research made us ask:
What if dermatology-level screening could happen instantly, locally, and without internet access?
That question became DermaNet.
What We Built
DermaNet is a fully local, AI-powered skin screening system designed to run entirely offline.
It bridges the dermatology access gap by providing:
- Image-based lesion classification
- A conversational AI explanation interface
- An intuitive body-part selection system
- Complete offline functionality
- Privacy-preserving on-device inference
No cloud.
No external APIs.
No internet dependency.
System Architecture
DermaNet consists of three main components:
- Frontend Service
- Backend Service
- On-Device AI Models (Classifier + LLM)
Frontend: Interactive Body-Based Screening
We built a frontend interface where users can:
- Click on a body diagram to select a region
- Upload one or more images of a skin lesion
- Receive:
  - A classification result
  - A conversational explanation through a chat interface
The design prioritizes:
- Accessibility
- Ease of use
- Clinical clarity
- Deployment in low-connectivity environments
The frontend sends requests to a local backend service running on the same device.
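Concretely, that request can be a plain JSON POST to localhost. Here is a minimal sketch of how the frontend's payload might be assembled; the field names are illustrative assumptions, not our exact API:

```python
import base64
import json

def build_screen_request(image_bytes: bytes, body_part: str) -> str:
    """Assemble the JSON body the frontend posts to the local backend.
    Field names ("body_part", "image_b64") are illustrative assumptions."""
    return json.dumps({
        "body_part": body_part,
        # Images are base64-encoded so they travel safely inside JSON.
        "image_b64": base64.b64encode(image_bytes).decode("ascii"),
    })
```

Because the backend runs on the same device, this request never leaves localhost.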
Backend: AI Orchestration Layer
The backend is responsible for:
- Image preprocessing
- Routing images to the skin lesion classifier
- Receiving structured diagnostic probabilities
- Passing structured output to a locally hosted LLM
- Returning an explanation to the frontend
The inference pipeline looks like:
$$ \text{Image} \rightarrow \text{Classifier} \rightarrow \text{Probability Vector} \rightarrow \text{LLM Explanation} \rightarrow \text{Frontend Chat} $$
The classifier outputs probabilities such as:
$$ P(\text{Melanoma}), \quad P(\text{Nevus}), \quad P(\text{Benign Keratosis}), \dots $$
The LLM then converts this structured output into:
- Patient-friendly explanations
- Risk interpretation
- Suggested next steps
All of this happens entirely on-device.
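The pipeline above can be sketched end-to-end in a few lines. The `classify` and `llm_explain` functions below are hypothetical stubs standing in for the real on-device classifier and LLM runtimes:

```python
def classify(image_bytes: bytes) -> dict[str, float]:
    """Stub for the on-device lesion classifier: returns a probability vector."""
    return {"Melanoma": 0.08, "Nevus": 0.81, "Benign Keratosis": 0.11}

def llm_explain(prompt: str) -> str:
    """Stub for the locally hosted LLM: turns a structured prompt into prose."""
    return "This is most consistent with a benign nevus; monitor for changes."

def screen(image_bytes: bytes, body_part: str) -> dict:
    """Local pipeline: image -> probability vector -> LLM explanation."""
    probs = classify(image_bytes)
    top = max(probs, key=probs.get)
    prompt = (f"Lesion on {body_part}; probabilities {probs}. "
              f"Explain why '{top}' is most likely and suggest next steps.")
    return {"probabilities": probs,
            "top_class": top,
            "explanation": llm_explain(prompt)}
```

Nothing in this flow touches the network; swapping in the real models is a matter of replacing the two stubs.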
How We Built It
1. Local Classifier Integration
We deployed a skin lesion classifier locally, optimized for on-device inference. This required:
- Model packaging for Qualcomm hardware
- Format conversion
- Memory optimization
- Inference pipeline tuning
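As one concrete example of the memory-optimization step, post-training int8 quantization shrinks a float32 weight tensor roughly 4x. This is a generic sketch of the technique, not our exact packaging flow for Qualcomm hardware:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: store one float scale plus
    int8 weights, cutting memory use roughly 4x versus float32."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights at inference time."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
# Rounding error is bounded by half a quantization step (scale / 2).
err = np.abs(dequantize(q, s) - w).max()
```

The same memory/accuracy trade-off is what makes running both a classifier and an LLM on one edge device feasible at all.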
2. Local LLM Integration
We hosted a lightweight LLM locally on the backend. Its role is not classification, but explanation.
The LLM receives structured outputs and generates:
- Clear diagnostic summaries
- Contextual explanations
- Follow-up guidance
The key architectural idea was separation of responsibilities:
$$ \text{Classifier} = \text{Prediction} $$
$$ \text{LLM} = \text{Explanation} $$
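That separation shows up directly at the prompt boundary: the classifier's probability vector is serialized into a structured prompt for the LLM, which never sees the raw image. A hedged sketch, where label names and prompt wording are illustrative:

```python
def build_explanation_prompt(probs: dict[str, float], body_part: str) -> str:
    """Turn the classifier's probability vector into a structured prompt
    for the local LLM. Labels and phrasing here are illustrative."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    top, _ = ranked[0]
    lines = "\n".join(f"- {name}: {p:.1%}" for name, p in ranked)
    return (
        "You are assisting with skin lesion screening. A lesion on the "
        f"patient's {body_part} was classified with these probabilities:\n"
        f"{lines}\n"
        f"Explain in plain language why '{top}' is the most likely finding "
        "and suggest next steps. Do not present this as a diagnosis."
    )
```

Keeping the LLM downstream of a structured, numeric interface means it can reword and contextualize, but never invent a prediction.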
3. Fully Offline Execution
A strict design constraint was:
$$ \text{Internet Access} = 0 $$
This meant:
- No cloud fallback
- No remote inference
- No external API calls
- No telemetry
Everything — frontend, backend, classifier, and LLM — runs locally on the device.
Challenges We Faced
1. Qualcomm Hardware Architecture Constraints
Targeting a Qualcomm-based system introduced significant complexity:
- NPU compatibility issues
- Tensor format mismatches
- Memory limitations
- Execution pipeline restrictions
We had to carefully align model formats and runtime expectations to match the hardware architecture.
2. Nexa AI SDK and Genie SDK Integration Failures
One of our biggest technical challenges was attempting to integrate:
- Nexa AI SDK
- Genie SDK
Each SDK worked in isolation. However, when we combined them in the same local pipeline:
- Model packaging formats conflicted
- Runtime dependencies differed
- Execution environments were incompatible
- Integration pipelines failed
This forced us to:
- Re-evaluate model export formats
- Modify inference runtimes
- Debug low-level execution errors
- Rethink orchestration between components
Integration became significantly harder than model development.
3. Fully Local Constraint Increased Complexity
Because we required fully offline operation, we could not:
- Use cloud-hosted LLM APIs
- Offload computation
- Rely on managed inference services
Every optimization and integration problem had to be solved locally.
This dramatically increased engineering complexity.
What We Learned
Healthcare Access Is a Systems Problem
Healthcare inequity can be modeled as:
$$ \text{Access Gap} = \text{Patient Need} - \text{Specialist Availability} $$
AI cannot replace dermatologists. However, it can reduce this access gap by:
- Empowering primary care providers
- Enabling early screening
- Providing immediate triage insight
Integration Is Harder Than Training
The hardest part of DermaNet was not training a classifier.
It was:
- Hardware compatibility
- SDK interoperability
- Runtime integration
- Offline deployment
Real-world AI systems fail at the seams — not in the models.
Why DermaNet Matters
In rural America:
- Dermatology specialists are scarce
- Travel distances are long
- Detection is delayed
Because early-stage detection dramatically improves outcomes:
$$ \text{Early Detection} \Rightarrow \text{Higher Survival Probability} $$
DermaNet reduces:
- Travel burden
- Appointment delays
- Diagnostic uncertainty
It provides:
- Fast screening
- Local inference
- Privacy preservation
- Accessibility anywhere
All without requiring internet access.
Final Reflection
DermaNet began with a research paper and a simple question about access.
It became:
- A fully local AI diagnostic system
- A frontend-to-backend medical inference pipeline
- A hardware-constrained engineering challenge
- A lesson in real-world AI deployment
The most difficult part was not building intelligence.
It was making everything work together — locally, reliably, and responsibly.
DermaNet represents our belief that:
AI should reduce healthcare inequity — not depend on high-speed internet to function.