Inspiration
Melanoma is the deadliest form of skin cancer. Once it spreads, the five-year survival rate is only 17%; if the disease is caught before it spreads, the survival rate rises to 98%. A recent Nature paper demonstrated that convolutional neural networks can detect skin conditions with near-dermatologist accuracy, which convinced us that building an application to detect early-stage melanoma would be feasible. Our goal was to make this technology accessible to anyone with a smartphone. We foresee a particular benefit for people living in countries without a developed healthcare system.
What it does
We created a standalone iOS app that can be used as a diagnostic tool for melanoma. Melano.me uses a state-of-the-art convolutional neural network to estimate the probability that a photo contains a malignant tumor.
Melano.me is an offline solution, the first of its kind. Unlike most approaches, classification takes place directly on the device rather than on a server, so photos never leave the phone. This is especially important for patient privacy and HIPAA compliance.
How we built it
We obtained our training dataset from the public ISIC Archive. In total, we collected about 13,000 images and their binary masks for cropping purposes.
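The binary masks let each photo be cropped down to the lesion itself before training. A minimal numpy sketch of that cropping step (the function name and padding amount are illustrative, not the actual pipeline):

```python
import numpy as np

def crop_to_mask(image: np.ndarray, mask: np.ndarray, pad: int = 16) -> np.ndarray:
    """Crop an H x W x C image to the bounding box of a binary H x W mask,
    with `pad` pixels of context on each side (clamped to the image edges)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:  # empty mask: nothing to crop to, return the image unchanged
        return image
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + 1 + pad, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + 1 + pad, image.shape[1])
    return image[y0:y1, x0:x1]

# Example: a 100x100 image whose mask marks a lesion near the centre
image = np.random.rand(100, 100, 3)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 30:70] = 1
cropped = crop_to_mask(image, mask, pad=10)
print(cropped.shape)  # (40, 60, 3)
```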
To build our classifier, we fine-tuned Google’s Inception-v4 model, which was pretrained on the ImageNet dataset. We trained on an AWS Deep Learning AMI instance equipped with an NVIDIA V100 GPU, currently the fastest available, which let us train our model quickly.
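The fine-tuning recipe can be sketched in Keras. Note two assumptions in this sketch: Keras does not ship Inception-v4 itself, so the closely related `InceptionResNetV2` from `keras.applications` stands in for it here, and `weights=None` is used so the snippet runs without downloading anything; the real pipeline would pass `weights="imagenet"` to load the pretrained backbone.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2

# Backbone pretrained on ImageNet in the real pipeline (weights="imagenet");
# weights=None here keeps the sketch self-contained.
base = InceptionResNetV2(
    include_top=False, weights=None, input_shape=(299, 299, 3), pooling="avg"
)
base.trainable = False  # freeze the convolutional base for the first training phase

# Binary head: estimated probability that the photo contains a malignant tumor
model = models.Sequential([
    base,
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

After the new head converges, unfreezing the top of the backbone (`base.trainable = True`) with a lower learning rate is the usual second fine-tuning phase.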
Using Apple’s CoreML framework, we were able to achieve an average classification time of 0.3 seconds per frame on a continuous video stream running at 5 FPS.
Challenges we ran into
One of the biggest challenges we ran into was getting the app to recognize malignant tumors independently of the orientation of the device. To ensure this, we augmented the data with random rotations, brightness and contrast changes, zooms, shears, and other transformations, which grew our dataset to approximately 63,000 images. After retraining our network on the augmented dataset, the app successfully detected melanoma regardless of the rotation of the device.
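The augmentation step can be illustrated with plain numpy. The snippet below is a simplified subset of the transforms described above (right-angle rotations plus brightness jitter only), but it shows the same roughly fivefold expansion that took 13,000 images to about 63,000:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> list:
    """Return the original image plus four simple variants:
    three right-angle rotations and one brightness-jittered copy."""
    variants = [image]
    for k in (1, 2, 3):               # 90, 180, and 270 degree rotations
        variants.append(np.rot90(image, k))
    scale = rng.uniform(0.8, 1.2)     # random brightness change
    variants.append(np.clip(image * scale, 0.0, 1.0))
    return variants

rng = np.random.default_rng(0)
dataset = [np.random.rand(64, 64, 3) for _ in range(13)]
augmented = [v for img in dataset for v in augment(img, rng)]
print(len(dataset), "->", len(augmented))  # 13 -> 65, a ~5x expansion
```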
Accomplishments that we're proud of
Our model achieved 97.71% accuracy on our validation set.
What we learned
This was our first time using the Keras and CoreML libraries, and also our first time using Amazon Web Services for deep learning.
What's next for Melano.me
We want to deploy Melano.me to the App Store, initially targeting primary care physicians. We would also like to add an opt-in data collection feature, letting users anonymously contribute their data to medical research.
Built With
- amazon-web-services
- aws-deep-learning-ami
- coreml
- keras
- python
- swift
