Inspiration
Each year, thousands of species go extinct and thousands more become endangered, and many of these endangered species live in the ocean. Researchers are developing new ways to protect them. One approach is recording underwater footage from a submarine to check an area for endangered species. However, combing through hundreds of hours of footage by hand is tedious. SEA is our solution.
What it does
Our project, Save Endangered Animals, or SEA for short, is a convolutional neural network trained to detect endangered species in an image. As of right now, it can identify four endangered or vulnerable marine species: the Sea Otter, the Hawksbill Sea Turtle, the Walrus, and Hector's Dolphin.
How we built it
We used the VGG16 convolutional neural network architecture, which we chose because it placed among the top entries in the 2014 ImageNet (ILSVRC) competition. The Python libraries we used were TensorFlow, Keras, and OpenCV. The graph shows training loss and accuracy as we trained our model: training accuracy fluctuates around the same level as validation accuracy, which is a good sign that the model is not overfitting. The VGG16 architecture consists of stacked convolution and max-pooling layers, followed by fully connected layers. We built the datasets ourselves, using the Bing Image Search API to quickly download hundreds of labeled images.
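A minimal Keras sketch of this setup looks like the following. The classification head (256 units, 0.5 dropout) and the optimizer settings are illustrative assumptions, not our exact training configuration, and `weights=None` is used here only to avoid the large pretrained-weights download; in practice you would pass `weights="imagenet"`.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # Sea Otter, Hawksbill Sea Turtle, Walrus, Hector's Dolphin

# VGG16 convolutional base (the stacked conv + max-pooling blocks).
base = tf.keras.applications.VGG16(
    weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional layers

# Fully connected classification head on top of the conv base.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the base and training only the head is a common transfer-learning pattern that keeps training fast on a small dataset like ours.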
Challenges we ran into
We underestimated how much training data we would need. We first built labeled datasets by searching for just the animals' names, but we quickly realized that for marine animals we also needed underwater pictures, so we downloaded more images. In the end, we had a reasonably accurate model that detects the four endangered animals we trained it on.
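The download step can be sketched against the Bing Image Search v7 API. The `BING_SEARCH_KEY` environment variable, the `imageType` filter, and the page sizes here are illustrative assumptions:

```python
import os

import requests

# Endpoint and header name follow the Bing Image Search v7 API.
ENDPOINT = "https://api.bing.microsoft.com/v7.0/images/search"


def build_request(query, count=50, offset=0):
    """Headers and query parameters for one page of Bing image results."""
    headers = {"Ocp-Apim-Subscription-Key": os.environ.get("BING_SEARCH_KEY", "")}
    params = {"q": query, "count": count, "offset": offset, "imageType": "Photo"}
    return headers, params


def search_image_urls(query, pages=4, per_page=50):
    """Collect image content URLs across several pages of results."""
    urls = []
    for page in range(pages):
        headers, params = build_request(query, per_page, page * per_page)
        resp = requests.get(ENDPOINT, headers=headers, params=params, timeout=10)
        resp.raise_for_status()
        urls.extend(item["contentUrl"] for item in resp.json().get("value", []))
    return urls
```

Queries like `"sea otter underwater"` alongside the plain species name are what gave us the underwater shots the model needed.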
Accomplishments that we're proud of
This was our first time building a convolutional neural network, and while it seemed intimidating at first, it ended up being a very rewarding process. Watching our model train for 100 epochs and then return accurate predictions was satisfying.
What we learned
We learned how to use Flask to deploy an image-classification machine learning model to the web. We also learned about the code behind building a CNN.
What's next for SEA
We hope to launch SEA as an app or program that researchers can run on large sets of images extracted from video footage. Although we only had a limited amount of time to collect our dataset and train, with more data it should be possible to extend our model to detect even more endangered marine species, such as the Vaquita and the Narwhal.
