Inspiration
COVID-19 has disrupted society and changed the way we live, learn, work, and play. I have experienced this firsthand: I was laid off from my job at the university, switched to fully virtual courses, and moved my interactions with friends and family to a digital format.
My university, North Carolina State University, briefly opened for in-person classes at the start of August but was quickly forced back online due to a sharp increase in COVID-19 cases and clusters on campus. During that short period, I was baffled by how many people (a small but noticeable minority) did not wear masks, and some even refused to. I wonder what life would be like if we had more actionable data and tools to monitor and remedy this.
What it does
This tool turns any device with a camera and web browser into a face-covering monitoring agent. Agents continuously watch a specific area and report to the backend whenever they detect a given number of faces. The backend takes the images of faces from the agents and collects data on how many people are wearing face coverings properly. The aggregate data can be used by policymakers and the public to guide decisions on enforcement and awareness efforts.
How I built it
In this project, agents are devices with browsers and cameras that send images to the backend when a person is detected. Work is done on the client side to ensure that only images containing people are sent (the backend can get expensive, as each image incurs a small charge). Once a person is detected, the client base64-encodes the image and sends that string to the backend. The backend listens for images and processes each one for data about how many people are in it, how many are wearing face coverings improperly, and how many are not wearing any at all. Only aggregate data is collected, to remove the possibility of bias.
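The client-side step could be sketched as below. The `AgentReport` shape, field names, and hand-rolled base64 encoder are my own illustrative assumptions, not the project's actual API (a real client would use the browser's `btoa` or `canvas.toDataURL`):

```typescript
// Hypothetical payload an agent sends to the backend.
interface AgentReport {
  agentId: string;
  capturedAt: string; // ISO timestamp
  imageBase64: string;
}

const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Minimal base64 encoder so this sketch is self-contained.
function toBase64(bytes: Uint8Array): string {
  let out = "";
  for (let i = 0; i < bytes.length; i += 3) {
    const b0 = bytes[i];
    const b1 = i + 1 < bytes.length ? bytes[i + 1] : 0;
    const b2 = i + 2 < bytes.length ? bytes[i + 2] : 0;
    out += B64[b0 >> 2];
    out += B64[((b0 & 3) << 4) | (b1 >> 4)];
    out += i + 1 < bytes.length ? B64[((b1 & 15) << 2) | (b2 >> 6)] : "=";
    out += i + 2 < bytes.length ? B64[b2 & 63] : "=";
  }
  return out;
}

// Only build a report when the local detector found at least one face,
// so the backend (which charges per image) never sees empty frames.
function buildReport(
  agentId: string,
  faceCount: number,
  imageBytes: Uint8Array,
): AgentReport | null {
  if (faceCount === 0) return null; // skip frames with no people in them
  return {
    agentId,
    capturedAt: new Date().toISOString(),
    imageBase64: toBase64(imageBytes),
  };
}
```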
The backend is hosted on AWS as a serverless microservice that I developed using Go, Lambda, API Gateway, and DynamoDB. The backend processes the image data and sends it through AWS Rekognition using the brand-new feature that detects PPE usage (literally announced hours before HackGT began). Unfortunately, I ran out of time before I could fully use Rekognition, but I hope to develop this further after the event ends.
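As a sketch of how the backend might tally mask usage, here is a pure function over a simplified version of a Rekognition `DetectProtectiveEquipment`-style response. The interface below is reduced to just the fields this sketch needs, and the three categories mirror the aggregate counts described above:

```typescript
// Simplified response shape (only the fields used here).
interface PPEResponse {
  Persons: {
    BodyParts: {
      Name: string; // e.g. "FACE"
      EquipmentDetections: {
        Type: string; // e.g. "FACE_COVER"
        CoversBodyPart: { Value: boolean };
      }[];
    }[];
  }[];
}

// Aggregate counts only; no per-person data is retained.
interface MaskSummary {
  total: number;    // people detected in the image
  proper: number;   // face cover detected and covering the face
  improper: number; // face cover detected but not covering the face
  none: number;     // no face cover detected at all
}

function summarizeMasks(resp: PPEResponse): MaskSummary {
  const summary: MaskSummary = {
    total: resp.Persons.length,
    proper: 0,
    improper: 0,
    none: 0,
  };
  for (const person of resp.Persons) {
    const face = person.BodyParts.find((part) => part.Name === "FACE");
    const covers =
      face?.EquipmentDetections.filter((d) => d.Type === "FACE_COVER") ?? [];
    if (covers.length === 0) summary.none++;
    else if (covers.some((d) => d.CoversBodyPart.Value)) summary.proper++;
    else summary.improper++;
  }
  return summary;
}
```

Keeping this as a pure function over the response object makes the aggregation easy to unit-test without calling Rekognition itself.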
The entire project (with the exception of the front end, which is hosted on GitHub Pages) is easily deployable using AWS CDK, a free infrastructure-as-code tool that allows infrastructure to be defined using a programming language rather than a template.
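A minimal CDK (v2, TypeScript) stack wiring the pieces together might look like the following. The construct IDs, table keys, and asset path are illustrative guesses, not taken from the actual project:

```typescript
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as apigateway from "aws-cdk-lib/aws-apigateway";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";

export class FaceCovStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Table for aggregate counts, keyed by agent and capture time.
    const table = new dynamodb.Table(this, "Reports", {
      partitionKey: { name: "agentId", type: dynamodb.AttributeType.STRING },
      sortKey: { name: "capturedAt", type: dynamodb.AttributeType.STRING },
    });

    // Go handler that decodes the image and calls Rekognition.
    const handler = new lambda.Function(this, "ProcessImage", {
      runtime: lambda.Runtime.GO_1_X,
      handler: "main",
      code: lambda.Code.fromAsset("lambda"),
      environment: { TABLE_NAME: table.tableName },
    });
    table.grantWriteData(handler);

    // HTTP entry point the browser agents POST images to.
    new apigateway.LambdaRestApi(this, "AgentApi", { handler });
  }
}
```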
The front end is hosted on a GitHub Pages static website and uses face-api.js to detect when a face comes into view. Costs associated with streaming video (especially with Rekognition image processing) are extremely high, so I decided to have clients detect when a person comes into view rather than waste money and resources on images with no data in them. I also learned that the browser will only allow webcam usage when the site is served over HTTPS, so I had to switch from my original plan of hosting the front end in an S3 bucket to GitHub Pages.
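The cost-saving gating described above can be reduced to a small decision function: upload only when faces are present and enough time has passed since the last upload. The 5-second minimum interval here is a hypothetical tuning value, not something specified by the project:

```typescript
// Hypothetical minimum spacing between uploads from one agent.
const MIN_UPLOAD_INTERVAL_MS = 5_000;

// Decide whether the agent should upload this frame: only when the
// local detector found faces, and the last upload is old enough.
function shouldUpload(
  faceCount: number,
  lastUploadMs: number,
  nowMs: number,
): boolean {
  if (faceCount === 0) return false; // frame carries no data; never pay for it
  return nowMs - lastUploadMs >= MIN_UPLOAD_INTERVAL_MS;
}
```

In the browser, `faceCount` would come from the face-api.js detection result on each video frame.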
What's next for FaceCOVmonitor
- Fully implement the back end and provide support for multiple agents and tenants.
- Possibly find a cheaper way to process images by learning how to do that myself.
- Clean up the front end.
Try out the front end client here (it's unfortunately not hooked up to the backend due to time constraints): https://github.com/adchungcsc/FaceCOVmon
Built With
- amazon-web-services
- api-gateway
- cdk
- dynamodb
- githubpages
- html
- javascript
- lambda
- typescript