Inspiration
Have you ever forgotten someone’s name after already meeting them? When we formed our team, most of us could at best recognise a teammate’s face but failed to recall the name. Asking people for basic information and making introductions over again gets tedious and repetitive. Remembering details is hard, and this stage of building a relationship with someone is, quite frankly, unnecessary friction. The Facial Recognition Utility (TFRU) aims to streamline this process and help people build deeper relationships with one another.
What it does
We have implemented an AR environment that returns a user-defined profile upon facial recognition. TFRU displays details about another person: their name and interests, as well as health information such as allergies, blood type (for emergency situations), and illnesses or disabilities that aren’t obvious at a glance, helping to build awareness of conditions that may be present in friends and colleagues.
When meeting this person again, you can view their interests and use one of them as a conversation topic. Forgot their name? A quick scan recalls it, sparing you the embarrassment of asking for their name again.
The user points their camera to scan and select the person they want information about. Everything TFRU returns is completely user defined: we only display what you choose to let others see. Regarding privacy concerns, all video data is used solely for the immediate identification of a TFRU user and is never stored.
How we built it
https://github.com/zikizheng/tfru We have a React frontend that passes data from the camera to the backend via Axios. The Flask backend uses an OpenCV library to detect the location and number of faces in the live video feed. The extracted coordinates are transmitted back to the frontend to be rendered for the user. Future implementations could transmit database references tied to user logins.
Challenges we ran into
Our main challenge was passing the face coordinates generated on the backend to the webpage for rendering; the complexity of the nested array-in-array data structure did not help.
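One common fix for this kind of problem (a hedged sketch; the function name here is illustrative, not from the repo) is to flatten the nested NumPy array OpenCV returns into a list of plain dicts before JSON-encoding it, since NumPy integer types are not JSON serialisable by default:

```python
import json
import numpy as np

def boxes_to_payload(boxes: np.ndarray) -> str:
    """Convert an Nx4 array of (x, y, w, h) face boxes into a JSON string
    the frontend can consume directly."""
    payload = [
        {"x": int(x), "y": int(y), "w": int(w), "h": int(h)}
        for x, y, w, h in boxes
    ]
    return json.dumps(payload)

# Example: two detected faces, in the shape OpenCV would return them.
faces = np.array([[10, 20, 50, 50], [100, 40, 48, 48]], dtype=np.int32)
```

Keyed dicts also make the frontend code self-documenting, compared with indexing into anonymous nested arrays.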
Accomplishments that we're proud of
We are proud of the code on GitHub: it finds faces in a live video feed captured in the browser, sends the frames to the backend server where they are processed, and finally presents the metadata in the very same web window the video came from. This is our entire team's first attempt at building a web app, and learning all the technologies was a scary but ultimately satisfying experience.
We are also proud of our CGI simulation of a production-ready version of this app, with functional buttons and a better UI.
What we learned
We learned React.js frontend development, Flask for the Python backend, Node.js for self-hosting, how to use OpenCV, and how to track faces and add overlay elements in DaVinci Resolve.
What's next for tfru
Next, we would refine the backend and implement a database that lets users submit a reference photo for the system to compare against. Adding richer data and displaying it in AR would open up nearly unlimited use cases, including integrations with social media apps and government services.
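The reference-photo comparison could be sketched as a nearest-embedding lookup (purely illustrative; it assumes face embeddings are produced elsewhere, e.g. by a pretrained recognition model, and all names here are hypothetical):

```python
from typing import Optional
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_user(query: np.ndarray, db: dict,
               threshold: float = 0.8) -> Optional[str]:
    """Return the user id whose stored reference embedding is most similar
    to the query embedding, or None if nothing clears the threshold."""
    best_id, best_sim = None, threshold
    for user_id, ref in db.items():
        sim = cosine_similarity(query, ref)
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```

Thresholding matters here: returning the closest match unconditionally would misidentify strangers, so anyone below the cutoff is treated as unknown.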