Inspiration

https://www.youtube.com/watch?v=lK_cdkpazjI

What it does

The app provides a live camera view that detects faces and draws bounding boxes around them. Tapping a bounding box displays information about that person in augmented reality.

How we built it

We built most of the project in Swift with Xcode, using ARKit to display media in augmented reality and Apple's face detection library for real-time bounding-box detection.
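The detection step can be sketched roughly like this, assuming frames arrive as `CGImage` from the camera feed and using Apple's Vision framework (the function name and overall flow here are illustrative, not our exact code):

```swift
import Vision
import CoreGraphics

// Run face detection on one frame and hand back bounding boxes in
// pixel coordinates (top-left origin), ready to draw over the camera view.
func detectFaces(in image: CGImage, completion: @escaping ([CGRect]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        guard let observations = request.results as? [VNFaceObservation] else {
            completion([])
            return
        }
        // Vision returns normalized rects (bottom-left origin, 0...1 range),
        // so convert to pixels and flip the y-axis before drawing.
        let width = CGFloat(image.width)
        let height = CGFloat(image.height)
        let boxes = observations.map { obs -> CGRect in
            let r = obs.boundingBox
            return CGRect(x: r.minX * width,
                          y: (1 - r.maxY) * height,
                          width: r.width * width,
                          height: r.height * height)
        }
        completion(boxes)
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Flipping the y-axis matters because Vision's normalized coordinates use a bottom-left origin while UIKit drawing uses a top-left origin.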

To recognize faces, we modified the Inception V3 image recognition model to output a feature vector for each face instead of a full classification. To perform recognition, we compare the feature vectors of the faces the camera is pointing at against the feature vectors of faces the app has seen before.
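The matching step amounts to a nearest-neighbor search over stored embeddings. A minimal sketch, assuming cosine similarity and an illustrative 0.8 threshold (neither the metric nor the cutoff is specified above):

```swift
import Foundation

// Cosine similarity between two embedding vectors of equal length.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "vectors must have the same dimension")
    var dot: Float = 0, normA: Float = 0, normB: Float = 0
    for i in 0..<a.count {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (sqrt(normA) * sqrt(normB))
}

// Compare a query embedding against known faces; below the threshold
// the face is treated as unknown (returns nil).
func bestMatch(for query: [Float],
               among known: [String: [Float]],
               threshold: Float = 0.8) -> String? {
    var best: (name: String, score: Float)? = nil
    for (name, vector) in known {
        let score = cosineSimilarity(query, vector)
        if best == nil || score > best!.score {
            best = (name, score)
        }
    }
    guard let match = best, match.score >= threshold else { return nil }
    return match.name
}
```

Treating sub-threshold faces as unknown is what lets the app label a stranger (like the middle person in the demo) instead of forcing a wrong match.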

The loading animation was done in Adobe After Effects and the information cards were done in Adobe Illustrator.

Challenges we ran into

Cropping images, transparent GIFs, learning multiple coordinate systems (thanks, Apple), calculating the depth of the bounding boxes, and pretty much everything else.
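The depth problem, sketched: to anchor an information card at a face, one approach is to hit-test from the 2D bounding-box center into the AR scene and use the first intersected feature point as the card's world position. The ARKit calls below are real API, but the surrounding flow is an illustrative guess at one way to do it, not necessarily ours:

```swift
import ARKit
import SceneKit

// Estimate a 3D world position for a face from the 2D center of its
// bounding box, by hit-testing against ARKit's detected feature points.
func worldPosition(forBoxCenter point: CGPoint,
                   in sceneView: ARSCNView) -> SCNVector3? {
    let results = sceneView.hitTest(point, types: [.featurePoint])
    guard let hit = results.first else { return nil }
    // The hit's world transform encodes position in its last column.
    let t = hit.worldTransform.columns.3
    return SCNVector3(t.x, t.y, t.z)
}
```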

Accomplishments that we're proud of

It works!

What we learned

Swift. None of us had opened Xcode on purpose before this hackathon.

What's next for CreepAR

More data.

DEMO (the middle person is supposed to be unknown): https://media.giphy.com/media/3o7aCXgTsGiRdoEXC0/giphy.gif
