ExplorAR

Effortless navigation through intuitive AR.

Purpose

Imagine you are downtown, looking for a place to eat.

You open Google Maps or Yelp.

Too many places to pick from! So you switch to list view to check the ratings.

Finally! You pick a place... but where is it? Back to the map view to find it.

All this needless switching between views, but why? What if we could achieve the same thing effortlessly, in an intuitive Augmented Reality view, from a single iOS app?

Demo

demo image

Technical Approach

Technologies used: Google Vision API (logo detection), Google Places API, Apple ARKit 3, Apple SceneKit.

Approach: We first send a frame from the camera to the Vision API and obtain the label and bounding box of any logos it detects. We then use the Places API to fetch the rating and other useful information about the detected restaurant. Finally, we display that information in the Augmented Reality environment using ARKit 3: the overlay is placed over (or close to) the actual location of the restaurant and remains temporally stable. We currently have a working proof of concept, including detection and feature display in the AR environment, without any heavy UI/UX implementation.
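The first step of the pipeline can be sketched roughly as follows. This is a minimal Swift sketch, not our exact implementation: it posts one JPEG-encoded camera frame to the Vision REST API's `images:annotate` endpoint with the `LOGO_DETECTION` feature, and extracts the detected logo names. `apiKey` and the `jpegData` input are assumptions supplied by the caller; the returned name would then be used as the query for a Places API Text Search request to look up ratings.

```swift
import Foundation

// Sketch: detect restaurant logos in one camera frame via the
// Google Vision API (images:annotate, LOGO_DETECTION feature).
func detectLogos(in jpegData: Data, apiKey: String,
                 completion: @escaping ([String]) -> Void) {
    let url = URL(string: "https://vision.googleapis.com/v1/images:annotate?key=\(apiKey)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // Vision API request body: base64-encoded image + requested feature.
    let body: [String: Any] = [
        "requests": [[
            "image": ["content": jpegData.base64EncodedString()],
            "features": [["type": "LOGO_DETECTION", "maxResults": 5]]
        ]]
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // Pull each detected logo's description (e.g. a restaurant name)
        // out of the first response's logoAnnotations array.
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let responses = json["responses"] as? [[String: Any]],
              let annotations = responses.first?["logoAnnotations"] as? [[String: Any]]
        else { completion([]); return }
        completion(annotations.compactMap { $0["description"] as? String })
    }.resume()
}
```

Each `logoAnnotations` entry also carries a `boundingPoly`, which is what lets us place the AR overlay near the logo's on-screen position rather than at an arbitrary point.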

Future Improvements

  • Improve the UI/UX beyond what the time constraints of CalHacks allowed.
  • Allow the user to choose what features should be displayed in the AR scene; every individual has different priorities, so this feature has high priority.
  • Make the app more lightweight so we can increase the prediction/display rate from ~1 time per second to ~10 times per second.
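On the last point: one way to control the prediction rate is to throttle how often AR frames are handed to the detection pipeline. A hedged sketch, assuming an `ARSessionDelegate` drives the pipeline (the class and callback names here are illustrative, not our actual code):

```swift
import ARKit

// Sketch: only forward an ARFrame to the detection pipeline if enough
// time has passed since the last one. minInterval = 1.0 matches the
// current ~1 Hz rate; lowering it toward 0.1 targets ~10 Hz.
final class DetectionThrottler: NSObject, ARSessionDelegate {
    var minInterval: TimeInterval = 1.0
    var onFrame: ((ARFrame) -> Void)?
    private var lastSent: TimeInterval = 0

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard frame.timestamp - lastSent >= minInterval else { return }
        lastSent = frame.timestamp
        onFrame?(frame) // hand the frame to the Vision detection step
    }
}
```

Raising the rate is then mostly a question of how fast the round trip to the Vision and Places APIs can be made, not of ARKit itself.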
