Inspiration
As computer engineering majors, we came into TreeHacks knowing we wanted to work with hardware. Perhaps the most notable recent hardware releases are the mixed reality headsets focused on passthrough, such as the Apple Vision Pro and Meta Quest 3. As we thought about these exciting trends in immersive AR, we noticed a critical gap: there is no easy way to bridge your AR experience with the physical world around you. Naturally, this led us to the world of IoT, since so much of our world is already connected to the internet and ready to be integrated into mixed reality.
What it does
This is a fun project that overlays adaptive visual displays and controls on IoT devices in an augmented reality environment. These displays are interactive and informative. For example, our app places a virtual button on top of a "smart" light source (which we achieve with an LED). A user can toggle this button to turn the light on and off from a panel rendered in AR and anchored directly to the real device. We also have a temperature sensor and a rotary encoder, each enhanced with a display of its current numeric reading floating above it.
We think there are many potential applications of such technology, especially if it were adopted for devices like the Apple Vision Pro, which people may end up wearing for extended periods of time. For example, this tech could provide a fun and convenient way to interact with various devices around your house, allowing you to quickly check the temperature in your kitchen or control a smart lamp with a glance. Multiple people wearing AR headsets could even walk into the same room, view the same customizable information, and interact with real devices in easy or novel ways.
How we built it
The IoT devices are two Particle Photons, each connected to a humidity/temperature sensor, a rotary encoder, and an LED. The Photons publish their sensor data every three seconds. Clients can also request actions from the devices (or "Things"), such as turning on a light.
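The "request an action" half of this goes through the Particle Cloud REST API, which exposes firmware functions at `https://api.particle.io/v1/devices/{device}/{function}`. As a rough sketch (the device ID `photon-1` and function name `setLight` here are hypothetical, not our actual identifiers):

```python
import urllib.parse

PARTICLE_API = "https://api.particle.io/v1/devices"

def build_function_call(device_id: str, function_name: str, arg: str, token: str):
    """Build the (url, form_data) pair for invoking a Particle cloud function."""
    url = f"{PARTICLE_API}/{urllib.parse.quote(device_id)}/{urllib.parse.quote(function_name)}"
    data = {"arg": arg, "access_token": token}
    return url, data

# An HTTP POST of `data` to `url` (e.g. via requests.post) would toggle the LED.
url, data = build_function_call("photon-1", "setLight", "on", "ACCESS_TOKEN")
```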
We wrote the backend for this project as AWS Lambda functions in Python, with persistent storage in a MongoDB Atlas database.
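A minimal sketch of one such Lambda handler, the one that records a sensor reading. The event shape and field names (`device`, `temp_c`, `humidity`) are illustrative assumptions, and the MongoDB insert is stubbed with an in-memory list:

```python
import json

# In-memory stand-in for the MongoDB Atlas collection; the real handler
# would call collection.insert_one(reading) via pymongo instead.
READINGS = []

def lambda_handler(event, context):
    """Store one Photon sensor reading posted through API Gateway."""
    reading = json.loads(event["body"])  # e.g. {"device": "photon-1", "temp_c": 21.4, "humidity": 40.2}
    READINGS.append(reading)
    return {"statusCode": 200, "body": json.dumps({"stored": reading["device"]})}
```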
We built the app in Swift using ARKit and OpenCV. To identify the IoT devices, we used ArUco fiducial markers, whose locations we could determine quite consistently and accurately using OpenCV.
Challenges we ran into
We had initially planned to look for device markers through the Oculus Quest 2 using the OpenXR SDK. After working through several hurdles getting the SDK set up, we found that the Quest 2 restricts access to the passthrough stream, prohibiting marker detection. This led us to pivot to ARKit.
We also had to combine the different tech stacks each of us was working on, which required careful planning and strong communication.
Accomplishments that we're proud of
Overall, we’re proud of the scope of our project and the fact that we were able to integrate such a variety of technologies. We also all made sure to step outside of our comfort zones: each team member chose the tech stack they were least familiar with and learned a ton.
What we learned
We learned a wide range of technologies from firmware to server-side software to frontend tech. As a team, we gained experience using ARKit, building API endpoints on AWS, and writing firmware in C++.
What's next for AR-iot
We were able to produce a cool MVP at this hackathon, but there is still plenty of room for optimization and improvement. With more time, we could move away from Particle toward lower-latency options, and we could extend the UI to support more devices and richer interactions.
Some next steps would also be to use this tech with AR headsets that are commercially available, particularly the Apple Vision Pro, assuming that options are provided to developers to incorporate actual camera data for object tracking while still respecting user privacy.