Inspiration

AR is the future vessel for AI, and that future should be inclusive of everyone. That is why we developed Accessible AR: an app that enables virtual world building through sign language.

What it does

Accessible AR is a Spectacles Lens that combines AI-powered sign language detection with real-world surface recognition to enable virtual world building through ASL (American Sign Language). Users are guided to sign select words, which then generate dynamic, interactive 3D objects in the environment. Features include movable tutorials, 3D painting, scalable and physics-based objects, and even a remote-controlled car — all controlled naturally through hand gestures and real-world interactions.

How we built it

Accessible AR was built using the powerful capabilities of Lens Studio and Snapchat Spectacles, blending game engine principles with AR technologies to create a dynamic world-building experience.

We structured our project around scene objects and prefabs — reusable, modular templates that allowed us to easily spawn complex, interactive 3D objects. Each prefab combined physics bodies, colliders, and interactable components to ensure natural movement and collision with the real world.
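The prefab idea can be sketched outside the engine as a registry of factories, each returning an object pre-wired with the components it needs. This is a simplified model for illustration, not Lens Studio's actual API: the component kinds mirror the physics bodies, colliders, and interactable components described above, while the `prefabs` registry and its entries are our own assumed names.

```typescript
// Simplified model of the prefab approach: each spawnable object is a
// reusable template bundling the components it needs. In Lens Studio the
// engine provides these components; plain types stand in here.
interface Component { kind: string }
interface SpawnedObject { name: string; components: Component[] }

// A "prefab" is modeled as a factory that returns a fully wired object.
type Prefab = () => SpawnedObject;

const prefabs: Record<string, Prefab> = {
  car: () => ({
    name: "car",
    components: [
      { kind: "PhysicsBody" },
      { kind: "Collider" },
      { kind: "Interactable" },
    ],
  }),
  ball: () => ({
    name: "ball",
    components: [{ kind: "PhysicsBody" }, { kind: "Collider" }],
  }),
};

// Spawning looks up the template and instantiates a fresh copy.
function spawn(word: string): SpawnedObject | undefined {
  const prefab = prefabs[word];
  return prefab ? prefab() : undefined;
}
```

Keeping every spawnable object behind the same factory interface is what made it cheap to add new object types during the hackathon.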

Our app relies heavily on instantiation, dynamically generating objects at runtime in response to user actions like signing or painting. This gives users a sense of creating and manipulating the AR world in real time.

We integrated Lens Studio’s AI-powered ObjectTracking3D to precisely track hand joint distances and recognize ASL gestures, triggering specific interactions and spawning events. To extend functionality, we built custom scripts that handled input detection, word matching, object creation, and control logic.
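The core of the gesture-matching logic can be illustrated as comparing measured joint distances against per-sign templates within a tolerance. This is a minimal sketch of the idea only: the joint-pair names, template values, and tolerance are illustrative assumptions, not our real calibration, and in the Lens the distances come from ObjectTracking3D rather than a plain object.

```typescript
// Normalized distances between hand joint pairs, e.g. fingertip to palm.
type Distances = Record<string, number>;

// Illustrative templates: a closed fist ("A") has fingertips near the palm;
// an open hand ("B") has them far away. Values are made up for the sketch.
const signTemplates: Record<string, Distances> = {
  A: { "thumb-palm": 0.25, "index-palm": 0.15, "middle-palm": 0.15 },
  B: { "thumb-palm": 0.35, "index-palm": 0.9, "middle-palm": 0.95 },
};

// Return the first sign whose template the measured distances fit.
function matchSign(measured: Distances, tolerance = 0.1): string | null {
  for (const [sign, template] of Object.entries(signTemplates)) {
    const fits = Object.entries(template).every(
      ([pair, d]) => Math.abs((measured[pair] ?? Infinity) - d) <= tolerance
    );
    if (fits) return sign;
  }
  return null;
}
```

In the Lens, a recognized sign feeds directly into the word-matching and spawning logic described above.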

World interaction was made possible through Spectacles’ world mesh — a real-time 3D reconstruction of the environment — enabling our virtual objects to collide, land on, and bounce off physical surfaces. This gave users a highly immersive experience where digital and physical worlds blended seamlessly.

Finally, we used Spectacles Interaction Kit (SIK) components like pinch buttons and movable UI panels to make our tutorial and controls intuitive. These elements communicated with our custom event system to guide users smoothly through the experience.

Challenges we ran into

One of the biggest challenges was getting familiar with Lens Studio’s development environment and understanding how to combine game engine concepts with AR-specific features like world mesh and hand tracking. Debugging interactions between scripts, prefabs, and real-world surfaces also took significant trial and error. Learning to optimize performance while handling dynamic instantiation in AR was another key hurdle we had to overcome.

Accomplishments that we’re proud of

We’re proud to have created a fully functional, AI-driven world-building environment in which ASL gestures meaningfully create and control objects in AR.

What we learned

We learned how to quickly adapt to a completely new development environment by combining AR technologies with real-time 3D and AI interaction. Working with Lens Studio taught us how to build complex systems from modular components, troubleshoot AR-specific challenges, and design intuitive, accessible experiences for users in a three-dimensional environment.

What’s next for Accessible AR

The next steps for this product are twofold. First, we want to improve the world-building capabilities driven by sign language. This could include adding signed words that encode actions for objects, such as fingerspelling “G-A-S” to make a car move forward, or expanding the set of objects that can be generated. Second, we want to address other areas of accessibility, which could include features like zooming in on objects for visually impaired users.
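One way the fingerspelled-command idea could work is to accumulate recognized letters and fire an action once they spell a known word. The “G-A-S” example comes from our roadmap above; the code itself, including the command names and reset behavior, is a hypothetical sketch of how we might implement it.

```typescript
// Map fingerspelled words to actions. "GAS" is from our roadmap; the
// action identifiers are assumed names for illustration.
const commands: Record<string, string> = {
  GAS: "car/accelerate",
  STOP: "car/brake",
};

class Fingerspeller {
  private buffer = "";

  // Feed one recognized letter; return an action when a command completes.
  addLetter(letter: string): string | null {
    this.buffer += letter.toUpperCase();
    for (const [word, action] of Object.entries(commands)) {
      if (this.buffer.endsWith(word)) {
        this.buffer = ""; // reset after a match
        return action;
      }
    }
    return null;
  }
}
```

Matching on the end of the buffer means a stray misrecognized letter doesn't block a later correct spelling.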
