Inspiration

We were inspired by one core question: why is programming still trapped on flat screens when we can now draw directly in mixed reality?
The MX Ink stylus made us think beyond “drawing apps” and toward a new interaction model: drawing logic into real space.

What it does

RealityScript turns physical rooms into programmable canvases.
You draw symbols (like trigger zones, force arrows, paths) on real surfaces, and the system converts them into live interactive behaviors and physics in MR.

How we built it

We designed the concept around Unity + Meta Quest scene understanding and spatial anchors.
Stroke gestures are interpreted as behavior primitives, compiled into a spatial behavior graph, then executed in a real-time physics simulation layer.
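The pipeline above can be sketched roughly as follows. This is an illustrative Python sketch only (the real build targets Unity), and every name here — `Symbol`, `compile_graph`, `fire`, the `link_radius` heuristic — is a hypothetical stand-in, not our actual API:

```python
import math
from dataclasses import dataclass, field

# Hypothetical behavior primitive: a recognized drawn symbol anchored in room space.
@dataclass
class Symbol:
    kind: str                              # e.g. "trigger", "force", "path"
    position: tuple                        # anchor position in meters
    params: dict = field(default_factory=dict)

# Node in the spatial behavior graph: a symbol plus links to nearby symbols.
@dataclass
class Node:
    symbol: Symbol
    edges: list = field(default_factory=list)   # indices of linked nodes

def dist(p, q):
    """Euclidean distance between two anchor positions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

def compile_graph(symbols, link_radius=0.5):
    """Compile symbols into a behavior graph by linking anchors within link_radius meters."""
    nodes = [Node(s) for s in symbols]
    for i, a in enumerate(nodes):
        for j, b in enumerate(nodes):
            if i != j and dist(a.symbol.position, b.symbol.position) <= link_radius:
                a.edges.append(j)
    return nodes

def fire(graph, i):
    """When a trigger node fires, return the linked force symbols to hand to the physics layer."""
    return [graph[j].symbol for j in graph[i].edges if graph[j].symbol.kind == "force"]

# Example: a trigger zone drawn 0.3 m from a force arrow gets linked at compile time.
trigger = Symbol("trigger", (0.0, 1.0, 0.0))
force = Symbol("force", (0.3, 1.0, 0.0), {"impulse": (0.0, 0.0, 2.0)})
graph = compile_graph([trigger, force])
```

In the real system the linking rule would come from where strokes land on scanned surfaces rather than a fixed radius; the sketch just shows the compile step's shape: recognized symbols in, an executable graph out.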

Challenges we ran into

The hardest parts were reliable symbol recognition, keeping anchors stable across surfaces, and making the UX simple enough for first-time users while still powerful for creators.
We also had to keep scope realistic for an app-store-ready V1.

Accomplishments that we're proud of

We created a clear, differentiated concept where MX Ink is essential, not optional.
We defined a practical V1 (focused symbol set, physics effects, save/load, export path) and a pitch that is both ambitious and technically credible.

What we learned

Novel ideas win only when they are scoped and buildable.
We learned that stylus-first MR needs strong visual feedback, confidence scoring with error correction, and a tight end-to-end workflow from draw -> compile -> simulate.
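The confidence/error-correction idea can be shown with a minimal template matcher. This is a toy sketch, not our recognizer: the templates, the normalization, and the 0.7 threshold are all assumptions for illustration. A stroke below the confidence gate returns no match, which is the cue to ask the user to redraw:

```python
import math

# Hypothetical 16-point templates: a straight line ("path") and a circle ("trigger").
TEMPLATES = {
    "path": [(t / 15, 0.0) for t in range(16)],
    "trigger": [(0.5 + 0.5 * math.cos(2 * math.pi * t / 15),
                 0.5 * math.sin(2 * math.pi * t / 15)) for t in range(16)],
}

def normalize(points):
    """Scale a stroke into a unit box so position and size do not affect matching."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / scale, (y - min(ys)) / scale) for x, y in points]

def classify(stroke, threshold=0.7):
    """Return (symbol_name, confidence), or (None, confidence) if below the gate."""
    stroke = normalize(stroke)
    best, best_conf = None, 0.0
    for name, template in TEMPLATES.items():
        # Mean point-to-point distance; smaller distance -> higher confidence.
        d = sum(math.dist(p, q) for p, q in zip(stroke, normalize(template))) / len(template)
        conf = 1.0 / (1.0 + d)
        if conf > best_conf:
            best, best_conf = name, conf
    return (best, best_conf) if best_conf >= threshold else (None, best_conf)
```

A production recognizer would resample strokes by arc length and handle rotation, but the confidence gate is the point: never silently accept a low-confidence symbol.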

What's next for RealityScript

Next, we plan to prototype and test with XR creators, improve gesture accuracy, and add collaboration/export features.
After validating user workflows, we’ll prepare an app-store-ready release and expand to education and robotics use cases.
