Inspiration

What if your virtual avatar could change how you see yourself? This question sparked our creation of AvAsana for the Meta Presence Platform. The game delves into the psychological effects of avatar customization on self-esteem and self-compassion, all through the lens of how we mentally map ourselves in space.

In the virtual world, avatars are more than just digital doubles; they're a reflection of how we see ourselves. But until now, these avatars often missed the mark on truly representing our real-world forms. With AvAsana, we're breaking new ground by letting players create a 3D avatar modeled on themselves using Ready Player Me. Furthermore, in-game, we let them tweak multiple aspects of their body - arms, legs, hips, head, and bust - to better reflect how they perceive themselves in real life. This ensures that every avatar is a true-to-life representation, giving players a more genuine and impactful way to explore and improve their self-perception in virtual space.

What it does

We're redefining the virtual reality experience with AvAsana, a single-player guided game that merges advanced 3D body scanning technology with a deep understanding of body image. This innovative platform allows players to create lifelike avatars based on their actual body dimensions, providing a unique opportunity to explore self-perception and body positivity in a supportive, interactive environment. Dive into a more authentic and impactful journey towards self-acceptance with our cutting-edge VR game.

How we built it

At AvAsana, we've harnessed the capabilities of ReadyPlayerMe Studio to seamlessly integrate personalized avatars into our VR yoga experience. Utilizing a specialized package, we connect these avatars with Unity and streamline their importation. By simply copying an avatar's .glb link from the Quest Browser and then entering the game, users are transported directly to a virtual locker room. Here, users can fine-tune their avatars to reflect their real-world proportions, adjusting features such as length and width through intuitive sliders.
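The slider logic can be sketched as a simple remapping from slider values to per-part scale multipliers. This is an illustrative Python sketch, not our Unity C# implementation; the body-part names match the game, but the scale range and neutral default are assumed for the example.

```python
# Hypothetical sketch of the slider-to-proportion mapping: each slider
# value in [0, 1] is remapped to a scale multiplier applied to the
# matching group of avatar bones.

BODY_PARTS = ("arms", "legs", "hips", "head", "bust")
SCALE_RANGE = (0.8, 1.2)  # assumed min/max multipliers for illustration

def slider_to_scale(value: float, lo: float = SCALE_RANGE[0],
                    hi: float = SCALE_RANGE[1]) -> float:
    """Linearly remap a slider value in [0, 1] to a bone scale multiplier."""
    value = min(max(value, 0.0), 1.0)  # clamp out-of-range input
    return lo + (hi - lo) * value

def apply_sliders(sliders: dict) -> dict:
    """Return a scale factor per body part; unset sliders default to 0.5 (neutral)."""
    return {part: slider_to_scale(sliders.get(part, 0.5)) for part in BODY_PARTS}
```

In the game itself, each resulting multiplier is applied to the corresponding bone transforms of the imported ReadyPlayerMe rig.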

Our avatars come fully equipped with full-body retargeting via MovementSDK and finger tracking via Interaction SDK, precision eye tracking using OVREyeGaze, and dynamic mouth animations enabled by face tracking in MovementSDK, ensuring a lifelike representation. In the locker room, a virtual mirror allows users to refine their avatar's appearance further, promoting a deeper connection between their physical and virtual selves.

Upon finalizing their avatar, users enter the Yoga Room, where a digital instructor—modeled after a lab team member—guides them through a series of yoga routines. This immersive session includes a curated selection of six exercises, each designed to enhance body awareness and breathing techniques.

Post-routine, players return to the locker room, where the mirror doubles as a passthrough window, easing the transition back to reality or facilitating further avatar adjustments. Additionally, our innovative Eye Tracking logger captures user focus within the 3D space and on the 2D screen, gathering valuable data on attention patterns and the evolution of self-consciousness throughout the VR experience. This cutting-edge functionality not only enhances user engagement but also provides critical insights for ongoing research into virtual self-perception.
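The core of the logger can be sketched as follows. This is an illustrative Python sketch (our actual implementation lives in Unity): each sample pairs a timestamp with the 3D gaze point and its projection into normalized 2D viewport coordinates. The pinhole projection and the default field of view are assumptions for the example.

```python
# Illustrative gaze logger: project a camera-space 3D gaze point
# (x, y, z with z pointing forward) into normalized viewport
# coordinates and store timestamped samples.

import math
import time

def project_to_viewport(point, fov_deg=90.0, aspect=1.0):
    """Project a camera-space 3D point to viewport coords in [0, 1] x [0, 1].

    Returns None if the point is behind the camera.
    """
    x, y, z = point
    if z <= 0:
        return None  # behind the camera: nothing to log
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    ndc_x = (x * f / aspect) / z  # normalized device coords in [-1, 1]
    ndc_y = (y * f) / z
    return ((ndc_x + 1) / 2, (ndc_y + 1) / 2)

class GazeLogger:
    """Accumulates (timestamp, 3D gaze point, 2D viewport point) samples."""

    def __init__(self):
        self.samples = []

    def log(self, gaze_point_3d):
        uv = project_to_viewport(gaze_point_3d)
        self.samples.append((time.time(), gaze_point_3d, uv))
```

Aggregating these samples over a session is what lets us study where attention dwells - on the mirror, the instructor, or the user's own avatar.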

Challenges we ran into

The journey wasn't without hurdles at quite literally every step of the process, and everyone on the team stepped up. The difficulties came in several phases:

  1. Streamlining Avatar Importation: Initially, our process required users to manually paste a .glb link into a textbox within the game to import their ReadyPlayerMe avatar, a step that proved cumbersome in VR. Recognizing the need for a smoother user experience, we innovated an automated system that bypasses manual entry and enhances user engagement by directly linking the avatar importation to the Quest Browser.

  2. Enhancing Avatar Customization: Adapting the ReadyPlayerMe avatar for detailed personalization presented significant challenges. Our goal was to allow users to modify each aspect of their avatar's body precisely. To achieve this, we developed an intuitive slider system, enabling users to effortlessly explore and adjust various body dimensions, fostering a more personal and immersive interaction with their virtual self.

  3. Refining Full-Body Retargeting: Integrating the MovementSDK with the ReadyPlayerMe avatar initially encountered numerous technical hurdles, particularly in achieving fluid and accurate body movements. The process of aligning eye gaze tracking and facial expressions required extensive experimentation. However, through persistent development and testing, we managed to achieve a high level of precision in motion capture, significantly enhancing the realism and responsiveness of the avatars in virtual yoga sessions.

  4. Accurate Simulation of Complex Movements: Understanding and replicating how muscles and body parts coordinate in actions like a cross-body stretch was more complex than anticipated. Achieving a realistic movement required not just bending limbs but orchestrating a symphony of shoulder, leg, and forearm movements. This detailed approach to animating large to small body parts significantly extended our animation creation time.

  5. Enhancing Naturalistic Animations: Replacing a live yoga instructor with a 3D animated model demanded high fidelity in every gaze and gesture to ensure lifelike realism. We adopted several techniques to enhance naturalism in our avatars. For instance, we integrated eye blinking with variable timing using the Eye Blinking package from ReadyPlayerMe. Additionally, we incorporated subtle chest movements to simulate breathing—expanding gently during inhales and contracting more rapidly during exhales. These enhancements were particularly focused on poses held for longer durations, such as stretches, to imbue them with a sense of living, breathing authenticity.
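The asymmetric breathing motion described in point 5 can be sketched as a looping animation curve: a slow, eased expansion on the inhale and a faster contraction on the exhale. The sketch below is in Python for illustration; the durations and amplitude are assumptions, not the values used in the game.

```python
# Minimal sketch of the asymmetric breathing cycle: the chest expands
# gently over a long inhale and contracts more quickly on the exhale.

import math

INHALE_S = 3.0    # assumed inhale duration (seconds)
EXHALE_S = 1.5    # assumed (faster) exhale duration
AMPLITUDE = 0.03  # assumed maximum extra chest scale (3%)

def chest_scale(t: float) -> float:
    """Chest scale multiplier at time t, looping over one breath cycle."""
    cycle = INHALE_S + EXHALE_S
    phase = t % cycle
    if phase < INHALE_S:                       # slow expansion
        progress = phase / INHALE_S
    else:                                      # faster contraction
        progress = 1.0 - (phase - INHALE_S) / EXHALE_S
    # half-cosine easing so the motion starts and ends each breath smoothly
    eased = (1.0 - math.cos(math.pi * progress)) / 2.0
    return 1.0 + AMPLITUDE * eased
```

Evaluating this curve each frame and applying the result to the instructor's chest bones is enough to make long-held poses read as living and breathing rather than frozen.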

These advancements not only solved initial usability issues but also elevated the overall user experience in AvAsana, making the interaction with one's virtual self seamless and deeply engaging. All in all, as a team using the Presence Platform SDKs for the first time, we faced plenty of setbacks, but we worked through them.

Future Enhancements for AvAsana

As AvAsana continues to evolve, our roadmap includes several exciting enhancements aimed at deepening the immersive experience and expanding the community:

  1. Integrating Spatial Anchors for Enhanced Realism: We plan to incorporate Spatial Anchors to elevate the spatial awareness within the game, connecting the virtual yoga environment more closely with the real world. This feature will not only make the experience more lifelike but also aims to reinforce self-positivity by allowing users to visualize themselves and their movements in real-world settings, fostering a stronger connection between physical and virtual self-awareness.

  2. Launching Multiplayer Yoga Sessions: To enrich the social dynamics of AvAsana, we are excited to introduce multiplayer classes. This will enable users to join sessions with friends or connect with new people, enhancing the sense of community and support. These interactive classes are designed to encourage open dialogue about self-perception and body positivity, helping to spread awareness and understanding of Body Dysmorphia in social contexts.

  3. Developing Personalized Progress Tracking: Looking further ahead, we aim to implement a feature that allows users to track their progress over time. This will not only help users see how they have improved in their yoga practices but also provide insights into how their self-perception changes with regular engagement in the virtual environment.

These developments are guided by our commitment to making AvAsana not just a game, but a transformative platform for personal growth and communal support in the journey towards self-acceptance and mental well-being.
