Team Name: ReliefAR

The Starting Pitch:

Hi all, I'm an IT project manager who also runs art workshops where people release angst in a healthy way by painting collaboratively. I am here with two developers to create a VR space that augments this experience. Essentially, I want to create a game that uses emotion as its currency, where the end goal is to maximize happiness. We are looking to generate a VR space from the characteristics of people's faces, expressions, and interactions, and to express art in that space that evolves as people's emotions change.

The Challenge

The Pitch:

Suzanne is an 81-year-old retiree who struggles with late-stage dementia. She has an increasingly difficult time articulating her feelings verbally or interacting with others. Her withdrawal has made it very difficult for her family to ensure her well-being at the retirement home, where Suzanne is under a social worker's care.

With ReliefAR, her daughter Julie and others who care deeply about Suzanne can understand Suzanne's emotional well-being from afar. ReliefAR uses Suzanne's biometrics as triggers to "paint" a virtual 3D space, with visuals, a soundtrack, and a keyword display that change across a range of emotions. Julie can check this real-time feedback and be assured that Suzanne is under the right care, even when Suzanne may not have the cognitive ability to articulate her feelings.

ReliefAR was inspired by an art workshop series where people release angst in a healthy way: through collaborative painting. Our team, "ReliefAR," created a VR space to augment this collaborative healing experience and broaden its reach. Essentially, we wanted to create a game that uses emotion as its currency, where the ideal end game is to maximize emotional fulfillment. We generate a VR space from the characteristics of people's facial expressions and speech, and the environments in that space evolve as people's emotions change.

The facial expressions and speech of the "performer" shape the end user's VR space: facial expressions create the environments, sound adds objects to those environments, and the soundtrack changes with the performer's mood.
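To make the emotion-to-scene mapping concrete, here is a minimal Python sketch. The specific environments and soundtracks below are illustrative assumptions, not the team's actual asset list:

```python
# Hypothetical mapping from a detected emotion to VR scene elements.
# The real ReliefAR environments and soundtracks are not specified here.
SCENE_MAP = {
    "happy": {"environment": "sunlit meadow", "soundtrack": "upbeat"},
    "sad":   {"environment": "rainy street",  "soundtrack": "slow piano"},
    "angry": {"environment": "stormy sea",    "soundtrack": "percussive"},
}

def scene_for(emotion):
    """Return the scene elements for an emotion, with a neutral fallback."""
    return SCENE_MAP.get(emotion, {"environment": "quiet room",
                                   "soundtrack": "ambient"})
```

A lookup with a neutral fallback keeps the VR space stable when the classifier reports an emotion outside the mapped set.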

The result is a cathartic, therapeutic experience for participants. Think of a situation where confronting someone directly would be uncomfortable. This technology functions as a multimedia "letter": a creative way to express your emotions by virtually "painting" another person's world. The "audience," or end user, can remotely experience the "artist's" performance in a spontaneous, creative way that has not been made obsolete by the ubiquity of YouTube, cameras, and other recorded media.

Phase 2:

We see potential in incorporating more inputs as triggers for the VR space, such as the performer's bodily motion and biometrics. That way, the technology could be incorporated into performances such as ballet or dance. The "audience," or end user, then spontaneously contributes to the VR space by creating sound, which is added to the space, completing the loop between performer and audience. The receiver contributes by reciprocating with images generated from his or her own speech and facial expressions.

We see the business viability of this technology as three-fold:

  • Communication with those who have difficulty with direct confrontation or with verbalizing their feelings. For example, an autistic child may pick up on certain emotions more easily when immersed in a world where images, background sounds, and words all display the same emotion. Family members of critically ill patients can also glean emotional responses from the patient.
  • Revitalizing the sanctity of live performance by engaging the audience through a VR space in real time. The ubiquity of 2D video recording has discounted the value of live performances; VR can come in as a new, enhanced way of engaging with one.
  • An eCard that crosses language barriers. We can connect across cultures by speaking through emotions, the most basic universal human trait, one that crosses not only nationalities but also stages of civilizational development, from the prehistoric era to modern life. It lets us experience what it means to be human beyond nationality or geopolitical tension: our facial expressions are the most basic nonverbal way of communicating when we encounter another human being.

In this process of (literally) painting a collective narrative, we create beautiful stories. "The power of myths is what led to human cooperation," to paraphrase Yuval Noah Harari.

2. Documentation

  • Facial recognition
      • TensorFlow: trained a model on faces labeled happy, sad, and so on, then compared each incoming face against this model.
      • OpenCV: found the boundaries of the face within the webcam frame. Combining the two, we decided which emotion the face on the webcam was showing.
  • Speech recognition
      • PyAudio: recorded sounds as they came in.
      • Google Speech Recognition: parsed the audio and turned it into text.
      • indico (emotion API): determined the emotion that the language conveyed.
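The pipeline above can be sketched roughly as follows. This is a reconstruction, not the team's actual code: the model file name, label order, and 48x48 input size are assumptions, and the indico call is only indicated in a comment since that service's API has changed.

```python
# Sketch of the ReliefAR recognition pipeline (assumptions noted inline).
# Heavy dependencies are imported lazily so the pure helper can run on its own.

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label order

def pick_emotion(probs, labels=EMOTIONS):
    """Return the label with the highest predicted probability."""
    return max(zip(probs, labels))[1]

def classify_webcam_emotion(model_path="emotion_model.h5"):  # hypothetical file
    """Grab a webcam frame, locate a face with OpenCV, classify with TensorFlow."""
    import cv2                    # face detection
    from tensorflow import keras  # trained emotion classifier

    model = keras.models.load_model(model_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    ok, frame = cv2.VideoCapture(0).read()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0  # assumed size
    probs = model.predict(face.reshape(1, 48, 48, 1))[0]
    return pick_emotion(probs)

def transcribe_speech():
    """Record from the microphone and transcribe it.

    The speech_recognition package wraps PyAudio for capture and exposes
    Google's speech API; the resulting text would then be sent to indico's
    emotion API for tagging.
    """
    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.Microphone() as source:
        audio = r.listen(source)
    return r.recognize_google(audio)
```

Keeping `pick_emotion` as a pure function separates the argmax logic from the hardware-dependent capture code, which makes the mapping step easy to test without a webcam or microphone.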
