EYEFX 👁️ 🖥️

Inspiration

In today's digital age, our interactions with digital assets are growing at an exponential rate. For these assets to serve their purpose, that is, to drive the desired user interactions and the outcomes that follow, they must be perceived by users, and stimulate them, exactly as intended. This makes the design of these assets a top priority.

What it does

EYEFX measures where users are looking on the computer screen, for how long, and where they look next. This provides valuable insight into how users perceive and interact with digital assets, which can be used to optimize a design towards specific targets. Through the Effect Force platform, a requester can submit their design and have it trialled by users around the world who would otherwise be impractical to reach. Moreover, pooling thousands of users' interactions enables Machine Learning techniques to validate and combine the data, improving the detection algorithms over time.

How we built it

In practice, a campaign works by recording eye tracking data while a specific task (e.g. finding a piece of information) is performed on the target webpage. Once the task is completed, the worker is prompted to answer one or more questions about it. This makes it possible to relate the eye tracking data to the quality of the worker's answers, further increasing the insights obtainable from the campaign results.

Under the hood, EYEFX uses the WebGazer library, developed by the Brown HCI lab. It relies on Machine Learning to predict, from eye features captured by the webcam, which section of the screen surface is being observed. Additionally, head positioning is monitored and validated, optionally showing feedback on screen. Visual design is handled by Bootstrap, while object interactions take advantage of jQuery. Hidden input fields are used to submit the task results as a JSON-formatted string containing x and y coordinates for each timestep. The Effect SDK is used to interact with TEN.
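As a rough illustration of the submission step, the snippet below sketches how gaze samples could be buffered and serialized into the JSON string that goes into the hidden input field. The function and field names are illustrative assumptions, not the actual EYEFX source; only the general pattern (a WebGazer gaze listener feeding a buffer that is serialized on task completion) follows the description above.

```javascript
// Buffer of gaze samples collected during the task.
const gazeSamples = [];

// In the real app, WebGazer's listener would push predictions here, e.g.:
//   webgazer.setGazeListener((data, t) => {
//     if (data) recordSample(data.x, data.y, t);
//   }).begin();
function recordSample(x, y, timestamp) {
  // Round pixel coordinates and keep the timestep for each sample.
  gazeSamples.push({ x: Math.round(x), y: Math.round(y), t: timestamp });
}

// Serialize the buffer into the string placed in the hidden input field.
function serializeSamples(samples) {
  return JSON.stringify(samples);
}

// Simulate three samples at successive timesteps (milliseconds).
recordSample(412.7, 310.2, 0);
recordSample(415.1, 305.9, 33);
recordSample(640.0, 200.4, 66);

const payload = serializeSamples(gazeSamples);
// In the browser this would go into the hidden field, e.g. with jQuery:
//   $('#gaze-data').val(payload);
console.log(payload);
```

On task submission, the hidden field's value travels with the form, so the gaze trace reaches the requester alongside the worker's answers.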

The Effect workforce would use our AI-based eye tracking program to collect and synthesize data from campaigns, presenting customers (websites, adTech, app developers) with actionable insights to guide their design efforts. This is an opportunity to tap into the 41.8 billion USD (U.S. alone) web design services market and to contribute to a web development and digital design job market already growing at a 13% rate.

Challenges we ran into

A specific challenge was the technical limitation of having a single device for both testing and developing the application. Fortunately, overheating only became serious while recording the demo just before the deadline; this meant the best accuracy achievable with webcam-based eye tracking could not be showcased in the demo. While this type of gaze monitoring is extremely sensitive to perturbations (e.g. slight body movements and lighting changes), EYEFX has been observed achieving 96% accuracy when deployed on TEN.

Accomplishments that we're proud of

Participating in the hackathon was in itself a huge satisfaction. Having started the project roughly ten days ago, it soon became clear it would grow larger than initially expected. Taking the idea to realisation and seeing it live on the Effect Network was incredibly rewarding: a great start of the year indeed!

What we learned

Few of these technologies were familiar to me, although I did some JavaScript programming five years ago. Besides increased familiarity with the frameworks themselves, deploying an application on a decentralized system inspired me to deepen this aspect of contemporary knowledge. According to some, we are on a path towards increased interconnection and smarter living. dApps like the ones offered by the Effect Network are one tiny portion of what will be possible tomorrow. Seizing the opportunity to be part of this change might change our lives for the better.

What's next for EYEFX

In the future, eye tracking accuracy could increase by combining the thousands of eye tracking recordings collected throughout the world. More technical, simpler improvements are also possible, such as a Full Screen Mode offering a greater tracking surface while decreasing distractions and inter-browser differences. Lastly, the recent spread of biometric (infrared) cameras in consumer-grade devices opens avenues of improvement for webcam-based eye tracking technologies.
