Inspiration

In the world of Generative AI, content creation is one of the fastest-growing fields. While the volume of content keeps exploding, our attention spans have shrunk significantly. The growing need for personalized, engaging content in today's digital era inspired us to create MindDiffuser. We asked ourselves: what if we could convert neural data into personalized videos by harnessing the power of Generative AI? A product like this could reshape the entire content-creation and marketing world.

What it does

MindDiffuser AI records your EEG data while you watch a video, detects the moments where you paid the most attention, and then turns those salient EEG segments into new videos using Generative AI models.

How we built it

We can divide the build into four parts: neuroscience, AI, frontend, and backend.

For the neuroscience part, we collected EEG data with an OpenBCI Cyton board while a participant watched a video in a controlled environment. We then performed filtering, artifact removal (ICA), and frequency-band analysis with MNE. For attention detection, we used the alpha/beta band-power ratio to identify high-attention moments in the video.

For the AI part, EEG signals are normalized, padded to 128 channels, tokenized into fixed time windows, and embedded into high-dimensional representations. The model is pretrained to reconstruct masked portions of the EEG embeddings using a reconstruction loss, learning global context. The latent EEG embeddings then condition a diffusion-based generative model that maps EEG signals to corresponding visual representations: the diffusion process iteratively refines noise into a coherent image aligned with the input EEG. Finally, the reconstructed images are sent to Kling AI to produce a video.

For the application frontend and backend, we used industry-standard tools: React, Django, SQLite, and REST frameworks.
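The attention-detection step can be sketched as follows. This is a minimal illustration using SciPy's Welch PSD rather than our full MNE pipeline; the band edges, window length, and threshold are illustrative assumptions. Windows where the alpha/beta power ratio drops (alpha suppression) are flagged as high-attention moments.

```python
import numpy as np
from scipy.signal import welch

def band_power(seg, fs, lo, hi):
    """Summed PSD power of `seg` within the [lo, hi] Hz band."""
    freqs, psd = welch(seg, fs=fs, nperseg=min(len(seg), 2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def attention_windows(eeg, fs=250, win_s=2.0, thresh=1.0):
    """Slice a single-channel EEG trace into fixed windows and flag those
    where the alpha (8-12 Hz) / beta (13-30 Hz) power ratio falls below
    `thresh` -- alpha suppression as a proxy for attention."""
    win = int(win_s * fs)
    ratios = []
    for start in range(0, len(eeg) - win + 1, win):
        seg = eeg[start:start + win]
        ratios.append(band_power(seg, fs, 8, 12) /
                      (band_power(seg, fs, 13, 30) + 1e-12))
    ratios = np.asarray(ratios)
    return np.flatnonzero(ratios < thresh), ratios
```

The flagged window indices map back to timestamps in the video, giving the segments whose EEG we pass downstream to the generative model.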
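The EEG preprocessing for the AI part (normalize, pad to 128 channels, tokenize into fixed time windows) can be sketched roughly like this. The window length and array shapes are illustrative assumptions; in the real pipeline each token is then embedded with learned weights before masked pretraining.

```python
import numpy as np

def tokenize_eeg(eeg, n_channels=128, win=200):
    """Z-score each recorded channel, zero-pad the channel dimension up to
    `n_channels`, and slice the trace into fixed-length time windows.
    `eeg` has shape (channels, samples); returns (tokens, n_channels, win)."""
    # per-channel z-score normalization
    mean = eeg.mean(axis=1, keepdims=True)
    std = eeg.std(axis=1, keepdims=True) + 1e-8
    eeg = (eeg - mean) / std
    # pad channel dimension with zero channels up to n_channels
    pad = n_channels - eeg.shape[0]
    eeg = np.pad(eeg, ((0, pad), (0, 0)))
    # drop trailing samples that don't fill a window, then split into tokens
    n_tokens = eeg.shape[1] // win
    eeg = eeg[:, :n_tokens * win]
    return eeg.reshape(n_channels, n_tokens, win).transpose(1, 0, 2)
```

For example, an 8-channel Cyton recording of 1000 samples becomes 5 tokens of shape (128, 200), ready for the embedding layer.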

Challenges we ran into

As with any EEG recording, the first problem is that the data is very noisy, so we had to work out both online and offline filtering to pre-process it. The next big challenge was transforming the EEG data into a format our AI models could use to generate content. The hardest part was generating images: we studied multiple research papers that each built different models, and we had to adapt our EEG data format, interpolate channels, and find models that could accept our data. Given the time frame, it was a very ambitious project to pull off.

Accomplishments that we're proud of

Given the ambitious idea and the timeframe, our biggest accomplishment is generating images and videos from EEG data that are close to what the person actually saw. From a business-validation perspective, this proof of concept is a big milestone in itself. Building something that, to our knowledge, has never been done as a product in NeuroTech makes us feel that MindDiffuser could change the whole game of content creation. We also loved working in a diverse team of dynamic, brilliant minds where learning and collaboration were the core motivators.

What we learned

From neuroscience to Generative AI to building an MVP, we gained a vast amount of knowledge and skills. From a neuroscience perspective, we learned how to operationalize attention, study it through EEG alpha and beta waves, filter and pre-process EEG data, and operate the OpenBCI Cyton board. From a computing science perspective, we learned how AI models like Stable Diffusion work, how to implement cutting-edge GenAI research papers, CLIP encoding, grid generation for working with images in generative models, frontend-backend integration in a production environment, and training and inferencing AI models on EEG data. From a product perspective, we learned that the frontend, and UI/UX in particular, is as important as the backend, and how to prioritize product features under time constraints.

What's next for MindDiffuser

The next steps for MindDiffuser are gaining traction and raising funds. We plan to reach out to incubators and validate our business model so that we are ready for market. With funding from angel investors, we plan to train our models on larger datasets, invest in better hardware, and build more features such as real-time insights and detecting user moods for deeper personalization. We also plan to reach out to educational institutions for research partnership opportunities and to collaborate with the marketing industry. Our go-to-market strategy begins with awareness building in Quarter 1 and targets full market entry by Quarter 3.

Built With

React, Django, SQLite, REST frameworks, MNE, OpenBCI Cyton, Kling AI