Inspiration

AnatoVerse was created to bridge the gap between advanced medical imaging and immersive technology. We envisioned a platform that could enhance medical education and diagnostic insight while making anatomical exploration interactive and intuitive. The inspiration came from witnessing how difficult it is to understand complex 3D anatomical structures through traditional 2D methods, such as MRI slices of the brain; we recognized the potential of VR to revolutionize how we visualize and interact with the human body.

What it does

AnatoVerse transforms medical imaging data of the brain, such as MRI scans, into interactive 3D models. Specifically, it:

  • Segments brain anatomical structures: Uses a 3D UNet to segment brain tumors from MRI scans
  • Generates 3D meshes: Applies the marching cubes algorithm to extract detailed 3D representations of the tumor in the brain, along with the rest of the brain anatomy
  • Integrates into VR: Imports these 3D models into Unity, where users can interact with models of the brain, heart, eye, lung, liver, and stomach. Users can pinch, drag, zoom, and move the models around their VR space for an immersive educational or diagnostic experience
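The mask-to-mesh step above can be sketched roughly as follows. This is a minimal illustration using scikit-image's `marching_cubes` on a synthetic spherical "tumor" mask; the function names, threshold, and OBJ export are assumptions for illustration, not the actual AnatoVerse code:

```python
import numpy as np
from skimage import measure

def mask_to_mesh(mask: np.ndarray, level: float = 0.5):
    """Run marching cubes on a binary segmentation mask (D, H, W)
    and return vertices and triangle faces for the isosurface."""
    verts, faces, normals, values = measure.marching_cubes(
        mask.astype(np.float32), level=level
    )
    return verts, faces

def write_obj(path: str, verts: np.ndarray, faces: np.ndarray) -> None:
    """Write the mesh as a Wavefront OBJ file, a format Unity can import.
    OBJ face indices are 1-based."""
    with open(path, "w") as f:
        for v in verts:
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for tri in faces:
            f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")

# Example: extract a mesh from a synthetic spherical mask in a 32^3 volume
zz, yy, xx = np.mgrid[:32, :32, :32]
mask = ((zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2) < 10 ** 2
verts, faces = mask_to_mesh(mask)
```

In the real pipeline the mask would come from the 3D UNet's prediction rather than a synthetic sphere, and `write_obj` (or an equivalent exporter) would produce the asset imported into Unity.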

How we built it

Our development process combined deep learning techniques with VR technology:

  • Data Preparation & Preprocessing: We handled large 3D MRI datasets by reducing the slice range and normalizing data, ensuring memory efficiency.
  • Deep Learning Segmentation: We implemented a 3D UNet in PyTorch, using mixed-precision training to segment brain tumors effectively.
  • Mesh Extraction: Post-segmentation, we employed the marching cubes algorithm from scikit-image to extract a detailed 3D mesh of the tumor.
  • VR Integration: The 3D models were then imported into Unity. We developed interactive functionalities such as pinch drag, pinch zoom, and pinch move, enabling users to intuitively manipulate the anatomical models in a VR environment.
  • Visualization: Tools like PyVista and Matplotlib were used for initial visualizations before final integration into the VR space.
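The data-preparation step above (slice-range reduction plus normalization) might look roughly like this. The slice bounds, volume dimensions, and function name are illustrative placeholders, not our exact code:

```python
import numpy as np

def preprocess_volume(vol: np.ndarray, z_start: int, z_end: int) -> np.ndarray:
    """Keep only a sub-range of axial slices from a 3D MRI volume (D, H, W)
    and z-score normalize the result. Reducing the slice range cuts memory
    use; normalization stabilizes training."""
    sub = vol[z_start:z_end].astype(np.float32)  # retain informative slices only
    mean, std = sub.mean(), sub.std()
    return (sub - mean) / (std + 1e-8)           # guard against divide-by-zero

# Example with illustrative brain-MRI-like dimensions
vol = np.random.rand(155, 240, 240)
x = preprocess_volume(vol, 40, 120)
# x.shape == (80, 240, 240); x has mean ~0 and std ~1
```

The normalized, reduced volumes then feed into the 3D UNet for segmentation.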

Challenges we ran into

Throughout the project, we encountered several challenges:

  • Memory Management: Handling high-dimensional 3D MRI data required innovative strategies such as slice selection to avoid memory bottlenecks.
  • Accurate Segmentation: Achieving precise segmentation with the training data we found was challenging, necessitating careful model tuning.
  • Mesh Optimization: Extracting detailed 3D meshes without sacrificing vital anatomical details was a delicate balance.
  • VR Integration: Merging deep learning outputs with real-time VR interactions in Unity posed integration challenges, especially ensuring smooth performance and intuitive user gestures.
  • User Interaction: Designing natural and responsive pinch gestures in VR to manipulate complex 3D models required extensive testing and iteration.
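As a back-of-envelope illustration of why slice selection matters for memory (the dimensions and dtype here are assumptions, not measurements from our datasets):

```python
# Memory footprint of one float32 MRI volume, before and after
# reducing the slice range from 155 axial slices to 80.
bytes_per_voxel = 4  # float32
full = 155 * 240 * 240 * bytes_per_voxel
reduced = 80 * 240 * 240 * bytes_per_voxel
print(f"{full / 1e6:.1f} MB vs {reduced / 1e6:.1f} MB")  # prints "35.7 MB vs 18.4 MB"
```

Savings at this scale compound quickly once batching and intermediate network activations are taken into account.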

Accomplishments that we're proud of

We’re extremely proud of several key achievements:

  • End-to-End Pipeline: Successfully building a comprehensive pipeline—from data preprocessing and segmentation to 3D mesh generation and VR integration.
  • Efficient Model Design: Creating a 3D UNet model that performs accurate brain tumor segmentation while keeping memory use low through slice-range reduction.
  • Interactive VR Experience: Developing a VR platform where users can intuitively interact with multiple anatomical models, significantly enhancing the educational and diagnostic potential.
  • Overcoming Technical Hurdles: Tackling challenges in both deep learning and VR integration, and optimizing performance without compromising on detail.

What we learned

This project has been a tremendous learning experience:

  • User-Centric Design: We learned the importance of designing user interactions that are both intuitive and responsive in a VR setting, ensuring that complex data is accessible and engaging.
  • Iterative Development: Overcoming challenges in mesh generation and VR integration emphasized the value of iterative design and constant feedback from technical and clinical perspectives.

What's next for AnatoVerse

  • Expanding Anatomical Models: Adding more detailed and diverse anatomical structures and improving the precision of existing models.
  • Real-Time Segmentation: Integrating real-time segmentation capabilities for other organs to enhance interactive diagnostics and surgical planning.
  • Augmented and Mixed Reality: Exploring AR/MR integrations to broaden the platform’s applications in medical education and clinical practice.

Built With

  • Python, PyTorch, scikit-image, PyVista, Matplotlib
  • Unity
