What is HeartScape
HeartScape is an interactive 3D platform that transforms raw medical imaging data into fully explorable heart models. Users can rotate, zoom, slice, and compare healthy and congenital heart anatomy in real time. By turning complex scan data into intuitive visual experiences, HeartScape makes advanced cardiac visualization accessible to students, researchers, and clinicians.
Why HeartScape
Understanding congenital and structural heart diseases from 2D scans is difficult, even for trained professionals. Traditional medical imaging requires interpretation across multiple slices, which is time-consuming and unintuitive. MRI and CT scans offer a wealth of 2D data that must be pieced together in careful order, a tedious process that 3D vectorized heart models can streamline, enabling clinicians to explore cardiac anatomy as a cohesive, spatial structure rather than as isolated slices. By integrating comparative modeling between healthy and pathological hearts, distance-based difference visualization, Three.js rendering and orientation, and advanced annotation tools, we provide measurable insight into structural deviations with greater clarity and efficiency.
Our platform bridges the gap between raw imaging data and true anatomical understanding, turning complexity into clear, actionable insight.
HeartScape bridges this gap by:
- Converting medical imaging data into interactive 3D models
- Making complex anatomy easier to understand visually
- Providing AI-powered explanations for selected regions
- Supporting education, interactive note-taking, simulation-based experimental research, and patient awareness
Our goal is to make cardiac anatomy not just visible, but understandable.
How We Built It
Our Datasets and Research
We primarily used the CMR scans from the HVSMR-2.0 dataset: https://figshare.com/collections/HVSMR-2_0_A_3D_cardiovascular_MR_dataset_for_whole-heart_segmentation_in_congenital_heart_disease/7074755/2
Building a 3D Model
We started our project with the ambitious and rather artistic goal of building an interactive, segmented 3D model of a heart. After much exploration of existing data, we found the beauty in CMR scans (a type of MRI scan specific to the heart), which essentially give us a set of x-y cross sections of the heart at various depths. Using these sequential slices, we reconstructed the heart in three dimensions through segmentation and volumetric stacking. Once segmented, the contours from consecutive slices were interpolated and connected to generate continuous surfaces. By stacking and vectorizing these segmented layers, we transformed flat grayscale intensity data into a coherent 3D volumetric mesh, using Three.js on the web to color the model by segment, covering the four major vessels and four chambers of the heart.
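The slice-stacking step above can be sketched in a few lines. This is a simplified illustration, not our production pipeline: 2D label slices are stacked into a 3D volume, and a segment's surface is found as the set of labeled voxels touching the background (a crude stand-in for marching cubes, which would fit an actual triangle surface to that boundary).

```python
import numpy as np

def stack_slices(slices):
    """Stack per-depth 2D segmentation slices into a 3D label volume (Z, Y, X)."""
    return np.stack(slices, axis=0)

def surface_voxels(volume, label):
    """Boundary voxels of one segment: labeled voxels with at least one
    6-connected neighbor outside the segment. A simplified stand-in for
    marching cubes, which would triangulate this boundary into a mesh."""
    mask = (volume == label)
    padded = np.pad(mask, 1, constant_values=False)
    interior = np.ones_like(mask)
    for axis in range(3):
        for shift in (1, -1):
            # neighbor along +/- axis, cropped back to the original grid
            interior &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return mask & ~interior

# Toy example: a 2x2x2 block of label 1 centered in an empty 4x4x4 grid.
slices = [np.zeros((4, 4), dtype=int) for _ in range(4)]
for z in (1, 2):
    slices[z][1:3, 1:3] = 1
vol = stack_slices(slices)
shell = surface_voxels(vol, 1)  # every voxel of a 2x2x2 block is on its surface
```

In the toy volume all eight labeled voxels border the background, so the "shell" equals the whole segment; on a real CMR label volume, interior voxels drop out and only the anatomical surface remains.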
The Frontend
The frontend is a web-based application built with Three.js. It renders interactive 3D heart models directly in the browser, allowing users to:
- Rotate and zoom into the heart
- Slice through anatomical sections using a slider
- Compare healthy and diseased structures side by side
- Select specific regions for deeper analysis
- Annotate and take notes on the 3D models by inserting comment pins, sticky notes, and even 3D drawings on the model itself.
The Backend
The backend processes HVSMR medical imaging data and converts it into 3D mesh models. We built a pipeline that:
- Parses segmentation data
- Extracts anatomical structures
- Generates mesh geometry
- Optimizes models for real-time browser rendering
This allows raw medical data to become lightweight, interactive 3D assets.
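The last step of the pipeline, producing a lightweight browser-ready asset, can be illustrated with a minimal Wavefront OBJ writer; OBJ is one of the text formats Three.js can load (via OBJLoader). Our actual export format and optimization passes may differ; this only shows the mesh-to-asset idea.

```python
def to_obj(vertices, faces):
    """Serialize a triangle mesh to Wavefront OBJ text.
    `vertices`: list of (x, y, z) tuples; `faces`: list of (i, j, k)
    zero-based vertex indices."""
    lines = ["v {:.6f} {:.6f} {:.6f}".format(*v) for v in vertices]
    # OBJ face indices are 1-based
    lines += ["f {} {} {}".format(i + 1, j + 1, k + 1) for i, j, k in faces]
    return "\n".join(lines) + "\n"

# A single triangle as a minimal example.
obj_text = to_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```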
Our Use of Sphinx.AI
Using Sphinx.AI, we generated interactive Python notebooks that build heatmaps of point-cloud deviation between simulated conditions and a normal heart. These notebooks can also align and visualize those differences in Plotly for quick inspection. They compute nearest-neighbor distance fields after rigid ICP alignment and export both interactive plots and file artifacts for downstream use. Powered by Sphinx.AI, the workflow turns raw cardiac meshes into interpretable spatial heatmaps with minimal friction.
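The nearest-neighbor distance field at the core of those notebooks can be sketched with SciPy's KD-tree; this sketch assumes the rigid ICP alignment has already been applied, and these per-point distances are what the heatmaps color by.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_map(source_pts, reference_pts):
    """For each point of the (already ICP-aligned) simulated-condition cloud,
    return the distance to its nearest neighbor in the normal-heart cloud."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(source_pts)
    return dists

# Toy clouds: reference points at the unit-square corners,
# source shifted by 0.1 along x, so every deviation is exactly 0.1.
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
src = ref + np.array([0.1, 0.0, 0.0])
d = deviation_map(src, ref)  # array of four 0.1 values
```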
Our Use of Actian VectorAI
HeartScape uses Actian VectorAI as the core retrieval engine for structural patient matching and soft condition attribution in congenital heart disease (CHD) analysis.
What we store
We populate six collections across three anatomical resolutions:
| Collection | Dim | Contents |
|---|---|---|
| heart_patients | 9D | One vector per patient (whole-heart) |
| condition_archetypes | 9D | Mean anatomy per condition group |
| heart_chambers | 4D | Chamber-only vectors (LV, RV, LA, RA) |
| heart_vessels | 4D | Vessel-only vectors (Aorta, PA, SVC, IVC) |
| chamber_archetypes | 4D | Chamber archetype per condition |
| vessel_archetypes | 4D | Vessel archetype per condition |
Each vector is built from real volumetric measurements in mL extracted from segmented MRI scans, not learned embeddings. Every dimension is anatomically interpretable.
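Building such a vector can be sketched as voxel counting: each dimension is a structure's volume in mL, i.e. its voxel count times the per-voxel volume. The exact nine-dimension layout is an assumption for illustration (here: the eight structures plus a total-volume dimension); our real collections may order or define them differently.

```python
import numpy as np

# Assumed, illustrative layout of the 9 interpretable dimensions:
# 4 chambers + 4 vessels + whole-heart total volume.
DIMS = ["LV", "RV", "LA", "RA", "Aorta", "PA", "SVC", "IVC", "total"]

def patient_vector(label_volume, voxel_volume_ml, label_map):
    """9D whole-heart vector of volumes in mL from a segmented MRI:
    per-label voxel count times the physical volume of one voxel."""
    vols = {name: float((label_volume == lbl).sum()) * voxel_volume_ml
            for name, lbl in label_map.items()}
    vols["total"] = sum(vols.values())
    return np.array([vols[d] for d in DIMS])

# Toy segmentation: labels 1..8, two voxels each, 0.5 mL per voxel,
# so every structure is 1.0 mL and the total is 8.0 mL.
label_map = {name: i + 1 for i, name in enumerate(DIMS[:-1])}
toy = np.repeat(np.arange(1, 9), 2).reshape(4, 4)
vec = patient_vector(toy, 0.5, label_map)
```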
Why Euclidean over cosine
Our first implementation used L2 normalization and cosine similarity. Every patient returned 0.999 similarity. Cosine only measures vector direction; a heart with every chamber twice the normal size looks identical to a normal heart because the ratios are the same. We switched to z-score standardization + Euclidean distance (DistanceMetric.EUCLIDEAN), which correctly captures magnitude differences: a distance of 2.5 means this patient's anatomy is 2.5 joint standard deviations from the nearest known case.
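The failure mode described above is easy to reproduce: a uniformly dilated heart has the same vector direction as a normal one, so cosine similarity is exactly 1, while Euclidean distance on the raw (or z-scored) volumes exposes the difference. The volume numbers below are toy values, not dataset measurements.

```python
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A normal heart vs. one with every structure twice the size (toy mL values).
normal = np.array([120.0, 110.0, 55.0, 50.0, 30.0, 25.0, 12.0, 10.0, 412.0])
dilated = 2 * normal

cos = cosine_sim(normal, dilated)       # direction identical: similarity is 1
euc = np.linalg.norm(normal - dilated)  # magnitude difference is captured
```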
Multi-resolution query
On every patient upload, we run six searches in parallel (whole-heart, chamber-only, and vessel-only, each against both the patient and archetype collections). This lets us distinguish a patient whose chambers are abnormal from one whose vessels are abnormal, whereas a single global search collapses them.
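The fan-out can be sketched with a thread pool over a toy in-memory Euclidean index standing in for the VectorAI collections (the real client API is not shown here; `nearest` is a hypothetical stand-in).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def nearest(query, collection):
    """Toy stand-in for one Euclidean vector search over a dict of id -> vector.
    Returns (best_id, distance)."""
    best = min(collection, key=lambda k: np.linalg.norm(query - collection[k]))
    return best, float(np.linalg.norm(query - collection[best]))

# One (query, collection) pair per search; a real upload derives the 9D
# whole-heart query and the 4D chamber/vessel sub-queries from one patient.
searches = {
    "whole_patients":     (np.zeros(9), {"p1": np.zeros(9), "p2": np.ones(9)}),
    "whole_archetypes":   (np.zeros(9), {"TOF": np.ones(9) * 0.5}),
    "chamber_patients":   (np.zeros(4), {"p1": np.ones(4)}),
    "chamber_archetypes": (np.zeros(4), {"TOF": np.ones(4)}),
    "vessel_patients":    (np.zeros(4), {"p1": np.ones(4)}),
    "vessel_archetypes":  (np.zeros(4), {"TOF": np.ones(4)}),
}
with ThreadPoolExecutor(max_workers=6) as pool:
    futures = {name: pool.submit(nearest, q, coll)
               for name, (q, coll) in searches.items()}
    results = {name: f.result() for name, f in futures.items()}
```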
Soft condition attribution
Archetype collections store the mean anatomy of every CHD condition group. Query distances against condition_archetypes are converted to attribution scores using softmax over negative Euclidean distances with an adaptive temperature:
```python
import numpy as np
from scipy.special import softmax

# dists: Euclidean distances from the query to each condition archetype
spread = dists.max() - dists.min()
temp = min(3.0 / max(spread, 0.05), 6.0)
scores = softmax(-dists * temp)
```
The Gemini Lens
We integrated Gemini 2.5 to power an intelligent “Lens” feature. Users can drag and select a region of the 3D heart, and the system generates contextual explanations about the selected structure.
The Lens feature:
- Captures the selected segment
- Sends contextual data to the AI
- Returns educational insights about anatomy or disease
- Displays results in a non-intrusive overlay
This creates a Google Lens–style experience for cardiac anatomy.
Challenges We Ran Into
Making Sense of the HVSMR Data
We spent significant time understanding the HVSMR dataset structure. The data required deep analysis to correctly interpret segmentation labels, coordinate systems, and anatomical regions. Transforming raw voxel data into meaningful 3D geometry was one of the most technically demanding parts of the project.
Creating 3D Models of Diseased Hearts
While healthy heart models are widely available online, high-quality 3D models of congenital or diseased hearts are extremely rare. We couldn’t rely on prebuilt assets, so we had to generate accurate models directly from medical imaging data.
Accomplishments We’re Proud Of
Generating Diseased Heart Models from HVSMR Data
We successfully built a pipeline that converts HVSMR medical imaging data into accurate 3D models of diseased hearts. This allowed us to visualize congenital conditions that are rarely represented in publicly available 3D formats.
Interactive AI-Assisted Exploration
The integration of AI-powered contextual explanations elevates HeartScape beyond visualization—it becomes an intelligent learning tool.
What We Learned
- Medical data is powerful but complex—clean preprocessing is critical.
- Visualization dramatically improves comprehension of anatomical structures.
- AI becomes significantly more valuable when grounded in specific visual context.
- Building from raw medical datasets requires both technical skill and domain understanding.
What’s Next for HeartScape
Our vision is to expand this technology beyond the heart. We aim to:
- Extend support to other organs
- Visualize a wider range of diseases
- Improve slicing and comparison tools
- Integrate structured medical knowledge bases for more accurate AI explanations
- Explore applications in medical education and surgical planning
Ultimately, we want HeartScape to become a comprehensive interactive platform for exploring human anatomy in both healthy and pathological states.