Inspiration
We wanted to bridge the gap between physical movement and data visualization in a fun, game-like format. Inspired by fitness apps that track your form and puzzle games that challenge your pattern-finding skills, we set out to create an experience where your own body becomes the controller. By matching your silhouette to a dynamically generated line graph, the app gamifies self-awareness and exercise.
What it does
- Real-time Pose Detection: Uses the device camera and MIT App Inventor’s Pose component to detect major joints and limb positions.
- Random Graph Generation: Creates a 5–7 point line graph each session, with points placed at varying X/Y coordinates.
- Alignment Game: Guides the user through each segment of the graph, prompting small adjustments (up, down, left, right) until the detected pose matches the target line within a tolerance range.
- Scoring & Feedback: Displays a running score, time taken, and visual feedback (✓ when aligned, directional arrows when not), then shows a final performance summary.
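The prompt logic described above can be sketched as follows. This is a hypothetical Python reconstruction for illustration only; the app itself implements this in App Inventor Blocks, and the tolerance value shown here is illustrative, not the app's actual setting.

```python
TOLERANCE = 20  # pixels; illustrative value, not the app's exact tolerance

def direction_prompt(joint_x, joint_y, target_x, target_y, tol=TOLERANCE):
    """Return the arrow to display, or '✓' once the joint is within tolerance."""
    dx = target_x - joint_x
    dy = target_y - joint_y
    if abs(dx) <= tol and abs(dy) <= tol:
        return "✓"
    # Prompt along the axis with the larger error first.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"  # screen Y grows downward
```

The same check runs once per processed frame, so the arrow flips to ✓ as soon as the joint enters the tolerance box around the target point.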
How we built it
- MIT App Inventor 2
  - Designed all screens (Camera view, Graph overlay, Results) in the Designer.
  - Wired up Blocks for camera control, pose events, graph data generation, and state transitions.
- Pose Component
  - Leveraged the built-in Pose detection block to receive joint coordinates each frame.
  - Normalized joint data into screen coordinates and compared against graph segment endpoints.
- Graph Logic in Blocks
  - Generated random point arrays and drew them on a Canvas using line segments.
  - Implemented a simple distance check to decide when a user’s joint line approximates the target segment.
- UI & Theming
  - Set a yellow (#FFD500) accent on a black background.
  - Used minimalist icons and clear, large text to make prompts unambiguous.
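The three core pieces above (random graph generation, coordinate normalization, and the segment distance check) can be sketched in Python for clarity. This is a reconstruction under stated assumptions, not the Blocks implementation: evenly spaced X values, the camera/canvas dimensions, and the tolerance are all illustrative.

```python
import math
import random

def generate_graph(n_min=5, n_max=7, width=320, height=480):
    """Random 5-7 point line graph. Evenly spaced X values are an
    assumption for illustration; the app places points at varying X/Y."""
    n = random.randint(n_min, n_max)
    xs = [round(i * width / (n - 1)) for i in range(n)]
    ys = [random.randint(0, height) for _ in range(n)]
    return list(zip(xs, ys))

def normalize(joint_x, joint_y, cam_w, cam_h, canvas_w, canvas_h):
    """Map camera-frame joint coordinates onto Canvas coordinates."""
    return joint_x * canvas_w / cam_w, joint_y * canvas_h / cam_h

def dist_to_segment(px, py, ax, ay, bx, by):
    """Shortest distance from point P to segment AB (the alignment check)."""
    abx, aby = bx - ax, by - ay
    seg_len_sq = abx * abx + aby * aby
    if seg_len_sq == 0:
        return math.hypot(px - ax, py - ay)
    # Project P onto AB, clamped to the segment's endpoints.
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / seg_len_sq))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))

def is_aligned(joint, segment, tol=25):
    """True when a normalized joint lies within tolerance of a graph segment."""
    (px, py), ((ax, ay), (bx, by)) = joint, segment
    return dist_to_segment(px, py, ax, ay, bx, by) <= tol
```

Drawing then reduces to iterating over consecutive point pairs from `generate_graph` and calling the Canvas line-drawing block for each; the game loop advances once `is_aligned` holds for the current segment.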
Challenges we ran into
- Pose Accuracy Variance: Lighting conditions and camera angle could throw off joint detection. We tweaked tolerance thresholds and added “re-calibration” prompts to improve reliability.
- Performance on Older Devices: Continuous pose detection plus graph rendering taxed slower phones. We optimized by reducing the frame-rate of pose checks during static segments.
- Visual Feedback Timing: Ensuring the user sees immediate, clear feedback without lag required careful ordering of Blocks and brief debounce delays.
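The frame-rate reduction described above amounts to time-gating the pose checks. The sketch below is a hypothetical Python illustration of the idea (the app does this with Blocks and timer events; the interval values are illustrative):

```python
import time

class PoseThrottle:
    """Gate how often pose frames are processed. A longer interval during
    static segments lowers the load on older devices; both interval values
    here are illustrative, not the app's exact numbers."""

    def __init__(self, active_interval=0.1, static_interval=0.5):
        self.active_interval = active_interval  # seconds between checks while moving
        self.static_interval = static_interval  # seconds between checks while holding
        self.last_check = 0.0
        self.static = False  # set True while the user holds a static segment

    def should_process(self, now=None):
        """Return True only when enough time has passed since the last check."""
        now = time.monotonic() if now is None else now
        interval = self.static_interval if self.static else self.active_interval
        if now - self.last_check >= interval:
            self.last_check = now
            return True
        return False
```

The same gating acts as the debounce mentioned for visual feedback: feedback updates only fire on frames where `should_process` returns True, so rapid jitter between aligned and misaligned states never reaches the screen.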
Accomplishments that we're proud of
- Seamless Real-Time Interaction: Successfully fused live camera input with on-screen graph overlays in a fully Blocks-only environment.
- Engaging Game Loop: Created a playback/retry cycle that keeps users motivated to beat their previous scores.
- Pure AIA Implementation: Achieved all functionality without any external extensions—just MIT App Inventor’s built-in components.
What we learned
- How to normalize and compare pose-detection coordinates against custom graphics on a Canvas.
- Best practices for managing state transitions and user prompts in a Blocks-based IDE.
- Techniques for balancing precision and forgiveness in motion-based UI challenges.
What’s next for Graphibody
- Custom Graph Modes: Let users choose difficulty levels or import their own graph patterns.
- Leaderboard & Sharing: Integrate with a cloud database to post scores and challenge friends.
- Additional Pose Games: Expand beyond line graphs to shapes (circles, triangles) and full-body silhouette matching.
- Adaptive Difficulty: Automatically adjust tolerance based on device performance and user skill over time.