One of our members has relatives who live with disabilities and struggle to express themselves, and that's what inspired NeuroPlot. We wanted to build something that could turn their thoughts into something visible: a system where language becomes motion and emotion becomes art. The goal was simple but ambitious: let people see their own mind draw, not just on a screen but in real life. So we built it for everyone whose disabilities, vocal, physical, or otherwise, impede their creative expression.

We started by combining our backgrounds in AI/ML, robotics, and control systems to create a multimodal pipeline that fuses Claude semantic parsing, diffusion-based visual reasoning, and Fetch.AI orchestration into a coherent, physical sketch. From a raw auditory prompt, we generate parametric curves, optimize them into G-code, and stream them through ROS 2 to a 2-DOF drawing arm, transforming words into live, physical strokes in under 60 seconds.
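The last leg of that pipeline, turning a sampled parametric curve into streamable motion commands, can be sketched roughly like this (a minimal illustration, not our actual code; `curve_to_gcode` and its parameters are hypothetical names):

```python
import numpy as np

def curve_to_gcode(points, feed_rate=1500):
    """Convert sampled (x, y) curve points (in mm) into G-code moves:
    a rapid move (G0) to the start, then linear moves (G1) along the path."""
    lines = ["G21",  # units: millimetres
             "G90",  # absolute positioning
             f"G0 X{points[0][0]:.2f} Y{points[0][1]:.2f}"]
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed_rate}")
    return "\n".join(lines)

# Sample a simple parametric curve: a 100 mm-diameter circle.
t = np.linspace(0, 2 * np.pi, 64)
pts = 50 * np.column_stack([np.cos(t), np.sin(t)]) + 50
gcode = curve_to_gcode(pts)
```

In the real system these lines are streamed over a ROS 2 topic to the 2-DOF arm's controller rather than written to a file.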

Mathematically, we modeled generative trajectories as multivariate parametric manifolds $$ \mathbf{r}(t) = [x(t), y(t), \dot{x}(t), \dot{y}(t)]^\top, $$ applying Fourier–Laplace transforms to filter discontinuities in frequency space, then reprojecting via inverse spectral reconstruction for smooth continuity. We used Ramer–Douglas–Peucker decimation and TSP-style stroke ordering to minimize traversal distance, while curvature continuity constraints $$ \kappa(t) = \frac{\dot{x}\ddot{y} - \dot{y}\ddot{x}}{(\dot{x}^2 + \dot{y}^2)^{3/2}} $$ ensured differentiable, dynamically feasible motion before compiling everything into time-parameterized G-code.
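The curvature check above is easy to estimate numerically with finite differences; here is a small sketch (illustrative only, not our production feasibility filter), sanity-checked on a circle, where κ should equal 1/R:

```python
import numpy as np

def curvature(x, y, dt):
    """Estimate signed curvature kappa(t) = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
    from sampled x(t), y(t) using finite differences."""
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Sanity check: a circle of radius R has constant curvature 1/R.
t = np.linspace(0, 2 * np.pi, 2000)
R = 50.0
k = curvature(R * np.cos(t), R * np.sin(t), t[1] - t[0])
```

A trajectory segment whose |κ| exceeds what the arm's joint accelerations can track is a candidate for re-smoothing before G-code compilation.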

Now, the build process…was an actual movie. We spent hours going across San Francisco, Berkeley, and the East Bay hunting for a working 3D printer, and just as we found one, our car got towed. We were about this close to pivoting to some boring B2B SaaS like everyone else, but we knew we wanted to push through and make something real. After paying $700 at the impound, we discovered a hackerspace in SF that saved us, though we still had to hotspot from the car to run our ROS nodes because the Wi-Fi kept cutting out.

Through all of it, the sleep deprivation, the panic over our car, and the hours of driving out of SF for a 3D printer, we learned how to translate abstract neural outputs into tangible motion, how to merge art with engineering, and how to keep laughing when everything's collapsing. We realized that what looks like progress is mostly persistence, late nights, and three cans of Red Bull each!

Next, we plan to evolve NeuroPlot into an assistive-technology platform for creative therapy, enabling users to translate emotions into art through multimodal AI and biometrics. Working out of UC Berkeley's EE Lab, we aim to scale it toward multi-robot collaboration, real-time emotion tracking, and large-format canvases for immersive, expressive experiences.
