Inspiration
Shopify's mission is to make commerce accessible to everyone, yet the analytics tools merchants depend on remain fundamentally visual, leaving the estimated 2.2 billion people worldwide living with vision impairment without meaningful access to their own business data (WHO, 2023). Of these, approximately 43 million are blind, and studies show that inaccessible visual interfaces are one of the most consistent barriers preventing visually impaired users from independently operating online commerce platforms (Bourne et al., 2024). A screen reader can announce a revenue number, but it cannot communicate trend, momentum, or the causal relationship between two metrics. Sonify was built to change that.
We wanted to rethink commerce analytics as an audio-native experience instead of another dashboard. Sonify was built around a simple idea: a merchant should be able to ask a question naturally, hear a polished spoken answer, and then hear the data itself through sonification. That made Backboard and ElevenLabs a natural fit: one powers the reasoning, the other makes the interface feel alive.
Sonify Dashboard maps Shopify sales data directly to stereo audio using psychoacoustic principles, making business performance something you hear rather than read. Auditory display research has demonstrated that non-speech audio can convey multivariate data relationships with accuracy comparable to visual representations (Kramer et al., 1999), and that sonification is particularly effective for communicating time-series trends and anomaly detection in continuous data streams (Hermann et al., 2011). Spatialized stereo encoding exploits the brain's innate ability to localize and separate simultaneous sound sources, a phenomenon known as the auditory scene analysis effect, enabling listeners to track two independent data channels without confusion (Bregman, 1990).
Studies specifically examining accessibility for visually impaired users found that parameter-mapped sonification allowed participants to identify data trends and correlations with response accuracy exceeding 80%, performing on par with sighted users viewing equivalent line charts (Walker & Nees, 2011). In our traffic vs revenue indicator, traffic drives the left ear as a warm organ pad with pitch rising logarithmically with order volume, mirroring how the human auditory system perceives magnitude proportionally rather than linearly (Stevens, 1957). Revenue efficiency drives the right ear as a brighter, chorused tone placed perceptually in a larger acoustic space, making the two channels feel immediately distinct even through a single speaker. The temporal lag between traffic and revenue (typically one to two days) is encoded as an actual delay, so you hear the spike in your left ear, wait, and feel the echo arrive in your right. Louder echo means better conversion; silence means the funnel is leaking. No chart-reading required.
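As a concrete sketch of the mapping described above, here is how the logarithmic pitch curve and the traffic-to-revenue lag can be expressed. The constants and helper names are illustrative, not the exact values in Sonify's audio pipeline:

```typescript
// Illustrative parameter-mapped sonification. Constants (base pitch, octave
// span, seconds per day) are example values, not Sonify's production tuning.

// Map order volume to pitch logarithmically, mirroring proportional
// magnitude perception (Stevens, 1957): each doubling of orders raises
// pitch by a fixed interval rather than a fixed number of hertz.
function ordersToFrequencyHz(orders: number, baseHz = 220, maxOctaves = 2): number {
  const clamped = Math.max(1, orders);
  const octaves = Math.min(maxOctaves, Math.log2(clamped) / 4);
  return baseHz * 2 ** octaves;
}

// One day of data becomes one beat; the funnel lag (one to two days)
// becomes an audible delay between the left and right channels.
interface DayPoint { orders: number; revenuePerOrder: number }
interface StereoEvent { timeSec: number; pan: -1 | 1; freqHz: number; gain: number }

function sonifySeries(days: DayPoint[], secPerDay = 0.5, lagDays = 1): StereoEvent[] {
  const events: StereoEvent[] = [];
  for (let i = 0; i < days.length; i++) {
    const t = i * secPerDay;
    // Left ear: traffic pad, pitch follows order volume.
    events.push({ timeSec: t, pan: -1, freqHz: ordersToFrequencyHz(days[i].orders), gain: 0.8 });
    // Right ear: the revenue "echo", delayed by the funnel lag. A louder
    // echo means better conversion; silence means the funnel is leaking.
    const gain = Math.min(1, days[i].revenuePerOrder / 100);
    events.push({
      timeSec: t + lagDays * secPerDay,
      pan: 1,
      freqHz: ordersToFrequencyHz(days[i].orders) * 1.5,
      gain,
    });
  }
  return events.sort((a, b) => a.timeSec - b.timeSec);
}
```

The event list is synthesizer-agnostic: a Web Audio or server-side renderer can realize each event with the pad and chorused timbres described above.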
What it does
Sonify is a voice-first analytics assistant for Shopify merchants. A user can ask questions like “How are we doing today?” or “What caused the spike?” and Sonify:
- interprets the request through a constrained analytics toolset
- generates a short spoken response
- renders that response with ElevenLabs
- attaches a sonified clip of the underlying time-series
- returns follow-up prompts, transcript, and tool trace

The result is a full loop: voice in, agent reasoning, spoken answer, then data-as-sound.
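Under the hood, one turn of this loop is a sequenced pipeline. The sketch below stubs the stages (the real implementations call Backboard for tool-constrained reasoning and ElevenLabs for TTS); the point is the ordering contract, spoken answer first, data clip second:

```typescript
// Hypothetical stage signatures; real stages call Backboard and ElevenLabs.
type Stage = (input: string) => Promise<string>;

interface AnalyticsTurn {
  transcript: string;       // what the user asked
  spokenAnswerUrl: string;  // TTS-rendered response audio
  sonifiedClipUrl: string;  // audio rendering of the underlying series
  followUps: string[];      // suggested next questions
  playbackQueue: string[];  // answer plays first, then the data itself
}

async function runTurn(
  question: string,
  interpret: Stage,          // constrained analytics toolset
  renderSpeech: Stage,       // TTS for the short spoken response
  renderSonification: Stage, // clip of the underlying time-series
): Promise<AnalyticsTurn> {
  const answerText = await interpret(question);
  const spokenAnswerUrl = await renderSpeech(answerText);
  const sonifiedClipUrl = await renderSonification(answerText);
  return {
    transcript: question,
    spokenAnswerUrl,
    sonifiedClipUrl,
    // Example follow-ups; the real list comes from the agent's tool trace.
    followUps: ["What caused the spike?", "How does this compare to last week?"],
    playbackQueue: [spokenAnswerUrl, sonifiedClipUrl],
  };
}
```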
How we built it
We built Sonify in our existing Shopify app and reused an earlier audio analytics prototype as the base. The core stack has three layers:
- Backboard as the orchestration layer for tool calling, memory, and response shaping
- ElevenLabs for server-side TTS, making voice a core product surface instead of a UI extra
- a custom audio pipeline for replayable sonification clips and sequenced playback

We also built a standalone top-level voice route outside Shopify's embedded iframe so microphone capture can still work when browser permissions block it in the embedded app.
Challenges we ran into
The biggest challenge was making voice work reliably inside a real Shopify embedded environment. Browser microphone permissions inside iframes are inconsistent, so we had to build a signed standalone voice path on our app domain.
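The signed standalone path reduces to a short-lived token the embedded app mints and the top-level voice route verifies before enabling microphone capture. A minimal HMAC sketch, with illustrative parameter names and payload shape rather than our exact scheme:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative signed-link scheme: the embedded Shopify app mints a
// short-lived signed URL; the standalone voice route (same app domain,
// outside the iframe) verifies it before granting the voice session.
function signVoicePath(shop: string, expiresAtMs: number, secret: string): string {
  const sig = createHmac("sha256", secret).update(`${shop}.${expiresAtMs}`).digest("hex");
  return `/voice?shop=${encodeURIComponent(shop)}&exp=${expiresAtMs}&sig=${sig}`;
}

function verifyVoiceToken(
  shop: string,
  expiresAtMs: number,
  sig: string,
  secret: string,
  nowMs: number,
): boolean {
  if (nowMs > expiresAtMs) return false; // link expired
  const expected = createHmac("sha256", secret).update(`${shop}.${expiresAtMs}`).digest("hex");
  const a = Buffer.from(sig, "hex");
  const b = Buffer.from(expected, "hex");
  // Constant-time compare; lengths must match before timingSafeEqual.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Because the token binds the shop and an expiry into the signature, the standalone route can trust the request without sharing the embedded session.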
The other challenge was balancing ambition with reliability: using Backboard and ElevenLabs in a way that felt central to the product, while keeping the demo stable and deterministic.
Accomplishments that we're proud of
We’re proud that Sonify is a real end-to-end system, not just a concept demo. We shipped:
- a working voice-first analytics assistant inside Shopify
- Backboard-based orchestration with structured tool-driven responses
- ElevenLabs-powered spoken responses served as replayable audio
- sonified time-series clips queued after the explanation
- a standalone voice flow that preserves functionality outside the iframe

The most exciting part is that sound is doing real analytical work, not just reading text aloud.
What we learned
We learned that the strongest AI experiences come from orchestration, not just generation. Backboard was most valuable as the system that coordinates tools, memory, and output shape. ElevenLabs was most valuable when treated as a first-class interface layer, not just TTS.
We also learned that real platform constraints, like browser mic permissions in embedded apps, can shape product architecture as much as the AI itself.
What's next for Sonify
Next, we want to push the parts that already make Sonify stand out:
- more Backboard-driven response planning and memory
- more expressive ElevenLabs voice modes and guided listening cues
- richer sonification for comparisons, anomalies, and product-level insights
- a true daily audio briefing for merchants

Our goal is to make analytics something merchants can ask, hear, and understand without needing to stare at a dashboard.
References
Bourne, R., et al. (2024). Trends in prevalence of blindness and vision impairment. Eye. https://www.nature.com/articles/s41433-024-02961-1
Bregman, A. S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. MIT Press.
Hermann, T., Hunt, A., & Neuhoff, J. G. (Eds.). (2011). The Sonification Handbook. Logos Publishing House.
Kramer, G., Walker, B., Bonebright, T., Cook, P., Flowers, J., Miner, N., & Neuhoff, J. (1999). Sonification report: Status of the field and research agenda. International Community for Auditory Display.
Stevens, S. S. (1957). On the psychophysical law. Psychological Review, 64(3), 153–181.
Walker, B. N., & Nees, M. A. (2011). Theory of sonification. In T. Hermann, A. Hunt, & J. G. Neuhoff (Eds.), The Sonification Handbook (pp. 9–39). Logos Publishing House.
WHO. (2023). Blindness and vision impairment. World Health Organization. https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment
Built With
- backboard
- elevenlabs
- node.js
- react
- shopify
- typescript

